WorldWideScience

Sample records for two-point lod scores

  1. The lod score method.

    Science.gov (United States)

    Rice, J P; Saccone, N L; Corbett, J

    2001-01-01

    The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
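The quantity at the heart of the method is simple to state: for a recombination fraction θ, the LOD score is the base-10 logarithm of the likelihood ratio L(θ)/L(0.5), and Morton's sequential criteria declare linkage at LOD ≥ 3 and exclusion at LOD ≤ -2. As a minimal illustration (not the full pedigree-likelihood machinery the method requires in practice), the following sketch computes the two-point LOD curve for the phase-known case of r recombinants among n informative meioses:

```python
import math

def lod(theta, r, n):
    """Two-point LOD score for r recombinants in n informative,
    phase-known meioses: log10 of L(theta) / L(1/2)."""
    if theta == 0.0 and r > 0:
        return float("-inf")  # a single recombinant rules out theta = 0
    return math.log10((theta ** r) * ((1.0 - theta) ** (n - r)) / 0.5 ** n)

def max_lod(r, n, grid=1000):
    """Maximise the LOD over a grid of recombination fractions in [0, 0.5];
    returns (max LOD, maximising theta)."""
    thetas = [0.5 * i / grid for i in range(grid + 1)]
    return max((lod(t, r, n), t) for t in thetas)

# Example: 10 informative meioses, no recombinants.
best_score, best_theta = max_lod(0, 10)  # maximum at theta = 0
```

With no recombinants in 10 meioses the maximum sits at θ = 0 with LOD = 10·log10(2) ≈ 3.01, just crossing the traditional significance threshold.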

  2. LOD score exclusion analyses for candidate disease susceptibility genes using case-parents design

    Institute of Scientific and Technical Information of China (English)

    DENG Hongwen; GAO Guimin

    2006-01-01

The focus of almost all association studies of candidate genes is to test for their importance. We recently developed a LOD score approach that can be used to test against the importance of candidate genes for complex diseases and quantitative traits in random samples. As a complementary method to regular association analyses, our LOD score approach is powerful but, though more conservative, is still affected by population admixture. To control the confounding effect of population heterogeneity, we develop here a LOD score exclusion analysis using the case-parents design, the basic design of the transmission disequilibrium test (TDT) approach, which is immune to population admixture. In the analysis, specific genetic effects and inheritance models at candidate genes can be analyzed, and if a LOD score is ≤ -2.0, the locus can be excluded from having an effect larger than that specified. Simulations show that this approach has reasonable power to exclude a candidate gene having small genetic effects if it is not a disease susceptibility locus (DSL), with sample sizes often employed in TDT studies. As with association analyses using the TDT in nuclear families, our exclusion analyses are generally not affected by population admixture. The exclusion analyses may be implemented to rule out candidate genes with no or minor genetic effects as supplemental analyses for the TDT. The utility of the approach is illustrated with an application testing the importance of the vitamin D receptor (VDR) gene underlying differential risk of osteoporosis.

  3. Hereditary spastic paraplegia: LOD-score considerations for confirmation of linkage in a heterogeneous trait

    Energy Technology Data Exchange (ETDEWEB)

Dube, M.P.; Kibar, Z.; Rouleau, G.A. [McGill Univ., Quebec (Canada)] [and others]

    1997-03-01

    Hereditary spastic paraplegia (HSP) is a degenerative disorder of the motor system, defined by progressive weakness and spasticity of the lower limbs. HSP may be inherited as an autosomal dominant (AD), autosomal recessive, or an X-linked trait. AD HSP is genetically heterogeneous, and three loci have been identified so far: SPG3 maps to chromosome 14q, SPG4 to 2p, and SPG4a to 15q. We have undertaken linkage analysis with 21 uncomplicated AD families to the three AD HSP loci. We report significant linkage for three of our families to the SPG4 locus and exclude several families by multipoint linkage. We used linkage information from several different research teams to evaluate the statistical probability of linkage to the SPG4 locus for uncomplicated AD HSP families and established the critical LOD-score value necessary for confirmation of linkage to the SPG4 locus from Bayesian statistics. In addition, we calculated the empirical P-values for the LOD scores obtained with all families with computer simulation methods. Power to detect significant linkage, as well as type I error probabilities, were evaluated. This combined analytical approach permitted conclusive linkage analyses on small to medium-size families, under the restrictions of genetic heterogeneity. 19 refs., 1 fig., 1 tab.

  4. Pragmatic Use of LOD - a Modular Approach

    DEFF Research Database (Denmark)

    Treldal, Niels; Vestergaard, Flemming; Karlshøj, Jan

    The concept of Level of Development (LOD) is a simple approach to specifying the requirements for the content of object-oriented models in a Building Information Modelling process. The concept has been implemented in many national and organization-specific variations and, in recent years, several...... and reliability of deliveries along with use-case-specific information requirements provides a pragmatic approach for a LOD concept. The proposed solution combines LOD requirement definitions with Information Delivery Manual-based use case requirements to match the specific needs identified for a LOD framework...

  5. LOD wars: The affected-sib-pair paradigm strikes back!

    Energy Technology Data Exchange (ETDEWEB)

    Farrall, M. [Wellcome Trust Centre for Human Genetics, Oxford (United Kingdom)

    1997-03-01

    In a recent letter, Greenberg et al. aired their concerns that the affected-sib-pair (ASP) approach was becoming excessively popular, owing to misconceptions and ignorance of the properties and limitations of both the ASP and the classic LOD-score approaches. As an enthusiast of using the ASP approach to map susceptibility genes for multifactorial traits, I would like to contribute a few comments and explanatory notes in defense of the ASP paradigm. 18 refs.

  6. Two-Point Fuzzy Ostrowski Type Inequalities

    Directory of Open Access Journals (Sweden)

    Muhammad Amer Latif

    2013-08-01

Full Text Available Two-point fuzzy Ostrowski type inequalities are proved for fuzzy Hölder and fuzzy differentiable functions. The two-point fuzzy Ostrowski type inequality for M-Lipschitzian mappings is also obtained. It is proved that only the two-point fuzzy Ostrowski type inequality for M-Lipschitzian mappings is sharp and, as a consequence, it generalizes the two-point fuzzy Ostrowski type inequalities obtained for fuzzy differentiable functions.

  7. LOD 1 VS. LOD 2 - Preliminary Investigations Into Differences in Mobile Rendering Performance

    Science.gov (United States)

    Ellul, C.; Altenbuchner, J.

    2013-09-01

    The increasing availability, size and detail of 3D City Model datasets has led to a challenge when rendering such data on mobile devices. Understanding the limitations to the usability of such models on these devices is particularly important given the broadening range of applications - such as pollution or noise modelling, tourism, planning, solar potential - for which these datasets and resulting visualisations can be utilized. Much 3D City Model data is created by extrusion of 2D topographic datasets, resulting in what is known as Level of Detail (LoD) 1 buildings - with flat roofs. However, in the UK the National Mapping Agency (the Ordnance Survey, OS) is now releasing test datasets to Level of Detail (LoD) 2 - i.e. including roof structures. These datasets are designed to integrate with the LoD 1 datasets provided by the OS, and provide additional detail in particular on larger buildings and in town centres. The availability of such integrated datasets at two different Levels of Detail permits investigation into the impact of the additional roof structures (and hence the display of a more realistic 3D City Model) on rendering performance on a mobile device. This paper describes preliminary work carried out to investigate this issue, for the test area of the city of Sheffield (in the UK Midlands). The data is stored in a 3D spatial database as triangles and then extracted and served as a web-based data stream which is queried by an App developed on the mobile device (using the Android environment, Java and OpenGL for graphics). Initial tests have been carried out on two dataset sizes, for the city centre and a larger area, rendering the data onto a tablet to compare results. Results of 52 seconds for rendering LoD 1 data, and 72 seconds for LoD 1 mixed with LoD 2 data, show that the impact of LoD 2 is significant.

  8. 3D Urban Visualization with LOD Techniques

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

In 3D urban visualization, the large data volumes associated with buildings are a major factor limiting delivery and browsing speed in a web-based computer system. This paper proposes a new approach based on the level of detail (LOD) technique developed for 3D visualization in computer graphics. The key idea of the LOD technique is to generalize the details of object surfaces without perceptible loss of detail in the delivery and display of objects. The technique has been used successfully to visualize one or a few objects in film and other industries. However, applying it to 3D urban visualization requires an effective generalization method for urban buildings. Conventional two-dimensional (2D) generalization methods at different scales provide a good generalization reference for 3D urban visualization, yet it remains difficult to determine when and where to retrieve data for displaying buildings. To solve this problem, this paper defines an imaging scale point and an imaging scale region for judging when and where to retrieve the right data for visualization. The results show that the average response time of view transformations is much decreased.

  9. Two Point Pade Approximants and Duality

    CERN Document Server

    Banks, Tom

    2013-01-01

    We propose the use of two point Pade approximants to find expressions valid uniformly in coupling constant for theories with both weak and strong coupling expansions. In particular, one can use these approximants in models with a strong/weak duality, when the symmetries do not determine exact expressions for some quantity.

  10. Two-point optical coherency matrix tomography.

    Science.gov (United States)

    Abouraddy, Ayman F; Kagalwala, Kumel H; Saleh, Bahaa E A

    2014-04-15

    The two-point coherence of an electromagnetic field is represented completely by a 4×4 coherency matrix G that encodes the joint polarization-spatial-field correlations. Here, we describe a systematic sequence of cascaded spatial and polarization projective measurements that are sufficient to tomographically reconstruct G--a task that, to the best of our knowledge, has not yet been realized. Our approach benefits from the correspondence between this reconstruction problem in classical optics and that of quantum state tomography for two-photon states in quantum optics. Identifying G uniquely determines all the measurable correlation characteristics of the field and, thus, lifts ambiguities that arise from reliance on traditional scalar descriptors, especially when the field's degrees of freedom are correlated or classically entangled.

  11. Enhanced LOD Concepts for Virtual 3d City Models

    Science.gov (United States)

    Benner, J.; Geiger, A.; Gröger, G.; Häfele, K.-H.; Löwner, M.-O.

    2013-09-01

Virtual 3D city models contain digital three-dimensional representations of city objects such as buildings, streets or technical infrastructure. Because the size and complexity of these models grow continuously, a Level of Detail (LoD) concept is indispensable: one that effectively supports partitioning a complete model into alternative models of different complexity, and that provides metadata addressing the informational content, complexity and quality of each alternative model. After a short overview of various LoD concepts, this paper discusses the existing LoD concept of the CityGML standard for 3D city models and identifies a number of deficits. Based on this analysis, an alternative concept is developed and illustrated with several examples. It differentiates, first, between a Geometric Level of Detail (GLoD) and a Semantic Level of Detail (SLoD), and second, between the interior of a building and its exterior shell. Finally, a possible implementation of the new concept is demonstrated by means of a UML model.

  12. Distribution of Errors Reported by LOD2 LODStats Project [Dataset

    NARCIS (Netherlands)

    Hoekstra, R.; Groth, P.

    2013-01-01

    These files can be used to plot a distribution of error types based on the LOD2 LODStats analysis of linked data published through the datahub.io. The statistics show that many errors reported in these statistics are the result of HTTP problems (40x and 50x codes) unknown responses and connection do

  13. Two-point orientation discrimination versus the traditional two-point test for tactile spatial acuity assessment.

    Science.gov (United States)

    Tong, Jonathan; Mao, Oliver; Goldreich, Daniel

    2013-01-01

    Two-point discrimination is widely used to measure tactile spatial acuity. The validity of the two-point threshold as a spatial acuity measure rests on the assumption that two points can be distinguished from one only when the two points are sufficiently separated to evoke spatially distinguishable foci of neural activity. However, some previous research has challenged this view, suggesting instead that two-point task performance benefits from an unintended non-spatial cue, allowing spuriously good performance at small tip separations. We compared the traditional two-point task to an equally convenient alternative task in which participants attempt to discern the orientation (vertical or horizontal) of two points of contact. We used precision digital readout calipers to administer two-interval forced-choice versions of both tasks to 24 neurologically healthy adults, on the fingertip, finger base, palm, and forearm. We used Bayesian adaptive testing to estimate the participants' psychometric functions on the two tasks. Traditional two-point performance remained significantly above chance levels even at zero point separation. In contrast, two-point orientation discrimination approached chance as point separation approached zero, as expected for a valid measure of tactile spatial acuity. Traditional two-point performance was so inflated at small point separations that 75%-correct thresholds could be determined on all tested sites for fewer than half of participants. The 95%-correct thresholds on the two tasks were similar, and correlated with receptive field spacing. In keeping with previous critiques, we conclude that the traditional two-point task provides an unintended non-spatial cue, resulting in spuriously good performance at small spatial separations. Unlike two-point discrimination, two-point orientation discrimination rigorously measures tactile spatial acuity. We recommend the use of two-point orientation discrimination for neurological assessment.
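The 75%- and 95%-correct thresholds above are read off an estimated psychometric function. The study used Bayesian adaptive testing to fit these functions; as a much simpler illustration of the underlying idea (not the authors' procedure), the sketch below assumes a hypothetical Weibull-shaped two-interval forced-choice psychometric function, with chance performance at 0.5, and inverts it analytically to obtain a threshold at any criterion level. The parameter values are made up for the example:

```python
import math

def p_correct(x, alpha, beta):
    """Hypothetical 2IFC Weibull psychometric function: proportion correct
    at point separation x; chance level 0.5, asymptote 1.0."""
    return 0.5 + 0.5 * (1.0 - math.exp(-((x / alpha) ** beta)))

def threshold(p, alpha, beta):
    """Separation at which p_correct reaches criterion p (analytic inverse):
    solve 0.5 + 0.5*(1 - exp(-(x/alpha)^beta)) = p for x."""
    return alpha * (-math.log(2.0 * (1.0 - p))) ** (1.0 / beta)

# Made-up parameters: scale alpha = 2 mm, slope beta = 1.5.
t75 = threshold(0.75, 2.0, 1.5)  # 75%-correct threshold
t95 = threshold(0.95, 2.0, 1.5)  # 95%-correct threshold, necessarily larger
```

The point of the paper is that for the traditional two-point task this function stays above chance even at zero separation, so a 75% criterion may be unreachable; the orientation task falls to 0.5 at zero separation, as the form above does.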

  14. Two-point orientation discrimination versus the traditional two-point test for tactile spatial acuity assessment

    Directory of Open Access Journals (Sweden)

    Jonathan eTong

    2013-09-01

Full Text Available Two-point discrimination is widely used to measure tactile spatial acuity. The validity of the two-point threshold as a spatial acuity measure rests on the assumption that two points can be distinguished from one only when the two points are sufficiently separated to evoke spatially distinguishable foci of neural activity. However, some previous research has challenged this view, suggesting instead that two-point task performance benefits from an unintended non-spatial cue, allowing spuriously good performance at small tip separations. We compared the traditional two-point task to an equally convenient alternative task in which participants attempt to discern the orientation (vertical or horizontal) of two points of contact. We used precision digital readout calipers to administer two-interval forced-choice versions of both tasks to 24 neurologically healthy adults, on the fingertip, finger base, palm, and forearm. We used Bayesian adaptive testing to estimate the participants’ psychometric functions on the two tasks. Traditional two-point performance remained significantly above chance levels even at zero point separation. In contrast, two-point orientation discrimination approached chance as point separation approached zero, as expected for a valid measure of tactile spatial acuity. Traditional two-point performance was so inflated at small point separations that 75%-correct thresholds could be determined on all tested sites for fewer than half of participants. The 95%-correct thresholds on the two tasks were similar, and correlated with receptive field spacing. In keeping with previous critiques, we conclude that the traditional two-point task provides an unintended non-spatial cue, resulting in spuriously good performance at small spatial separations. Unlike two-point discrimination, two-point orientation discrimination rigorously measures tactile spatial acuity. We recommend the use of two-point orientation discrimination for neurological

  15. Apgar Scores

    Science.gov (United States)

Apgar Scores: As soon as your ... the syringe, but is blue; her one minute Apgar score would be 8, two points off because she ...

  16. LOD First Estimates In 7406 SLR San Juan Argentina Station

    Science.gov (United States)

    Pacheco, A.; Podestá, R.; Yin, Z.; Adarvez, S.; Liu, W.; Zhao, L.; Alvis Rojas, H.; Actis, E.; Quinteros, J.; Alacoria, J.

    2015-10-01

In this paper we show results derived from satellite observations at the San Juan SLR station of the Felix Aguilar Astronomical Observatory (OAFA). The Satellite Laser Ranging (SLR) telescope was installed in early 2006, in accordance with an international cooperation agreement between the San Juan National University (UNSJ) and the Chinese Academy of Sciences (CAS). The SLR has been in successful operation since 2011, using NAOC SLR software for the data processing. This program was designed to calculate satellite orbits and station coordinates; in this work, however, it was used to determine LOD (Length Of Day) time series and the Earth's rotation speed.

  17. Holographic Two-Point Functions in Conformal Gravity

    CERN Document Server

    Ghodsi, Ahmad; Naseh, Ali

    2014-01-01

In this paper we compute the holographic two-point functions of four-dimensional conformal gravity. Specifically, we calculate the two-point functions for the Energy-Momentum (EM) and Partially Massless Response (PMR) operators, which have been identified as the two response functions for two independent sources in the dual CFT. The correlation function of the EM with the PMR tensor turns out to be zero, as expected from conformal symmetry. The two-point function of EM is that of a transverse, traceless tensor, while the two-point function of PMR, a traceless operator, contains two distinct parts: one for a transverse-traceless tensor operator and another for a vector field, both of which fulfill the criteria of a CFT. We also discuss the unitarity of the theory.

  18. Computational complexity for the two-point block method

    Science.gov (United States)

    See, Phang Pei; Majid, Zanariah Abdul

    2014-12-01

In this paper, we discuss and compare the computational complexity of the two-point block method and the one-point method of Adams type. The computational complexity of both methods is determined from the number of arithmetic operations performed and expressed in O(n). The two methods are used to solve two-point second-order boundary value problems directly, implemented with a variable step size strategy adapted to the multiple shooting technique via a three-step iterative method. Two numerical examples are tested. The results show that the computational complexity of these methods reliably estimates their cost in terms of execution time. We conclude that the two-point block method has better computational performance than the one-point method when the total number of steps is large.

  19. Relation Between Equatorial Oceanic Activities and LOD Changes

    Institute of Scientific and Technical Information of China (English)

    郑大伟; 陈刚

    1994-01-01

The time series of the length of day (LOD) and the observed Pacific sea level during 1962.0-1990.0 are used to study the relation between Earth rotation and equatorial oceanic activities. The results show that (i) the sea level rose at an average rate of about 1.75±0.01 mm/a over the past 30 years; (ii) there are large-scale eastward and westward water motions in the upper equatorial Pacific zone, which, according to a dynamical analysis of the angular momentum of the large-scale sea-water motion in the Pacific Ocean about the Earth's rotation axis, account for about 30% of the interannual change in the Earth's rotation rate; (iii) the interannual changes in Earth rotation in turn alter the distribution of water mass in the equatorial Pacific and affect the formation of ENSO events. Based on these results, we give a new model for the interaction between the equatorial ocean and Earth rotation.

  20. Two-point functions on deformed space-time

    CERN Document Server

    Trampetic, Josip

    2014-01-01

We present a review of the one-loop photon (Π) and neutrino (Σ) two-point functions in a covariant and deformed U(1) gauge theory on d-dimensional noncommutative spaces, determined by a constant antisymmetric tensor θ and by a parameter space (κ_f, κ_g), respectively. For the general fermion-photon interaction S_f(κ_f) and photon self-interaction S_g(κ_g), the closed-form results reveal two-point functions with all kinds of pathological terms: the UV divergence, quadratic UV/IR mixing terms, as well as a logarithmic IR divergent term of the type ln(μ²(θp)²). In addition, the photon loop produces new tensor structures satisfying the transversality condition by themselves. We show that the photon two-point function in four-dimensional Euclidean spacetime can be reduced to two finite terms by imposing a specific full rank of θ and setting the deformation parameters (κ_f, κ_g) = (0, 3). In this case the neutrino two-point function vanishes. Thus for a specific point (0, 3) in the para...

  1. Similarity of solution branches for two-point semilinear problems

    Directory of Open Access Journals (Sweden)

    Philip Korman

    2003-02-01

Full Text Available For semilinear autonomous two-point problems, we show that all Neumann branches and all Dirichlet branches with an odd number of interior roots have the same shape. On the other hand, Dirichlet branches with an even number of roots may look different. While this result was proved previously by Schaaf [S], our approach appears to be simpler.

  2. Total Ossiculoplasty: Advantages of Two-Point Stabilization Technique

    Directory of Open Access Journals (Sweden)

    Leonard Berenholz

    2012-01-01

Full Text Available Objective. Evaluate a porous polyethylene prosthesis with two-point stabilization in total ossiculoplasty. This approach utilizes a lateral as well as a medial graft to stabilize a total ossicular prosthesis (TOP). Study Design. Retrospective cohort review of total ossiculoplasty. Methods. All patients who underwent total ossiculoplasty during the years 2004–2007 were included in the study group. Only five patients (10%) had primary surgery, whereas 45 (90%) underwent revision surgery. Cartilage grafts covering the prosthesis laterally (Sheehy, Xomed) were used in all patients, with areolar tissue being used for medial stabilization at the stapes footplate. Follow-up examination and audiometrics were performed a mean of 8.1 months following surgery. Results. The percentage of patients closing their air-bone gap (ABG) to within 10 dB was 44%, with 66% closing their ABG to within 20 dB. The mean four-frequency hearing gain was 15.7 dB. The mean postoperative ABG was 15.7 dB. Conclusion. Audiometric results following total ossiculoplasty surgery using two-point stabilization exceeded results from the otologic literature. Proper two-point fixation with areolar tissue and stabilization utilizing cartilage were the keys to achieving a relatively high percentage of success in chronic ear disease in this sample.

  3. Theory of resistor networks: the two-point resistance

    Energy Technology Data Exchange (ETDEWEB)

    Wu, F Y [Department of Physics, Northeastern University Boston, MA 02115 (United States)

    2004-07-02

    The resistance between two arbitrary nodes in a resistor network is obtained in terms of the eigenvalues and eigenfunctions of the Laplacian matrix associated with the network. Explicit formulae for two-point resistances are deduced for regular lattices in one, two and three dimensions under various boundary conditions including that of a Moebius strip and a Klein bottle. The emphasis is on lattices of finite sizes. We also deduce summation and product identities which can be used to analyse large-size expansions in two and higher dimensions.
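The Laplacian formulation the abstract describes can be stated compactly: if λ_i are the nonzero eigenvalues of the network Laplacian and ψ_i the corresponding orthonormal eigenvectors, the resistance between nodes a and b is R_ab = Σ_i (ψ_i(a) − ψ_i(b))² / λ_i. The sketch below is a minimal numerical illustration of that formula (not the paper's closed-form lattice results), using a hypothetical helper name and a conductance matrix as input:

```python
import numpy as np

def two_point_resistance(conductance, a, b):
    """Resistance between nodes a and b of a resistor network.
    `conductance` is the symmetric matrix of edge conductances (siemens);
    uses R_ab = sum over nonzero Laplacian eigenvalues of
    (psi_i[a] - psi_i[b])^2 / lambda_i."""
    C = np.asarray(conductance, dtype=float)
    L = np.diag(C.sum(axis=1)) - C        # weighted graph Laplacian
    lam, psi = np.linalg.eigh(L)          # eigenvalues ascending, eigenvectors as columns
    r = 0.0
    for i in range(len(lam)):
        if lam[i] > 1e-12:                # skip the zero (constant) mode
            r += (psi[a, i] - psi[b, i]) ** 2 / lam[i]
    return r

# Triangle of three 1-ohm resistors: 1 ohm in parallel with 2 ohms = 2/3 ohm.
C = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
R01 = two_point_resistance(C, 0, 1)
```

The triangle check matches the elementary series-parallel answer, which is a useful sanity test before applying the formula to the larger finite lattices the paper treats.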

  4. Two-point correlation functions in inhomogeneous and anisotropic cosmologies

    Science.gov (United States)

    Marcori, Oton H.; Pereira, Thiago S.

    2017-02-01

    Two-point correlation functions are ubiquitous tools of modern cosmology, appearing in disparate topics ranging from cosmological inflation to late-time astrophysics. When the background spacetime is maximally symmetric, invariance arguments can be used to fix the functional dependence of this function as the invariant distance between any two points. In this paper we introduce a novel formalism which fixes this functional dependence directly from the isometries of the background metric, thus allowing one to quickly assess the overall features of Gaussian correlators without resorting to the full machinery of perturbation theory. As an application we construct the CMB temperature correlation function in one inhomogeneous (namely, an off-center LTB model) and two spatially flat and anisotropic (Bianchi) universes, and derive their covariance matrices in the limit of almost Friedmannian symmetry. We show how the method can be extended to arbitrary N-point correlation functions and illustrate its use by constructing three-point correlation functions in some simple geometries.

  5. Atmospheric and Oceanic Excitations to LOD Change on Quasi-biennial Time Scales

    Institute of Scientific and Technical Information of China (English)

    Li-Hua Ma; De-Chun Liao; Yan-Ben Han

    2006-01-01

We use wavelet transform to study the time series of the Earth's rotation rate (length-of-day, LOD), the axial components of atmospheric angular momentum (AAM) and oceanic angular momentum (OAM) in the period 1962-2005, and discuss the quasi-biennial oscillations (QBO) of LOD change. The results show that the QBO of LOD change varies remarkably in amplitude and phase. It was weak before 1978, then became much stronger, reaching maximum values during the strong El Niño events around 1983 and 1997. Analysis of the axial AAM indicates that the QBO signals in the axial AAM are highly consistent with the QBO of LOD change. During 1963-2003, the QBO variance in the axial AAM can explain about 99.0% of that of the LOD; in other words, almost all of the QBO signal in LOD change is excited by the axial AAM, while the weak QBO signals of the axial OAM differ considerably from those of the LOD and the axial AAM in both time-dependent characteristics and magnitude. The combined effects of the axial AAM and OAM can explain about 99.1% of the variance of the QBO in LOD change during this period.

  6. Automatic repair of CityGML LOD2 buildings using shrink-wrapping

    NARCIS (Netherlands)

    Zhao, Z.; Ledoux, H.; Stoter, J.E.

    2013-01-01

    The LoD2 building models defined in CityGML are widely used in 3D city applications. The underlying geometry for such models is a GML solid (without interior shells), whose boundary should be a closed 2-manifold. However, this condition is often violated in practice because of the way LoD2 models ar

  7. Two-point functions in (loop) quantum cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Calcagni, Gianluca; Gielen, Steffen; Oriti, Daniele, E-mail: calcagni@aei.mpg.de, E-mail: gielen@aei.mpg.de, E-mail: doriti@aei.mpg.de [Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Muehlenberg 1, D-14476 Golm (Germany)

    2011-06-21

    The path-integral formulation of quantum cosmology with a massless scalar field as a sum-over-histories of volume transitions is discussed, with particular but non-exclusive reference to loop quantum cosmology. Exploiting the analogy with the relativistic particle, we give a complete overview of the possible two-point functions, pointing out the choices involved in their definitions, deriving their vertex expansions and the composition laws they satisfy. We clarify the origin and relations of different quantities previously defined in the literature, in particular the tie between definitions using a group averaging procedure and those in a deparametrized framework. Finally, we draw some conclusions about the physics of a single quantum universe (where there exist superselection rules on positive- and negative-frequency sectors and different choices of inner product are physically equivalent) and multiverse field theories where the role of these sectors and the inner product are reinterpreted.

  8. Two-point functions in (loop) quantum cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Calcagni, Gianluca; Oriti, Daniele [Max-Planck-Institute for Gravitational Physics (Albert Einstein Institute), Am Muehlenberg 1, D-14476 Golm (Germany); Gielen, Steffen [Max-Planck-Institute for Gravitational Physics (Albert Einstein Institute), Am Muehlenberg 1, D-14476 Golm (Germany); DAMTP, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA (United Kingdom)

    2011-07-01

    We discuss the path-integral formulation of quantum cosmology with a massless scalar field as a sum-over-histories of volume transitions, with particular but non-exclusive reference to loop quantum cosmology (LQC). Exploiting the analogy with the relativistic particle, we give a complete overview of the possible two-point functions, pointing out the choices involved in their definitions, deriving their vertex expansions and the composition laws they satisfy. We clarify the origin and relations of different quantities previously defined in the literature, in particular the tie between definitions using a group averaging procedure and those in a deparametrized framework. Finally, we draw some conclusions about the physics of a single quantum universe (where there exist superselection rules on positive- and negative-frequency sectors and different choices of inner product are physically equivalent) and multiverse field theories where the role of these sectors and the inner product are reinterpreted.

  9. Two-point Correlator Fits on HISQ Ensembles

    CERN Document Server

    Bazavov, A; Bouchard, C; DeTar, C; Du, D; El-Khadra, A X; Foley, J; Freeland, E D; Gamiz, E; Gottlieb, Steven; Heller, U M; Hetrick, J E; Kim, J; Kronfeld, A S; Laiho, J; Levkova, L; Lightman, M; Mackenzie, P B; Neil, E T; Oktay, M; Simone, J N; Sugar, R L; Toussaint, D; Van de Water, R S; Zhou, R

    2012-01-01

    We present our methods to fit the two point correlators for light, strange, and charmed pseudoscalar meson physics with the highly improved staggered quark (HISQ) action. We make use of the least-squares fit including the full covariance matrix of the correlators and including Gaussian constraints on some parameters. We fit the correlators on a variety of the HISQ ensembles. The lattice spacing ranges from 0.15 fm down to 0.06 fm. The light sea quark mass ranges from 0.2 times the strange quark mass down to the physical light quark mass. The HISQ ensembles also include lattices with different volumes and with unphysical values of the strange quark mass. We use the results from this work to obtain our preliminary results of $f_D$, $f_{D_s}$, $f_{D_s}/f_{D}$, and ratios of quark masses presented in another talk [1].

  10. A multiscale two-point flux-approximation method

    Science.gov (United States)

    Møyner, Olav; Lie, Knut-Andreas

    2014-10-01

    A large number of multiscale finite-volume methods have been developed over the past decade to compute conservative approximations to multiphase flow problems in heterogeneous porous media. In particular, several iterative and algebraic multiscale frameworks that seek to reduce the fine-scale residual towards machine precision have been presented. Common for all such methods is that they rely on a compatible primal-dual coarse partition, which makes it challenging to extend them to stratigraphic and unstructured grids. Herein, we propose a general idea for how one can formulate multiscale finite-volume methods using only a primal coarse partition. To this end, we use two key ingredients that are computed numerically: (i) elementary functions that correspond to flow solutions used in transmissibility upscaling, and (ii) partition-of-unity functions used to combine elementary functions into basis functions. We exemplify the idea by deriving a multiscale two-point flux-approximation (MsTPFA) method, which is robust with regards to strong heterogeneities in the permeability field and can easily handle general grids with unstructured fine- and coarse-scale connections. The method can easily be adapted to arbitrary levels of coarsening, and can be used both as a standalone solver and as a preconditioner. Several numerical experiments are presented to demonstrate that the MsTPFA method can be used to solve elliptic pressure problems on a wide variety of geological models in a robust and efficient manner.
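The single-scale two-point flux approximation that the MsTPFA method builds on can be sketched in a few lines. Below is a hedged 1D version with harmonic-average transmissibilities and Dirichlet boundary pressures; the function name and setup are illustrative, not taken from the paper.

```python
import numpy as np

def tpfa_1d(perm, dx, p_left, p_right):
    """Classical two-point flux approximation on a 1D grid of equal cells.

    The transmissibility of each interior face is the harmonic average of
    the adjacent half-cell transmissibilities; assembling flux balance per
    cell yields a tridiagonal system for the cell-centre pressures.
    """
    n = len(perm)
    t_half = 2.0 * perm / dx                  # half-transmissibility k / (dx/2)
    t_iface = 1.0 / (1.0 / t_half[:-1] + 1.0 / t_half[1:])  # n-1 interior faces

    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n - 1):                    # interior face between cells i, i+1
        A[i, i] += t_iface[i]
        A[i, i + 1] -= t_iface[i]
        A[i + 1, i + 1] += t_iface[i]
        A[i + 1, i] -= t_iface[i]
    # Dirichlet boundaries enter through the boundary half-transmissibilities
    A[0, 0] += t_half[0]
    b[0] += t_half[0] * p_left
    A[-1, -1] += t_half[-1]
    b[-1] += t_half[-1] * p_right
    return np.linalg.solve(A, b)

# homogeneous permeability: pressure drops linearly between the boundary values
p = tpfa_1d(np.ones(4), dx=1.0, p_left=1.0, p_right=0.0)
```

A multiscale variant would not solve the global fine-scale system directly, but combine local flow solutions into coarse basis functions, as the abstract describes.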

  11. Flow speed measurement using two-point collective light scattering

    Energy Technology Data Exchange (ETDEWEB)

    Heinemeier, N.P

    1998-09-01

    Measurements of turbulence in plasmas and fluids using the technique of collective light scattering have always been plagued by very poor spatial resolution. In 1994, a novel two-point collective light scattering system for the measurement of transport in a fusion plasma was proposed. This diagnostic was designed to greatly improve the spatial resolution without sacrificing accuracy in the velocity measurement. The system was installed at the W7-AS stellarator in Garching, Germany, in 1996, and has been operating since. This master thesis is an investigation of the possible application of this new method to the measurement of flow speeds in normal fluids, in particular air, although the results presented in this work have significance for the plasma measurements as well. The main goal of the project was the experimental verification of previous theoretical predictions. However, the theoretical considerations presented in the thesis show that the method can only be expected to work for flows that are almost laminar and shearless, which makes it of very limited practical interest. Furthermore, this result also implies that the diagnostic at W7-AS cannot be expected to give the results originally hoped for. (au) 1 tab., 51 ills., 29 refs.

  12. The putative old, nearby cluster Lodén 1 does not exist

    CERN Document Server

    Han, Eunkyu; Wright, Jason T

    2016-01-01

    Astronomers have access to precious few nearby, middle-aged benchmark star clusters. Within 500 pc, there are only NGC 752 and Ruprecht 147 (R147), at 1.5 and 3 Gyr respectively. The Database for Galactic Open Clusters (WEBDA) also lists Lodén 1 as a 2 Gyr cluster at a distance of 360 pc. If this is true, Lodén 1 could become a useful benchmark cluster. This work details our investigation of Lodén 1. We assembled archival astrometry (PPMXL) and photometry (2MASS, Tycho-2, APASS), and acquired medium resolution spectra for radial velocity measurements with the Robert Stobie Spectrograph (RSS) at the Southern African Large Telescope. We observed no sign of a cluster main-sequence turnoff or red giant branch amongst all stars in the field brighter than $J < 11$. Considering the 29 stars identified by L.O. Lodén and listed on SIMBAD as the members of Lodén 1, we found no compelling evidence of kinematic clustering in proper motion or radial velocity. Most of these candidates are A stars and...

  13. Linked open data creating knowledge out of interlinked data : results of the LOD2 project

    CERN Document Server

    Bryl, Volha; Tramp, Sebastian

    2014-01-01

    Linked Open Data (LOD) is a pragmatic approach for realizing the Semantic Web vision of making the Web a global, distributed, semantics-based information system. This book presents an overview on the results of the research project “LOD2 -- Creating Knowledge out of Interlinked Data”. LOD2 is a large-scale integrating project co-funded by the European Commission within the FP7 Information and Communication Technologies Work Program. Commencing in September 2010, this 4-year project comprised leading Linked Open Data research groups, companies, and service providers from across 11 European countries and South Korea. The aim of this project was to advance the state-of-the-art in research and development in four key areas relevant for Linked Data, namely 1. RDF data management; 2. the extraction, creation, and enrichment of structured RDF data; 3. the interlinking and fusion of Linked Data from different sources and 4. the authoring, exploration and visualization of Linked Data.

  14. An Incremental LOD Method Based on Grid and Its Application in Distributed Terrain Visualization

    Institute of Scientific and Technical Information of China (English)

    MA Zhaoting; LI Chengming; PAN Mao

    2005-01-01

    Incremental LOD can be transmitted over the network as a stream, so users on the clients can quickly grasp the skeleton of the terrain without downloading all the data from the server. Detailed information for a local part can then be added gradually as users zoom in, with no redundant data transmission in this procedure. To this end, an incremental LOD method is put forward that exploits the regular arrangement of the grid. The method applies to arbitrarily sized grid terrains and is not restricted to square ones with a side measuring 2^k + 1 samples. Maximum height errors are recorded when the LOD is preprocessed, and the terrain can be visualized with geometrical mipmaps to reduce the screen error.

  15. Improving the consistency of multi-LOD CityGML datasets by removing redundancy

    NARCIS (Netherlands)

    Biljecki, F.; Ledoux, H.; Stoter, J.E.

    2014-01-01

    The CityGML standard enables the modelling of some topological relationships, and the representation in multiple levels of detail (LODs). However, both concepts are rarely utilised in reality. In this paper we investigate the linking of corresponding geometric features across multiple representation

  16. Efficient Simplification Methods for Generating High Quality LODs of 3D Meshes

    Institute of Scientific and Technical Information of China (English)

    Muhammad Hussain

    2009-01-01

    Two simplification algorithms are proposed for automatic decimation of polygonal models and for generating their LODs. Each algorithm orders vertices according to their priority values and then removes them iteratively. For setting the priority value of each vertex, exploiting the normal field of its one-ring neighborhood, we introduce a new measure of geometric fidelity that reflects well the local geometric features of the vertex. After a vertex is selected, using other measures of geometric distortion that are based on normal field deviation and a distance measure, it is decided which of the edges incident on the vertex is to be collapsed for removing it. The collapsed edge is substituted with a new vertex whose position is found by minimizing the local quadric error measure. A comparison with the state-of-the-art algorithms reveals that the proposed algorithms are simple to implement, are computationally more efficient, generate LODs with better quality, and preserve salient features even after drastic simplification. The methods are useful for applications such as 3D computer games and virtual reality, where the focus is on fast running time, reduced memory overhead, and high-quality LODs.
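The greedy remove-cheapest-vertex loop driving such decimation can be sketched independently of the specific fidelity measure. The sketch below applies it to a 2D polyline with a triangle-area cost (Visvalingam-Whyatt style) rather than the paper's normal-field measure; the priority queue with lazy invalidation of stale entries is the part the mesh algorithms share.

```python
import heapq

def simplify_polyline(points, keep):
    """Greedy priority-driven decimation: repeatedly drop the vertex whose
    removal distorts the curve least (smallest triangle area with its two
    neighbours, as in Visvalingam-Whyatt).  Mesh LOD generation uses the
    same remove-cheapest-first loop with a geometry-aware cost instead."""
    n = len(points)
    prev = list(range(-1, n - 1))          # doubly linked list over indices
    nxt = list(range(1, n + 1))
    alive = [True] * n

    def cost(i):
        (x1, y1), (x2, y2), (x3, y3) = points[prev[i]], points[i], points[nxt[i]]
        return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

    heap = [(cost(i), i) for i in range(1, n - 1)]   # endpoints are always kept
    heapq.heapify(heap)
    remaining = n
    while heap and remaining > keep:
        c, i = heapq.heappop(heap)
        if not alive[i] or c != cost(i):
            continue                       # stale entry: neighbourhood changed
        alive[i] = False
        remaining -= 1
        p, q = prev[i], nxt[i]
        nxt[p], prev[q] = q, p
        for j in (p, q):                   # re-rank the two affected neighbours
            if 0 < j < n - 1 and alive[j]:
                heapq.heappush(heap, (cost(j), j))
    return [pt for pt, a in zip(points, alive) if a]
```

Run on five points with one salient peak, the two near-collinear vertices are removed first and the feature survives, mirroring the feature-preservation claim above.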

  17. Finite-size scaling of two-point statistics and the turbulent energy cascade generators.

    Science.gov (United States)

    Cleve, Jochen; Dziekan, Thomas; Schmiegel, Jürgen; Barndorff-Nielsen, Ole E; Pearson, Bruce R; Sreenivasan, Katepalli R; Greiner, Martin

    2005-02-01

    Within the framework of random multiplicative energy cascade models of fully developed turbulence, finite-size-scaling expressions for two-point correlators and cumulants are derived, taking into account the observationally unavoidable conversion from an ultrametric to a Euclidean two-point distance. The comparison with two-point statistics of the surrogate energy dissipation, extracted from various wind tunnel and atmospheric boundary layer records, allows an accurate deduction of multiscaling exponents and cumulants, even at moderate Reynolds numbers for which simple power-law fits are not feasible. The extracted exponents serve as input for parametric estimates of the probabilistic cascade generator. Various cascade generators are evaluated.

  18. ANIMATION STRATEGIES FOR SMOOTH TRANSFORMATIONS BETWEEN DISCRETE LODS OF 3D BUILDING MODELS

    Directory of Open Access Journals (Sweden)

    M. Kada

    2016-06-01

    Full Text Available The cartographic 3D visualization of urban areas has experienced tremendous progress over the last years. An increasing number of applications operate interactively in real-time and thus require advanced techniques to improve the quality and time response of dynamic scenes. The main focus of this article concentrates on the discussion of strategies for smooth transformation between two discrete levels of detail (LOD) of 3D building models that are represented as restricted triangle meshes. Because the operation order determines the geometrical and topological properties of the transformation process as well as its visual perception by a human viewer, three different strategies are proposed and subsequently analyzed. The simplest one orders transformation operations by the length of the edges to be collapsed, while the other two strategies introduce a general transformation direction in the form of a moving plane. This plane either pushes the nodes that need to be removed, e.g. during the transformation of a detailed LOD model to a coarser one, towards the main building body, or triggers the edge collapse operations used as transformation paths for the cartographic generalization.

  19. Animation Strategies for Smooth Transformations Between Discrete Lods of 3d Building Models

    Science.gov (United States)

    Kada, Martin; Wichmann, Andreas; Filippovska, Yevgeniya; Hermes, Tobias

    2016-06-01

    The cartographic 3D visualization of urban areas has experienced tremendous progress over the last years. An increasing number of applications operate interactively in real-time and thus require advanced techniques to improve the quality and time response of dynamic scenes. The main focus of this article concentrates on the discussion of strategies for smooth transformation between two discrete levels of detail (LOD) of 3D building models that are represented as restricted triangle meshes. Because the operation order determines the geometrical and topological properties of the transformation process as well as its visual perception by a human viewer, three different strategies are proposed and subsequently analyzed. The simplest one orders transformation operations by the length of the edges to be collapsed, while the other two strategies introduce a general transformation direction in the form of a moving plane. This plane either pushes the nodes that need to be removed, e.g. during the transformation of a detailed LOD model to a coarser one, towards the main building body, or triggers the edge collapse operations used as transformation paths for the cartographic generalization.

  20. Highly sensitive lactate biosensor by engineering chitosan/PVI-Os/CNT/LOD network nanocomposite.

    Science.gov (United States)

    Cui, Xiaoqiang; Li, Chang Ming; Zang, Jianfeng; Yu, Shucong

    2007-06-15

    A novel chitosan/PVI-Os (polyvinylimidazole-Os)/CNT (carbon nanotube)/LOD (lactate oxidase) network nanocomposite was constructed on a gold electrode for the detection of lactate. The composite was nanoengineered by selecting matched material components and optimizing the composition ratio to produce a superior lactate sensor. Positively charged chitosan and PVI-Os were used as the matrix and the mediator to immobilize the negatively charged LOD and to enhance the electron transfer, respectively. CNTs were introduced as the essential component in the composite for the network nanostructure. FESEM (field emission scanning electron microscopy) and electrochemical characterization demonstrated that CNT behaved as a cross-linker to network PVI and chitosan due to its nanoscale and negatively charged nature. This significantly improved the conductivity, stability and electroactivity for detection of lactate. The standard deviation of the sensor without CNT in the composite was greatly reduced from 19.6 to 4.9% by the addition of CNTs. With optimized conditions the sensitivity and detection limit of the lactate sensor were 19.7 microA mM(-1)cm(-2) and 5 microM, respectively. The sensitivity is remarkably improved in comparison to recently reported values of 0.15-3.85 microA mM(-1)cm(-2). This novel nanoengineering approach of selecting matched components to form a network nanostructure could be extended to other enzyme biosensors, and has broad potential applications in diagnostics, life science and food analysis.

  1. CA-LOD: Collision Avoidance Level of Detail for Scalable, Controllable Crowds

    Science.gov (United States)

    Paris, Sébastien; Gerdelan, Anton; O'Sullivan, Carol

    The new wave of computer-driven entertainment technology throws audiences and game players into massive virtual worlds where entire cities are rendered in real time. Computer animated characters run through inner-city streets teeming with pedestrians, all fully rendered with 3D graphics, animations, particle effects and linked to 3D sound effects to produce more realistic and immersive computer-hosted entertainment experiences than ever before. Computing all of this detail at once is enormously computationally expensive, and game designers, as a rule, have sacrificed behavioural realism in favour of better graphics. In this paper we propose a new Collision Avoidance Level of Detail (CA-LOD) algorithm that allows games to support huge crowds in real time with the appearance of more intelligent behaviour. We propose two collision avoidance models used for two different CA-LODs: a fuzzy steering model focusing on performance, and a geometric steering model to obtain the best realism. Mixing these approaches allows us to obtain thousands of autonomous characters in real time, resulting in a scalable but still controllable crowd.

  2. Approach to the origin of turbulence on the basis of two-point kinetic theory

    Science.gov (United States)

    Tsuge, S.

    1974-01-01

    Equations for the fluctuation correlation in an incompressible shear flow are derived on the basis of kinetic theory, utilizing the two-point distribution function which obeys the BBGKY hierarchy equation truncated with the hypothesis of 'ternary' molecular chaos. The step from the molecular to the hydrodynamic description is accomplished by a moment expansion which is a two-point version of the thirteen-moment method, and which leads to a series of correlation equations, viz., the two-point counterparts of the continuity equation, the Navier-Stokes equation, etc. For almost parallel shearing flows the two-point equation is separable and reduces to two Orr-Sommerfeld equations with different physical implications.

  3. New two-point scleral-fixation technique for foldable intraocular lenses with four hollow haptics.

    Science.gov (United States)

    Liu, He-Ting; Jiang, Zheng-Xuan; Tao, Li-Ming

    2016-01-01

    This study reports a new two-point scleral-fixation technique for foldable intraocular lenses with four hollow haptics. Lenses were slid into the anterior chamber through a 2.8 mm corneal incision and fixed under two scleral flaps at two opposite points. The postoperative best-corrected visual acuities (BCVAs) of all patients were significantly better than their preoperative BCVAs. The results demonstrate that two-point scleral fixation of foldable intraocular lenses might be practicable and effective.

  4. Singularity Processing Method of Microstrip Line Edge Based on LOD-FDTD

    Directory of Open Access Journals (Sweden)

    Lei Li

    2014-01-01

    Full Text Available In order to improve the accuracy and efficiency of analyzing microstrip structures, a singularity processing method is proposed theoretically and experimentally based on the fundamental locally one-dimensional finite-difference time-domain (LOD-FDTD) method with second-order temporal accuracy (denoted as FLOD2-FDTD). The proposed method can highly improve the performance of the FLOD2-FDTD even when the conductor is embedded into more than half of the cell by the coordinate transformation. The experimental results showed that the proposed method can achieve higher accuracy when the time step size is less than or equal to 5 times that allowed by the Courant-Friedrichs-Lewy (CFL) condition. In comparison with the previously reported methods, the proposed method for calculating the electromagnetic field near the microstrip line edge not only improves the efficiency, but also provides higher accuracy.
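The explicit-FDTD stability limit that the 5x figure is measured against is the standard Courant-Friedrichs-Lewy bound for a uniform 3D grid. A small sketch (cell size chosen for illustration only, not from the paper):

```python
import math

C0 = 299792458.0  # speed of light in vacuum, m/s

def cfl_limit(dx, dy, dz):
    """Largest stable time step for explicit 3D FDTD in vacuum:
    dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))."""
    return 1.0 / (C0 * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

# implicit schemes such as LOD-FDTD are unconditionally stable and may step
# past this bound; the abstract reports good accuracy up to 5x the limit
dt_max = cfl_limit(1e-4, 1e-4, 1e-4)   # 0.1 mm cubic cells (illustrative)
dt_lod = 5.0 * dt_max
```

The point of implicit LOD-FDTD is that accuracy, not stability, becomes the constraint on the time step.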

  5. Mistakes and Pitfalls Associated with Two-Point Compression Ultrasound for Deep Vein Thrombosis

    Directory of Open Access Journals (Sweden)

    Tony Zitek, MD

    2016-03-01

    Full Text Available Introduction: Two-point compression ultrasound is purportedly a simple and accurate means to diagnose proximal lower extremity deep vein thrombosis (DVT), but the pitfalls of this technique have not been fully elucidated. The objective of this study is to determine the accuracy of emergency medicine resident-performed two-point compression ultrasound, and to determine what technical errors are commonly made by novice ultrasonographers using this technique. Methods: This was a prospective diagnostic test assessment of a convenience sample of adult emergency department (ED) patients suspected of having a lower extremity DVT. After brief training on the technique, residents performed two-point compression ultrasounds on enrolled patients. Subsequently, a radiology department ultrasound was performed and used as the gold standard. Residents were instructed to save videos of their ultrasounds for technical analysis. Results: Overall, 288 two-point compression ultrasound studies were performed. There were 28 cases that were deemed to be positive for DVT by radiology ultrasound. Among these 28, 16 were identified by the residents with two-point compression. Among the 260 cases deemed to be negative for DVT by radiology ultrasound, 10 were thought to be positive by the residents using two-point compression. This led to a sensitivity of 57.1% (95% CI [38.8-75.5]) and a specificity of 96.1% (95% CI [93.8-98.5]) for resident-performed two-point compression ultrasound. This corresponds to a positive predictive value of 61.5% (95% CI [42.8-80.2]) and a negative predictive value of 95.4% (95% CI [92.9-98.0]). The positive likelihood ratio is 14.9 (95% CI [7.5-29.5]) and the negative likelihood ratio is 0.45 (95% CI [0.29-0.68]). Video analysis revealed that in four cases the resident did not identify a DVT because the thrombus was isolated to the superficial femoral vein (SFV), which is not evaluated by two-point compression. 
Moreover, the video analysis revealed that the
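The accuracy figures quoted in that abstract follow directly from its 2x2 table (16 true positives, 12 false negatives, 10 false positives, 250 true negatives); the quoted 96.1% specificity corresponds to 250/260, about 96.15%. A quick sketch of the standard definitions reproduces the numbers:

```python
def diagnostic_stats(tp, fn, fp, tn):
    """Standard diagnostic-accuracy measures from a 2x2 confusion table."""
    sens = tp / (tp + fn)            # P(test positive | disease present)
    spec = tn / (tn + fp)            # P(test negative | disease absent)
    ppv = tp / (tp + fp)             # positive predictive value
    npv = tn / (tn + fn)             # negative predictive value
    lr_pos = sens / (1 - spec)       # positive likelihood ratio
    lr_neg = (1 - sens) / spec       # negative likelihood ratio
    return sens, spec, ppv, npv, lr_pos, lr_neg

# counts from the study: 16 of 28 radiology-confirmed DVTs detected,
# 10 false positives among 260 radiology-negative studies
sens, spec, ppv, npv, lr_pos, lr_neg = diagnostic_stats(tp=16, fn=12, fp=10, tn=250)
```

Note that predictive values, unlike sensitivity and specificity, depend on the disease prevalence in the sampled population.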

  6. Gauge-fixing parameter dependence of two-point gauge variant correlation functions

    CERN Document Server

    Zhai, C

    1996-01-01

    The gauge-fixing parameter \\xi dependence of two-point gauge variant correlation functions is studied for QED and QCD. We show that, in three Euclidean dimensions, or for four-dimensional thermal gauge theories, the usual procedure of getting a general covariant gauge-fixing term by averaging over a class of covariant gauge-fixing conditions leads to a nontrivial gauge-fixing parameter dependence in gauge variant two-point correlation functions (e.g. fermion propagators). This nontrivial gauge-fixing parameter dependence modifies the large distance behavior of the two-point correlation functions by introducing additional exponentially decaying factors. These factors are the origin of the gauge dependence encountered in some perturbative evaluations of the damping rates and the static chromoelectric screening length in a general covariant gauge. To avoid this modification of the long distance behavior introduced by performing the average over a class of covariant gauge-fixing conditions, one can either choose ...

  7. Holographic two-point functions for 4d log-gravity

    CERN Document Server

    Johansson, Niklas; Zojer, Thomas

    2012-01-01

    We compute holographic one- and two-point functions of critical higher curvature gravity in four dimensions. The two most important operators are the stress tensor and its logarithmic partner, sourced by ordinary massless and by logarithmic non-normalisable gravitons, respectively. In addition, the logarithmic gravitons source two ordinary operators, one with spin-one and one with spin-zero. The one-point function of the stress tensor vanishes for all Einstein solutions, but has a non-zero contribution from logarithmic gravitons. The two-point functions of all operators match the expectations from a three-dimensional logarithmic conformal field theory.

  8. Numerical methods for stiff systems of two-point boundary value problems

    Science.gov (United States)

    Flaherty, J. E.; Omalley, R. E., Jr.

    1983-01-01

    Numerical procedures are developed for constructing asymptotic solutions of certain nonlinear singularly perturbed vector two-point boundary value problems having boundary layers at one or both endpoints. The asymptotic approximations are generated numerically and can either be used as is or to furnish a general purpose two-point boundary value code with an initial approximation and the nonuniform computational mesh needed for such problems. The procedures are applied to a model problem that has multiple solutions and to problems describing the deformation of a thin nonlinear elastic beam resting on an elastic foundation.

  9. Verified solutions of two-point boundary value problems for nonlinear oscillators

    Science.gov (United States)

    Bünger, Florian

    Using techniques introduced by Nakao [4], Oishi [5, 6] and applied by Takayasu, Oishi, Kubo [11, 12] to certain nonlinear two-point boundary value problems (see also Rump [7], Chapter 15), we provide a numerical method for verifying the existence of weak solutions of two-point boundary value problems of the form -u″ = a(x, u) + b(x, u)u′, 0 < x < 1, where a and b are functions that fulfill some regularity properties. The numerical approximation is done by cubic spline interpolation. Finally, the method is applied to the Duffing, the van der Pol and the Toda oscillator. The rigorous numerical computations were done with INTLAB [8].

  10. Adaptation of a two-point boundary value problem solver to a vector-multiprocessor environment

    Energy Technology Data Exchange (ETDEWEB)

    Wright, S.J. (Mathematics Dept., North Carolina State Univ., Raleigh, NC (US)); Pereyra, V. (Weidlinger Associates, Los Angeles, CA (US))

    1990-05-01

    Systems of linear equations arising from finite-difference discretization of two-point boundary value problems have coefficient matrices that are sparse, with most or all of the nonzeros clustered in blocks near the main diagonal. Some efficiently vectorizable algorithms for factorizing these types of matrices and solving the corresponding linear systems are described. The relative effectiveness of the different algorithms varies according to the distribution of initial, final, and coupled end conditions. The techniques described can be extended to handle linear systems arising from other methods for two-point boundary value problems, such as multiple shooting and collocation. An application to seismic ray tracing is discussed.
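The banded structure described here is easiest to see in the scalar case: central differencing of a model problem -u″ = f with Dirichlet conditions gives a tridiagonal system, solved below by the standard elimination (Thomas) algorithm. This is an illustrative sketch of the matrix structure only, not the paper's vectorized block algorithms.

```python
import math

def solve_tridiagonal(sub, diag, sup, rhs):
    """Thomas algorithm: forward elimination then back substitution for a
    tridiagonal system with sub-, main and super-diagonals."""
    n = len(diag)
    diag = diag[:]
    rhs = rhs[:]
    for i in range(1, n):                     # forward elimination
        m = sub[i - 1] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    x = [0.0] * n
    x[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = (rhs[i] - sup[i] * x[i + 1]) / diag[i]
    return x

# discretize -u'' = pi^2 sin(pi x), u(0) = u(1) = 0; exact solution sin(pi x)
n = 99                                        # interior grid points
h = 1.0 / (n + 1)
sub = [-1.0] * (n - 1)
sup = [-1.0] * (n - 1)
diag = [2.0] * n
rhs = [h * h * math.pi ** 2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
u = solve_tridiagonal(sub, diag, sup, rhs)
```

The sequential dependency in the elimination loop is exactly what the algorithms in the paper restructure to expose vector parallelism.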

  11. A Computationally Efficient Approach for Calculating Galaxy Two-Point Correlations

    CERN Document Server

    Demina, Regina; BenZvi, Segev; Hindrichs, Otto

    2016-01-01

    We develop a modification to the calculation of the two-point correlation function commonly used in the analysis of large scale structure in cosmology. An estimator of the two-point correlation function is constructed by contrasting the observed distribution of galaxies with that of a uniformly populated random catalog. Using the assumption that the distribution of random galaxies in redshift is independent of angular position allows us to replace pairwise combinatorics with fast integration over probability maps. The new method significantly reduces the computation time while simultaneously increasing the precision of the calculation.
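The pairwise baseline that such a method accelerates is the "natural" estimator ξ = (DD/RR)·norm − 1 built from brute-force pair counts. A minimal sketch (names illustrative; note the O(N²) pair counting, which is precisely the cost the probability-map approach avoids):

```python
import numpy as np

def xi_natural(data, rand, edges):
    """'Natural' two-point correlation estimator xi = (DD/RR) * norm - 1,
    using brute-force pair counting in separation bins."""
    def pair_counts(pts):
        diff = pts[:, None, :] - pts[None, :, :]
        r = np.sqrt((diff ** 2).sum(axis=-1))
        r = r[np.triu_indices(len(pts), k=1)]      # each distinct pair once
        return np.histogram(r, bins=edges)[0].astype(float)

    dd, rr = pair_counts(data), pair_counts(rand)
    nd, nr = len(data), len(rand)
    norm = (nr * (nr - 1)) / (nd * (nd - 1))       # pair-count normalisation
    return dd / rr * norm - 1.0

# identical data and random catalogs must give zero correlation in every bin
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
xi = xi_natural(pts, pts.copy(), edges=[0.5, 1.2, 1.6])
```

Production analyses typically use the cross-pair Landy-Szalay estimator instead, but the DD-versus-random-catalog structure is the same.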

  12. Automated Reconstruction of Building LoDs from Airborne LiDAR Point Clouds Using an Improved Morphological Scale Space

    Directory of Open Access Journals (Sweden)

    Bisheng Yang

    2016-12-01

    Full Text Available Reconstructing building models at different levels of detail (LoDs) from airborne laser scanning point clouds is urgently needed for wide application, as this can balance between the user's requirements and economic costs. The previous methods reconstruct building LoDs from the finest 3D building models rather than from point clouds, resulting in heavy costs and inflexible adaptivity. The scale space is a sound theory for multi-scale representation of an object from a coarser level to a finer level. Therefore, this paper proposes a novel method to reconstruct buildings at different LoDs from airborne Light Detection and Ranging (LiDAR) point clouds based on an improved morphological scale space. The proposed method first extracts building candidate regions following the separation of ground and non-ground points. For each building candidate region, the proposed method generates a scale space by iteratively using the improved morphological reconstruction with increasing scale, and constructs the corresponding topological relationship graphs (TRGs) across scales. Secondly, the proposed method robustly extracts building points by using features based on the TRG. Finally, the proposed method reconstructs each building at different LoDs according to the TRG. The experiments demonstrate that the proposed method robustly extracts buildings with details (e.g., door eaves and roof furniture) and shows good performance in distinguishing buildings from vegetation or other objects, while automatically reconstructing building LoDs from the finest building points.

  13. Meta-conformal invariance and the boundedness of two-point correlation functions

    Science.gov (United States)

    Henkel, Malte; Stoimenov, Stoimen

    2016-11-01

    The covariant two-point functions, derived from Ward identities in direct space, can be affected by consistency problems and can become unbounded for large time- or space-separations. This difficulty arises for several extensions of dynamical scaling, for example Schrödinger-invariance, conformal Galilei invariance or meta-conformal invariance, but not for standard ortho-conformal invariance. For meta-conformal invariance in (1+1) dimensions, which acts as a dynamical symmetry of a simple advection equation, these difficulties can be cured by going over to a dual space and an extension of these dynamical symmetries through the construction of a new generator in the Cartan sub-algebra. This provides a canonical interpretation of meta-conformally covariant two-point functions as correlators. Galilei-conformal correlators can be obtained from meta-conformal invariance through a simple contraction. In contrast, by an analogous construction, Schrödinger-covariant two-point functions are causal response functions. All these two-point functions are bounded at large separations, for sufficiently positive values of the scaling exponents.

  14. Meta-conformal invariance and the boundedness of two-point correlation functions

    CERN Document Server

    Henkel, Malte

    2016-01-01

    The covariant two-point functions, derived from Ward identities in direct space, can be affected by consistency problems and can become unbounded for large time- or space-separations. This difficulty arises for several extensions of dynamical scaling, for example Schrödinger-invariance, conformal Galilei invariance or meta-conformal invariance, but not for standard ortho-conformal invariance. For meta-conformal invariance in 1+1 dimensions, these difficulties can be cured by going over to a dual space and an extension of these dynamical symmetries through the construction of a new generator in the Cartan sub-algebra. This provides a canonical interpretation of meta-conformally covariant two-point functions as correlators. Galilei-conformal correlators can be obtained from meta-conformal invariance through a simple contraction. In contrast, by an analogous construction, Schrödinger-covariant two-point functions are causal response functions. All these two-point functions are bounded at large separations, fo...

  15. Holographic two-point functions for 4d log-gravity

    NARCIS (Netherlands)

    Johansson, Niklas; Naseh, Ali; Zojer, Thomas

    2012-01-01

    We compute holographic one- and two-point functions of critical higher-curvature gravity in four dimensions. The two most important operators are the stress tensor and its logarithmic partner, sourced by ordinary massless and by logarithmic non-normalisable gravitons, respectively. In addition, the

  16. Solvability for a Class of Abstract Two-Point Boundary Value Problems Derived from Optimal Control

    Directory of Open Access Journals (Sweden)

    Wang Lianwen

    2007-01-01

    Full Text Available The solvability for a class of abstract two-point boundary value problems derived from optimal control is discussed. By homotopy technique existence and uniqueness results are established under some monotonic conditions. Several examples are given to illustrate the application of the obtained results.

  17. Solvability for a Class of Abstract Two-Point Boundary Value Problems Derived from Optimal Control

    Directory of Open Access Journals (Sweden)

    Lianwen Wang

    2008-01-01

    Full Text Available The solvability for a class of abstract two-point boundary value problems derived from optimal control is discussed. By homotopy technique existence and uniqueness results are established under some monotonic conditions. Several examples are given to illustrate the application of the obtained results.

  18. Modification of the Two-Point Touch Cane Technique: A Pilot Study.

    Science.gov (United States)

    Jacobson, William H.; Ehresman, Paul

    1983-01-01

    Four blind adults were observed to determine the extent of the natural movement of their centers of gravity in relation to arc height during the two-point touch technique for long cane travel. The Ss learned and practiced a modified technique using their center of gravity as much as possible. (Author)

  19. Two-point discrimination of the upper extremities of healthy Koreans in their 20's.

    Science.gov (United States)

    Koo, Ja-Pung; Kim, Soon-Hee; An, Ho-Jung; Moon, Ok-Gon; Choi, Jung-Hyun; Yun, Young-Dae; Park, Joo-Hyun; Min, Kyoung-Ok

    2016-03-01

    [Purpose] The present study attempted to measure two-point discrimination in the upper extremities of healthy Koreans in their 20's. [Subjects and Methods] Using a three-point esthesiometer, we conducted an experiment with a group of 256 college students (128 male and 128 female), attending N University in Chonan, Republic of Korea. [Results] Females showed two-point discrimination at a shorter distance than males at the following points: (i) 5 cm above the elbow joint, the middle part, and 5 cm below the shoulder joint of the anterior upper arm; (ii) 5 cm above the elbow joint and 5 cm below the shoulder joint of the posterior upper arm; (iii) 5 cm above the front of the wrist joint of the forearm; 5 cm below the elbow joint, the palmar part of the distal interphalangeal joint of the thumb, the dorsal part of the distal interphalangeal joint of the middle and little fingers. It was also found that females showed greater two-point discrimination than males in distal regions rather than proximal regions. [Conclusion] The findings of this study will help establish normal values for two-point discrimination of upper extremities of young Koreans in their 20's.

  20. Logarithmic Two-Point Correlation Functions from a z = 2 Lifshitz Model

    NARCIS (Netherlands)

    Zingg, T.

    2013-01-01

    The Einstein-Proca action is known to have asymptotically locally Lifshitz spacetimes as classical solutions. For dynamical exponent z=2, two-point correlation functions for fluctuations around such a geometry are derived analytically. It is found that the retarded correlators are stable in the sense...

  1. A problem with two-point time conditions for second-order parabolic equations

    Directory of Open Access Journals (Sweden)

    M. M. Symotyuk

    2014-12-01

    Full Text Available The correctness of a problem with two-point conditions on the time variable and Dirichlet-type conditions on the spatial coordinates is established for linear parabolic equations with variable coefficients. A metric theorem on lower bounds for the small denominators of the problem (in terms of Hausdorff measure) is proved.

  2. Aggregation of LoD 1 building models as an optimization problem

    Science.gov (United States)

    Guercke, R.; Götzelmann, T.; Brenner, C.; Sester, M.

    3D city models offered by digital map providers typically consist of several thousands or even millions of individual buildings. Those buildings are usually generated in an automated fashion from high-resolution cadastral and remote sensing data and can be very detailed. However, such a high degree of detail is not desirable in every application. One way to remove complexity is to aggregate individual buildings, simplify the ground plan and assign an appropriate average building height. This task is computationally complex because it includes the combinatorial optimization problem of determining which subset of the original set of buildings should best be aggregated to meet the demands of an application. In this article, we introduce approaches to express different aspects of the aggregation of LoD 1 building models in the form of Mixed Integer Programming (MIP) problems. The advantage of this approach is that for linear (and some quadratic) MIP problems, sophisticated software exists to find exact solutions (global optima) with reasonable effort. We also propose two different heuristic approaches based on the region-growing strategy and evaluate their potential for optimization by comparing their performance to a MIP-based approach.
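The region-growing heuristic mentioned in the abstract can be sketched in a few lines. The following is a hypothetical toy version (building footprints reduced to heights plus an adjacency graph, with a simple height-spread merging criterion), not the authors' implementation:

```python
# Illustrative region-growing aggregation of LoD 1 buildings (hypothetical
# sketch): buildings are nodes with a height and an adjacency list; a
# region absorbs neighbours while the height spread inside the region
# stays below a tolerance.

def aggregate(heights, adjacency, tol=2.0):
    """Greedy region growing: returns a list of building-index groups."""
    unassigned = set(range(len(heights)))
    groups = []
    while unassigned:
        seed = min(unassigned)          # deterministic seed choice
        region = {seed}
        unassigned.remove(seed)
        frontier = [seed]
        while frontier:
            current = frontier.pop()
            for nb in adjacency.get(current, ()):
                if nb in unassigned:
                    hs = [heights[i] for i in region] + [heights[nb]]
                    if max(hs) - min(hs) <= tol:   # height-spread criterion
                        region.add(nb)
                        unassigned.remove(nb)
                        frontier.append(nb)
        groups.append(sorted(region))
    return groups

# Four buildings in a row; the 10 m outlier ends up in its own group.
heights = [5.0, 6.0, 10.0, 5.5]
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(aggregate(heights, adjacency))   # → [[0, 1], [2], [3]]
```

A MIP formulation would replace the greedy absorption step with binary assignment variables and let a solver find the global optimum; the greedy version trades that guarantee for speed.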

  3. 3D Building Modeling in LoD2 Using the CityGML Standard

    Science.gov (United States)

    Preka, D.; Doulamis, A.

    2016-10-01

    Over the last decade, scientific research has been increasingly focused on the third dimension in all fields and especially in sciences related to geographic information, the visualization of natural phenomena and the visualization of the complex urban reality. The field of 3D visualization has achieved rapid development and dynamic progress, especially in urban applications, while the technical restrictions on the use of 3D information tend to subside due to advancements in technology. A variety of 3D modeling techniques and standards has already been developed, as they gain more traction in a wide range of applications. Such a modern standard is the CityGML, which is open and allows for sharing and exchanging of 3D city models. Within the scope of this study, key issues for the 3D modeling of spatial objects and cities are considered and specifically the key elements and abilities of CityGML standard, which is used in order to produce a 3D model of 14 buildings that constitute a block at the municipality of Kaisariani, Athens, in Level of Detail 2 (LoD2), as well as the corresponding relational database. The proposed tool is based upon the 3DCityDB package in tandem with a geospatial database (PostgreSQL w/ PostGIS 2.0 extension). The latter allows for execution of complex queries regarding the spatial distribution of data. The system is implemented in order to facilitate a real-life scenario in a suburb of Athens.

  4. Visualizing whole-brain DTI tractography with GPU-based Tuboids and LoD management.

    Science.gov (United States)

    Petrovic, Vid; Fallon, James; Kuester, Falko

    2007-01-01

    Diffusion Tensor Imaging (DTI) of the human brain, coupled with tractography techniques, enable the extraction of large-collections of three-dimensional tract pathways per subject. These pathways and pathway bundles represent the connectivity between different brain regions and are critical for the understanding of brain related diseases. A flexible and efficient GPU-based rendering technique for DTI tractography data is presented that addresses common performance bottlenecks and image-quality issues, allowing interactive render rates to be achieved on commodity hardware. An occlusion query-based pathway LoD management system for streamlines/streamtubes/tuboids is introduced that optimizes input geometry, vertex processing, and fragment processing loads, and helps reduce overdraw. The tuboid, a fully-shaded streamtube impostor constructed entirely on the GPU from streamline vertices, is also introduced. Unlike full streamtubes and other impostor constructs, tuboids require little to no preprocessing or extra space over the original streamline data. The supported fragment processing levels of detail range from texture-based draft shading to full raycast normal computation, Phong shading, environment mapping, and curvature-correct text labeling. The presented text labeling technique for tuboids provides adaptive, aesthetically pleasing labels that appear attached to the surface of the tubes. Furthermore, an occlusion query aggregating and scheduling scheme for tuboids is described that reduces the query overhead. Results for a tractography dataset are presented, and demonstrate that LoD-managed tuboids offer benefits over traditional streamtubes both in performance and appearance.

  5. GENERATION OF MULTI-LOD 3D CITY MODELS IN CITYGML WITH THE PROCEDURAL MODELLING ENGINE RANDOM3DCITY

    Directory of Open Access Journals (Sweden)

    F. Biljecki

    2016-09-01

    Full Text Available The production and dissemination of semantic 3D city models is rapidly increasing benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtain 3D city models is to generate them with procedural modelling, which is – as we discuss in this paper – well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence at Github at http://github.com/tudelft3d/Random3Dcity.

  6. Generation of Multi-Lod 3d City Models in Citygml with the Procedural Modelling Engine RANDOM3DCITY

    Science.gov (United States)

    Biljecki, F.; Ledoux, H.; Stoter, J.

    2016-09-01

    The production and dissemination of semantic 3D city models is rapidly increasing benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtain 3D city models is to generate them with procedural modelling, which is - as we discuss in this paper - well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence on GitHub at http://github.com/tudelft3d/Random3Dcity.

  7. NUMERICAL SIMULATION OF TWO-POINT CONTACT BETWEEN WHEEL AND RAIL

    Institute of Scientific and Technical Information of China (English)

    Jun Zhang; Shouguang Sun; Xuesong Jin

    2009-01-01

    The elastic-plastic wheel-rail contact problem with rolling friction is solved using the FE parametric quadratic programming method. Thus, the complex elastic-plastic contact problem can be calculated with high accuracy and efficiency, while Hertz's hypothesis and the elastic half-space assumption are avoided. Based on the 'one-point' wheel-rail contact calculation, a computational model of 'two-point' contact is established and evaluated for the case in which the wheel flange is close to the rail. For 'two-point' contact, the evolution of the wheel-rail contact is described and the contact forces in various load cases are carefully analyzed, identifying the main cause of wheel-flange and rail side wear. A lubrication model of the wheel flange is constructed; compared with the result without lubrication, the contact force between wheel flange and rail decreases, which is beneficial for reducing wheel-rail wear.

  8. A NEW TWO-POINT ADAPTIVE NONLINEAR APPROXIMATION METHOD FOR RELIABILITY ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    Liu Shutian

    2004-01-01

    A two-point adaptive nonlinear approximation (referred to as TANA4) suitable for reliability analysis is proposed. Transformed and normalized random variables in probabilistic analysis could become negative and pose a challenge to the earlier developed two-point approximations; thus a suitable method that can address this issue is needed. In the method proposed, the nonlinearity indices of intervening variables are limited to integers. Then, on the basis of the present method, an improved sequential approximation of the limit state surface for reliability analysis is presented. With the gradient projection method, the data points for the limit state surface approximation are selected on the original limit state surface, which effectively represents the nature of the original response function. On the basis of this new approximation, the reliability is estimated using a first-order second-moment method. Various examples, including both structural and non-structural ones, are presented to show the effectiveness of the method proposed.
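The idea behind two-point approximations of the TANA family can be illustrated in one variable: the approximation matches the value and gradient at the current design point, and a nonlinearity index p is tuned so that the value at the previous point is also reproduced. This is a hypothetical one-dimensional simplification (the paper's TANA4 is multivariate and restricts the indices of intervening variables to integers), and it assumes positive variables — the abstract's point is precisely that negative transformed variables break such power laws:

```python
# One-dimensional sketch of a two-point approximation in the TANA spirit
# (hypothetical simplification, positive variables assumed).  The form
#   f~(x) = f(x2) + f'(x2) * x2**(1-p)/p * (x**p - x2**p)
# matches value and gradient at the current point x2; the index p is
# tuned so that f~ also reproduces the value f1 at the previous point x1.

def fit_index(f1, x1, f2, g2, x2, lo=-3.0, hi=-0.1, iters=60):
    """Bisection for the nonlinearity index p (assumes a sign change in [lo, hi])."""
    def residual(p):
        return f2 + g2 * x2 ** (1 - p) / p * (x1 ** p - x2 ** p) - f1
    a, b = lo, hi
    for _ in range(iters):
        m = 0.5 * (a + b)
        if residual(a) * residual(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# For f(x) = 1/x the two-point form is exact with p = -1.
x1, x2 = 1.0, 2.0
p = fit_index(1.0, x1, 0.5, -0.25, x2)   # f(1)=1, f(2)=0.5, f'(2)=-0.25
print(round(p, 4))   # close to -1.0
```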

  9. Equal-time two-point correlation functions in Coulomb gauge Yang-Mills theory

    CERN Document Server

    Campagnari, D; Reinhardt, H; Astorga, F; Schleifenbaum, W

    2009-01-01

    We apply a new functional perturbative approach to the calculation of the equal-time two-point correlation functions and the potential between static color charges to one-loop order in Coulomb gauge Yang-Mills theory. The functional approach proceeds through a solution of the Schroedinger equation for the vacuum wave functional to order g^2 and derives the equal-time correlation functions from a functional integral representation via new diagrammatic rules. We show that the results coincide with those obtained from the usual Lagrangian functional integral approach, extract the beta function and determine the anomalous dimensions of the equal-time gluon and ghost two-point functions and the static potential under the assumption of multiplicative renormalizability to all orders.

  10. Two-point functions of conformal primary operators in $\mathcal{N}=1$ superconformal theories

    CERN Document Server

    Li, Daliang

    2014-01-01

    In $\mathcal{N}=1$ superconformal theories in four dimensions the two-point function of superconformal multiplets is known up to an overall constant. A superconformal multiplet contains several conformal primary operators, whose two-point function coefficients can be determined in terms of the multiplet's quantum numbers. In this paper we work out these coefficients in full generality, i.e. for superconformal multiplets that belong to any irreducible representation of the Lorentz group with arbitrary scaling dimension and R-charge. From our results we recover the known unitarity bounds, and also find all shortening conditions, even for non-unitary theories. For the purposes of our computations we have developed a Mathematica package for the efficient handling of expansions in Grassmann variables.
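For orientation, the two-point function whose coefficients are being determined takes the standard conformal form (textbook background, not a result of the paper), here for a scalar primary of dimension $\Delta$; operators with spin carry an additional tensor structure:

```latex
\langle \mathcal{O}(x)\,\bar{\mathcal{O}}(0) \rangle \;=\; \frac{C_{\mathcal{O}}}{x^{2\Delta}}
```

Superconformal symmetry fixes the ratios of the constants $C_{\mathcal{O}}$ for the primaries within one multiplet, which is what the paper computes.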

  11. State feedback control of surge oscillations of two-point mooring system

    Science.gov (United States)

    Mitra, R. K.; Banik, A. K.; Chatterjee, S.

    2017-01-01

    Stability analysis of surge oscillations of a two-point mooring system under state feedback control with time-delay is investigated. The two-point mooring system is harmonically excited and essentially represents a strongly nonlinear Duffing oscillator. In this paper, a frequency domain based method, viz. the incremental harmonic balance method along with an arc-length continuation technique (IHBC), is first employed to identify the primary and higher order subharmonic responses which may be present in such a system. The IHBC is then reformulated to treat the two-point mooring system under state feedback control with time-delay and is applied to obtain control of responses in an efficient and systematic way. The stability of uncontrolled responses for primary and higher order subharmonic oscillations is obtained by Floquet theory using Hsu's scheme, whereas the stability of controlled responses is obtained by applying the semi-discretization method for delay differential equations. The study focusses on controlling primary, higher order subharmonic and chaotic responses by considering appropriate feedback gains and delay by way of (i) appreciable reduction of primary and subharmonic responses, (ii) exclusion of all higher order subharmonics 2T, 3T, 5T and 9T (1/n subharmonics or period-n solutions), and (iii) reduction of the extent of the domain of all instability phenomena represented by various types of bifurcation of solutions, jump phenomena, chaotic responses, etc. In the study, negative velocity feedback is observed to be much more effective than state feedback for controlling the surge oscillation of the two-point mooring system. The effect of larger gain values is also investigated in an extensive parametric study of vibration control with different delay values.

  12. Beyond Kaiser bias: mildly non-linear two-point statistics of densities in distant spheres

    Science.gov (United States)

    Uhlemann, C.; Codis, S.; Kim, J.; Pichon, C.; Bernardeau, F.; Pogosyan, D.; Park, C.; L'Huillier, B.

    2017-04-01

    We present simple parameter-free analytic bias functions for the two-point correlation of densities in spheres at large separation. These bias functions generalize the so-called Kaiser bias to the mildly non-linear regime for arbitrary density contrasts and grow as b(ρ) - b(1) ∝ (1 - ρ^(-13/21)) ρ^(1+n/3) with b(1) = -4/21 - n/3 for a power-law initial spectrum with index n. We carry out the derivation in the context of large-deviation statistics while relying on the spherical collapse model. We use a logarithmic transformation that provides a saddle-point approximation that is valid for the whole range of densities and show its accuracy against the 30 Gpc cube state-of-the-art Horizon Run 4 simulation. Special configurations of two concentric spheres that allow us to identify peaks are employed to obtain the conditional bias and a proxy for the BBKS extremum correlation functions. These analytic bias functions should be used jointly with extended perturbation theory to predict two-point clustering statistics as they capture the non-linear regime of structure formation at the per cent level down to scales of about 10 Mpc h^(-1) at redshift 0. Conversely, the joint statistics also provide us with optimal dark matter two-point correlation estimates that can be applied either universally to all spheres or to a restricted set of biased (over- or underdense) pairs. Based on a simple fiducial survey, we show that the variance of this estimator is reduced by five times relative to the traditional sample estimator for the two-point function. Extracting more information from correlations of different types of objects should prove essential in the context of upcoming surveys like Euclid, DESI and WFIRST.
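The bias function quoted in the abstract is easy to evaluate numerically. In the sketch below the overall amplitude is a free parameter, since the abstract fixes only the proportionality:

```python
# Numerical sketch of the large-separation bias function quoted in the
# abstract: b(rho) - b(1) ∝ (1 - rho**(-13/21)) * rho**(1 + n/3), with
# b(1) = -4/21 - n/3 for a power-law spectrum of index n.  The amplitude
# is a placeholder (the abstract only states the proportionality).

def bias(rho, n=-1.5, amplitude=1.0):
    b1 = -4.0 / 21.0 - n / 3.0
    return b1 + amplitude * (1.0 - rho ** (-13.0 / 21.0)) * rho ** (1.0 + n / 3.0)

# At rho = 1 the density-dependent term vanishes and b(1) = -4/21 - n/3.
print(round(bias(1.0, n=-1.5), 4))   # -4/21 + 0.5 ≈ 0.3095
```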

  13. A rapid and accurate two-point ray tracing method in horizontally layered velocity model

    Institute of Scientific and Technical Information of China (English)

    TIAN Yue; CHEN Xiao-fei

    2005-01-01

    A rapid and accurate method for two-point ray tracing in horizontally layered velocity models is presented in this paper. Numerical experiments show that this method provides stable and rapid convergence with high accuracy, regardless of the 1-D velocity structure, takeoff angle and epicentral distance. This two-point ray tracing method is compared with the pseudo-bending technique and the method advanced by Kim and Baag (2002). It turns out that the method in this paper is much more efficient and accurate than the pseudo-bending technique, but is only applicable to 1-D velocity models. Kim's method is equivalent to ours for cases without large takeoff angles, but it fails to work when the takeoff angle is close to 90°. On the other hand, the method presented in this paper is applicable to any takeoff angle with rapid and accurate convergence. Therefore, this method is a good choice for two-point ray tracing problems in horizontally layered velocity models and is efficient enough to be applied to a wide range of seismic problems.
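A generic two-point ray tracer for a horizontally layered model can be written as a one-dimensional search over the ray parameter p (this is the standard textbook scheme, not the specific algorithm of the paper):

```python
import math

# Two-point ray tracing in a horizontally layered model by bisection on
# the ray parameter p (generic textbook scheme).  For layers of thickness
# h[i] and velocity v[i], a down-going ray with parameter p has
#   X(p) = sum_i h_i * p * v_i / sqrt(1 - p^2 v_i^2)   (horizontal offset)
#   T(p) = sum_i h_i / (v_i * sqrt(1 - p^2 v_i^2))     (travel time)
# and X(p) increases monotonically with p, so bisection converges.

def trace(h, v, target_offset, iters=80):
    def offset(p):
        return sum(hi * p * vi / math.sqrt(1.0 - (p * vi) ** 2)
                   for hi, vi in zip(h, v))
    lo, hi_p = 0.0, (1.0 - 1e-12) / max(v)      # p < 1/v_max (no turning)
    for _ in range(iters):
        mid = 0.5 * (lo + hi_p)
        if offset(mid) < target_offset:
            lo = mid
        else:
            hi_p = mid
    p = 0.5 * (lo + hi_p)
    t = sum(hi / (vi * math.sqrt(1.0 - (p * vi) ** 2)) for hi, vi in zip(h, v))
    return p, t

# Single layer: the ray is a straight line, so T = sqrt(X^2 + h^2) / v.
p, t = trace([1.0], [2.0], 1.0)
print(round(t, 6))   # sqrt(2)/2 ≈ 0.707107
```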

  14. Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve

    Science.gov (United States)

    Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.

    2009-04-01

    The soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time-consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g. the investigation of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there is no significant difference between case 1 and case 2, although the RMSE value in case 2 (2.35) was slightly less than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
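Two-point estimation has a closed form for the Tyler and Wheatcraft (1990) model. The following is an illustrative reconstruction (assuming the saturated water content theta_s is known), not the paper's code:

```python
import math

# Closed-form two-point estimate of the Tyler and Wheatcraft (1990)
# parameters (illustrative reconstruction).  The model
#   theta(psi) = theta_s * (psi_a / psi)**(3 - D)
# gives, from two measured points (psi1, theta1) and (psi2, theta2),
#   3 - D  = ln(theta1/theta2) / ln(psi2/psi1)
#   psi_a  = psi1 * (theta1/theta_s)**(1/(3 - D))
# assuming the saturated water content theta_s is known.

def fit_tyler_wheatcraft(psi1, theta1, psi2, theta2, theta_s):
    exponent = math.log(theta1 / theta2) / math.log(psi2 / psi1)   # = 3 - D
    D = 3.0 - exponent
    psi_a = psi1 * (theta1 / theta_s) ** (1.0 / exponent)
    return D, psi_a

# Round-trip check on synthetic data generated with D = 2.6, psi_a = 3 kPa.
theta_s, D_true, psi_a_true = 0.45, 2.6, 3.0
theta = lambda psi: theta_s * (psi_a_true / psi) ** (3.0 - D_true)
D, psi_a = fit_tyler_wheatcraft(33.0, theta(33.0), 1500.0, theta(1500.0), theta_s)
print(round(D, 4), round(psi_a, 4))   # recovers 2.6 and 3.0
```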

  15. Calculating two-point resistances in distance-regular resistor networks

    Energy Technology Data Exchange (ETDEWEB)

    Jafarizadeh, M A [Department of Theoretical Physics and Astrophysics, University of Tabriz, Tabriz 51664 (Iran, Islamic Republic of); Sufiani, R [Department of Theoretical Physics and Astrophysics, University of Tabriz, Tabriz 51664 (Iran, Islamic Republic of); Jafarizadeh, S [Department of Electrical and computer engineering, University of Tabriz, Tabriz 51664 (Iran, Islamic Republic of)

    2007-05-11

    An algorithm for the calculation of the resistance between two arbitrary nodes in an arbitrary distance-regular resistor network is provided, where the calculation is based on stratification introduced in Jafarizadeh and Salimi (2006 J. Phys. A: Math. Gen. 39 1-29) and the Stieltjes transform of the spectral distribution (Stieltjes function) associated with the network. It is shown that the resistances between a node α and all nodes β belonging to the same stratum with respect to α (R_αβ^(i), β belonging to the ith stratum with respect to α) are the same. Also, analytical formulae for the two-point resistances R_αβ^(i), i = 1, 2, 3, are given in terms of the size of the network and the corresponding intersection numbers. In particular, the two-point resistances in a strongly regular network are given in terms of its parameters (v, κ, λ, μ). Moreover, lower and upper bounds for two-point resistances in strongly regular networks are discussed.
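Independently of the stratification machinery used in the paper, any two-point resistance can be checked numerically with the standard Laplacian-pseudoinverse formula R_ab = L+_aa + L+_bb - 2 L+_ab:

```python
import numpy as np

# Generic numerical check of a two-point resistance via the graph-Laplacian
# pseudoinverse (standard network theory, not the paper's stratification
# method): R_ab = L+_aa + L+_bb - 2 L+_ab.

def two_point_resistance(conductance, a, b):
    """conductance: symmetric matrix of edge conductances (1/R per edge)."""
    L = np.diag(conductance.sum(axis=1)) - conductance   # graph Laplacian
    Lp = np.linalg.pinv(L)                               # Moore-Penrose inverse
    return Lp[a, a] + Lp[b, b] - 2.0 * Lp[a, b]

# Triangle of unit resistors: 1 ohm in parallel with 2 ohms -> 2/3 ohm.
C = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
print(round(two_point_resistance(C, 0, 1), 6))   # 0.666667
```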

  16. Covariant and infrared-free graviton two-point function in de Sitter spacetime. II.

    Science.gov (United States)

    Pejhan, Hamed; Rahbardehghan, Surena

    2016-11-01

    The solution to the linearized Einstein equation in de Sitter (dS) spacetime and the corresponding two-point function are explicitly written down in a gauge with two parameters "a" and "b". The quantization procedure, independent of the choice of the coordinate system, is based on a rigorous group theoretical approach. Our result takes the form of a universal spin-two (transverse-traceless) sector and a gauge-dependent spin-zero (pure-trace) sector. Scalar equations are derived for the structure functions of each part. We show that the spin-two sector can be written as the resulting action of a second-order differential operator (the spin-two projector) on a massless minimally coupled scalar field (the spin-two structure function). The operator plays the role of a symmetric rank-2 polarization tensor and has a spacetime dependence. The calculated spin-two projector grows logarithmically with distance, and no dS-invariant solution exists for either structure function. We show that the logarithmically growing part and the dS-breaking contribution to the spin-zero part can be dropped out, respectively, for suitable choices of the parameters "a" and "b". Considering the transverse-traceless graviton two-point function, however, shows that dS breaking is universal (cannot be gauged away). More exactly, if one wants to respect the covariance and positiveness conditions, the quantization of the dS graviton field (as for any gauge field) cannot be carried out directly in a Hilbert space and involves unphysical negative norm states. However, a suitable adaptation (Krein spaces) of the Gupta-Bleuler scheme for massless fields, based on the group theoretical approach, enables us to obtain the corresponding two-point function satisfying the conditions of locality, covariance, transversality, index symmetrizer, and tracelessness.

  17. Futures market efficiency diagnostics via temporal two-point correlations. Russian market case study

    OpenAIRE

    Mikhail Kopytin; Evgeniy Kazantsev

    2013-01-01

    Using a two-point correlation technique, we study emergence of market efficiency in the emergent Russian futures market by focusing on lagged correlations. The correlation strength of leader-follower effects in the lagged inter-market correlations on the hourly time frame is seen to be significant initially (2009-2011) but gradually goes down, as the erstwhile leader instruments -- crude oil, the USD/RUB exchange rate, and the Russian stock market index -- seem to lose the leader status. An i...
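The lagged leader-follower correlation described above can be illustrated on synthetic data, where one series copies another with a delay and the lag of the peak cross-correlation recovers the lead (a generic construction, not the authors' data or method):

```python
import numpy as np

# Toy illustration of a leader-follower diagnostic: instrument y copies
# instrument x with a delay, and the lag maximizing the lagged correlation
# corr(x[t], y[t+lag]) recovers the lead.

def peak_lag(x, y, max_lag):
    """Lag in [1, max_lag] at which corr(x[t], y[t+lag]) is largest."""
    best_lag, best_c = 1, -np.inf
    for lag in range(1, max_lag + 1):
        c = np.corrcoef(x[:-lag], y[lag:])[0, 1]
        if c > best_c:
            best_lag, best_c = lag, c
    return best_lag

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
true_lag = 7
y = np.roll(x, true_lag) + 0.1 * rng.standard_normal(5000)  # follower + noise
print(peak_lag(x, y, 20))   # recovers true_lag = 7
```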

  18. Covalent docking using autodock: Two-point attractor and flexible side chain methods.

    Science.gov (United States)

    Bianco, Giulia; Forli, Stefano; Goodsell, David S; Olson, Arthur J

    2016-01-01

    We describe two methods of automated covalent docking using Autodock4: the two-point attractor method and the flexible side chain method. Both methods were applied to a training set of 20 diverse protein-ligand covalent complexes, evaluating their reliability in predicting the crystallographic pose of the ligands. The flexible side chain method performed best, recovering the pose in 75% of cases, with failures for the largest inhibitors tested. Both methods are freely available at the AutoDock website (http://autodock.scripps.edu). © 2015 The Protein Society.

  19. A Note on Reflection Positivity and the Kallen-Lehmann Representation of Two Point Correlation Functions

    CERN Document Server

    Usui, Kouta

    2012-01-01

    It is proved that a lattice field theory model satisfying (A1) Hermiticity, (A2) translational invariance, (A3) reflection positivity, and (A4) polynomial boundedness of correlations permits the Kallen-Lehmann representation of two-point correlation functions with a positive spectral density function. We then argue that positivity of the spectral density function is necessary for a lattice theory to satisfy conditions (A1)-(A4). As an example, a lattice overlap scalar boson model is discussed. We find that the overlap scalar boson violates reflection positivity.
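For context, the Kallen-Lehmann representation referred to here takes the standard textbook form for a scalar two-point function (background material, not a result of the paper):

```latex
\langle \phi(x)\,\phi(y) \rangle \;=\; \int_0^\infty \mathrm{d}m^2\, \rho(m^2)\, \Delta(x - y;\, m^2), \qquad \rho(m^2) \ge 0,
```

where $\Delta(x - y; m^2)$ is the free two-point function of mass $m$ and $\rho$ is the spectral density whose positivity the paper ties to reflection positivity.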

  20. Two-point Functions at Two Loops in Three Flavour Chiral Perturbation Theory

    CERN Document Server

    Amorós, Gabriel; Bijnens, Johan; Talavera, Pere

    2000-01-01

    The vector and axial-vector two-point functions are calculated to next-to-next-to-leading order in Chiral Perturbation Theory for three light flavours. We also obtain expressions at the same order for the masses, $m_\pi^2$, $m_K^2$ and $m_\eta^2$, and the decay constants, $F_\pi$, $F_K$ and $F_\eta$. We present some numerical results after a simple resonance estimate of some of the new ${\cal O}(p^6)$ constants.

  1. Exact relation with two-point correlation functions and phenomenological approach for compressible magnetohydrodynamic turbulence.

    Science.gov (United States)

    Banerjee, Supratik; Galtier, Sébastien

    2013-01-01

    Compressible isothermal magnetohydrodynamic turbulence is analyzed under the assumption of statistical homogeneity and in the asymptotic limit of large kinetic and magnetic Reynolds numbers. Following Kolmogorov, we derive an exact relation for some two-point correlation functions which generalizes the expression recently found for hydrodynamics. We show that the magnetic field brings new source and flux terms into the dynamics which may act in the inertial range as a source or a sink for the mean energy transfer rate. The introduction of a uniform magnetic field significantly simplifies the exact relation, for which a simple phenomenology may be given. A prediction for axisymmetric energy spectra is eventually proposed.

  2. Two-point correlators revisited: Fast and slow scales in multifield models of Inflation

    CERN Document Server

    Ghersi, José T Gálvez

    2016-01-01

    We study the structure of two-point correlators of the inflationary field fluctuations in order to improve the accuracy and efficiency of the existing spectral methods. We present a description motivated by the separation of the fast and slow evolving components of the spectrum. Our purpose is to rephrase all the relevant equations of motion in terms of slowly varying quantities. This is important in order to consider the contribution from high-frequency modes to the spectrum without affecting computational performance. The slow-roll approximation is not required to reproduce the main distinctive features in the power spectrum for each specific model of inflation.

  3. Beyond Kaiser bias: mildly non-linear two-point statistics of densities in distant spheres

    CERN Document Server

    Uhlemann, C; Kim, J; Pichon, C; Bernardeau, F; Pogosyan, D; Park, C; L'Huillier, B

    2016-01-01

    Simple parameter-free analytic bias functions for the two-point correlation of densities in spheres at large separation are presented. These bias functions generalize the so-called Kaiser bias to the mildly non-linear regime for arbitrary density contrasts. The derivation is carried out in the context of large-deviation statistics while relying on the spherical collapse model. A logarithmic transformation provides a saddle approximation which is valid for the whole range of densities and is shown to be accurate against the 30 Gpc cube state-of-the-art Horizon Run 4 simulation. Special configurations of two concentric spheres that allow one to identify peaks are employed to obtain the conditional bias and a proxy for the BBKS extremum correlation functions. These analytic bias functions should be used jointly with extended perturbation theory to predict two-point clustering statistics as they capture the non-linear regime of structure formation at the percent level down to scales of about 10 Mpc/h at redshift 0. Conversely...

  4. Solving Directly Two Point Non Linear Boundary Value Problems Using Direct Adams Moulton Method

    Directory of Open Access Journals (Sweden)

    Zanariah A. Majid

    2011-01-01

    Full Text Available Problem statement: In this study, a direct method of Adams-Moulton type was developed for solving nonlinear two-point Boundary Value Problems (BVPs) directly. Most existing research on BVPs reduces the problem to a system of first-order Ordinary Differential Equations (ODEs). This approach is well established, but it obviously enlarges the system of first-order equations. In contrast, the direct method in this research solves second-order BVPs directly without reducing them to first-order ODEs. Approach: Lagrange interpolation polynomials were applied in the derivation of the proposed method. The method was implemented with constant step size via the shooting technique in order to determine the approximate solutions. The shooting technique employs Newton's method to check the convergence of the guessed values for the next iteration. Results: Numerical results confirmed that the direct method gave better accuracy and converged faster than the existing method. Conclusion: The proposed direct method is suitable for solving two-point nonlinear boundary value problems.
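The shooting technique sketched in the abstract (integrate an initial-value problem, then iterate on the unknown initial slope until the far boundary condition is met) can be illustrated with RK4 and a secant iteration standing in for Newton's method. This is a generic construction on a classical test problem, not the paper's direct Adams-Moulton code:

```python
# Shooting method for a nonlinear two-point BVP (generic RK4 + secant
# sketch).  Test problem: y'' = 1.5*y**2 with y(0) = 4, y(1) = 1, whose
# classical solution y = 4/(1+x)**2 has initial slope y'(0) = -8.

def integrate(s, n=200):
    """RK4 for y'' = 1.5 y^2 from x=0 with y=4, y'=s; returns y(1)."""
    f = lambda y, yp: (yp, 1.5 * y * y)
    y, yp, h = 4.0, s, 1.0 / n
    for _ in range(n):
        k1 = f(y, yp)
        k2 = f(y + 0.5 * h * k1[0], yp + 0.5 * h * k1[1])
        k3 = f(y + 0.5 * h * k2[0], yp + 0.5 * h * k2[1])
        k4 = f(y + h * k3[0], yp + h * k3[1])
        y += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        yp += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return y

def shoot(target=1.0, s0=-7.0, s1=-9.0, tol=1e-12):
    """Secant iteration on the initial slope (a finite-difference Newton)."""
    f0, f1 = integrate(s0) - target, integrate(s1) - target
    for _ in range(50):
        if abs(f1 - f0) < 1e-15:
            break
        s0, s1, f0 = s1, s1 - f1 * (s1 - s0) / (f1 - f0), f1
        f1 = integrate(s1) - target
        if abs(f1) < tol:
            break
    return s1

print(round(shoot(), 4))   # close to -8.0
```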

  5. Statistics of the two-point cross-covariance function of solar oscillations

    Science.gov (United States)

    Nagashima, Kaori; Sekii, Takashi; Gizon, Laurent; Birch, Aaron C.

    2016-09-01

    Context. The cross-covariance of solar oscillations observed at pairs of points on the solar surface is a fundamental ingredient in time-distance helioseismology. Wave travel times are extracted from the cross-covariance function and are used to infer the physical conditions in the solar interior. Aims: Understanding the statistics of the two-point cross-covariance function is a necessary step towards optimizing the measurement of travel times. Methods: By modeling stochastic solar oscillations, we evaluate the variance of the cross-covariance function as a function of time-lag and distance between the two points. Results: We show that the variance of the cross-covariance is independent of both time-lag and distance in the far field, that is, when they are large compared to the coherence scales of the solar oscillations. Conclusions: The constant noise level for the cross-covariance means that the signal-to-noise ratio for the cross-covariance is proportional to the amplitude of the expectation value of the cross-covariance. This observation is important for planning data analysis efforts.
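
    The basic quantity under study, the cross-covariance of two signals at a given time lag, can be estimated as below. This is an illustrative minimal sketch, not the authors' pipeline; the function name and the toy series are hypothetical.

```python
# Minimal estimator of the two-point cross-covariance
# C(tau) = < [x(t) - <x>] [y(t + tau) - <y>] >
# from which travel times are measured in time-distance helioseismology.

def cross_covariance(x, y, tau):
    """Cross-covariance of equal-length series x, y at integer lag tau >= 0."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    pairs = [(x[t] - mx) * (y[t + tau] - my) for t in range(n - tau)]
    return sum(pairs) / (n - tau)
```

    In practice x and y would be oscillation signals at two surface points, and C would be averaged over many realizations before travel times are fitted.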

  6. Two-point concrete resistivity measurements: interfacial phenomena at the electrode-concrete contact zone

    Science.gov (United States)

    McCarter, W. J.; Taha, H. M.; Suryanto, B.; Starrs, G.

    2015-08-01

    AC impedance spectroscopy measurements are used to critically examine the end-to-end (two-point) testing technique employed in evaluating the bulk electrical resistivity of concrete. In particular, this paper focusses on the interfacial contact region between the electrode and specimen and the influence of contacting medium and measurement frequency on the impedance response. Two-point and four-point electrode configurations were compared, and modelling of the impedance response was undertaken to identify and quantify the contribution of the electrode-specimen contact region to the measured impedance. Measurements are presented in both Bode and Nyquist formats to aid interpretation. Concrete mixes conforming to BS EN 206-1 and BS 8500-1 were investigated, including concretes containing the supplementary cementitious materials fly ash and ground granulated blast-furnace slag. A measurement protocol is presented for the end-to-end technique in terms of test frequency and electrode-specimen contacting medium in order to minimize electrode-specimen interfacial effects and ensure correct measurement of the bulk resistivity.

  8. New Middle Permian palaeopteran insects from Lodève Basin in southern France (Ephemeroptera, Diaphanopterodea, Megasecoptera)

    Directory of Open Access Journals (Sweden)

    Jakub Prokop

    2011-09-01

    Full Text Available Three new palaeopteran insects are described from the Middle Permian (Guadalupian) Salagou Formation in the Lodève Basin (southern France), viz. the diaphanopterodean Alexrasnitsyniidae fam. n., based on Alexrasnitsynia permiana gen. et sp. n., the Parelmoidae Permelmoa magnifica gen. et sp. n., and Lodevohymen lapeyriei gen. et sp. n. (in Megasecoptera or Diaphanopterodea, family undetermined). In addition, the first record of mayflies attributed to the family Syntonopteridae (Ephemeroptera) is reported. These new fossils clearly demonstrate that the present knowledge of Permian insects remains very incomplete. They also confirm that the Lodève entomofauna was highly diverse, providing links to other Permian localities, and also rather unique, with several families still not recorded from other contemporaneous outcrops.

  9. Logarithmic two-point correlation functions from a z=2 Lifshitz model

    Energy Technology Data Exchange (ETDEWEB)

    Zingg, T. [Institute for Theoretical Physics and Spinoza Institute, Universiteit Utrecht,Leuvenlaan 4, 3584 CE Utrecht (Netherlands)

    2014-01-21

    The Einstein-Proca action is known to have asymptotically locally Lifshitz spacetimes as classical solutions. For dynamical exponent z=2, two-point correlation functions for fluctuations around such a geometry are derived analytically. It is found that the retarded correlators are stable in the sense that all quasinormal modes are situated in the lower half-plane of complex frequencies. Correlators in the longitudinal channel exhibit features that are reminiscent of a structure usually obtained in field theories that are logarithmic, i.e. contain an indecomposable but non-diagonalizable highest weight representation. This provides further evidence for conjecturing the model at hand as a candidate for a gravity dual of a logarithmic field theory with anisotropic scaling symmetry.

  10. Inverted catenoid as a fluid membrane with two points pulled together.

    Science.gov (United States)

    Castro-Villarreal, Pavel; Guven, Jemal

    2007-07-01

    Under inversion in any (interior) point, a catenoid transforms into a deflated compact geometry which touches at two points (its poles). The catenoid is a minimal surface and, as such, is an equilibrium shape of a symmetric fluid membrane. The conformal symmetry of the Hamiltonian implies that inverted minimal surfaces are also equilibrium shapes. However, they will exhibit curvature singularities at their poles. Such singularities are the geometrical signature of the external forces required to pull the poles together. These forces will set up stresses in the inverted shapes. Tuning the force corresponds geometrically to the translation of the point of inversion. For any fixed surface area, there will be a maximum force. The associated shape is a symmetric discocyte. Lowering the external force will induce a transition from the discocyte to a cup-shaped stomatocyte.

  11. An improved iterative technique for solving nonlinear doubly singular two-point boundary value problems

    Science.gov (United States)

    Roul, Pradip

    2016-06-01

    This paper presents a new iterative technique for solving nonlinear singular two-point boundary value problems with Neumann and Robin boundary conditions. The method is based on the homotopy perturbation method and the integral equation formalism in which a recursive scheme is established for the components of the approximate series solution. This method does not involve solution of a sequence of nonlinear algebraic or transcendental equations for the unknown coefficients as in some other iterative techniques developed for singular boundary value problems. The convergence result for the proposed method is established in the paper. The method is illustrated by four numerical examples, two of which have physical significance: The first problem is an application of the reaction-diffusion process in a porous spherical catalyst and the second problem arises in the study of steady-state oxygen-diffusion in a spherical cell with Michaelis-Menten uptake kinetics.

  12. A High Performance Spread Spectrum Clock Generator Using Two-Point Modulation Scheme

    Science.gov (United States)

    Kao, Yao-Huang; Hsieh, Yi-Bin

    A new spread spectrum clock generator (SSCG) using two-point delta-sigma modulation is presented in this paper. Not only is the divider varied, but the voltage-controlled oscillator is also modulated. This technique enhances the modulation bandwidth so that EMI suppression is improved with a lower order ΣΔ modulator, and it can simultaneously optimize the jitter and the modulation profile. In addition, a two-path method is applied to the loop filter to reduce the capacitance value so that full integration can be achieved. The proposed SSCG has been fabricated in a 0.35 μm CMOS process. A clock of 400 MHz with center spread ratios of 1.25% and 2.5% is verified. The peak EMI reduction is 19.73 dB for the 2.5% case. The chip area is 0.90×0.89 mm².

  13. Implementation of the Two-Point Angular Correlation Function on a High-Performance Reconfigurable Computer

    Directory of Open Access Journals (Sweden)

    Volodymyr V. Kindratenko

    2009-01-01

    Full Text Available We present a parallel implementation of an algorithm for calculating the two-point angular correlation function as applied in the field of computational cosmology. The algorithm has been specifically developed for a reconfigurable computer. Our implementation utilizes a microprocessor and two reconfigurable processors on a dual-MAP SRC-6 system. The two reconfigurable processors are used as two application-specific co-processors. Two independent computational kernels are simultaneously executed on the reconfigurable processors while data pre-fetching from disk and initial data pre-processing are executed on the microprocessor. The overall end-to-end algorithm execution speedup achieved by this implementation is over 90× as compared to a sequential implementation of the algorithm executed on a single 2.8 GHz Intel Xeon microprocessor.
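
    The computational kernel being accelerated here is, at heart, a pair-counting histogram over angular separations. The serial sketch below is illustrative only (toy catalog, hypothetical function names), not the SRC-6 implementation.

```python
# Brute-force DD pair counting for the two-point angular correlation function:
# histogram the great-circle separations of all point pairs on the sphere.
from math import acos, sin, cos, radians

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle angle (radians) between two sky positions given in degrees."""
    a = (sin(radians(dec1)) * sin(radians(dec2)) +
         cos(radians(dec1)) * cos(radians(dec2)) * cos(radians(ra1 - ra2)))
    return acos(max(-1.0, min(1.0, a)))  # clamp against rounding

def dd_histogram(points, bin_edges):
    """Count pairs whose separation falls in each [edge_b, edge_{b+1}) bin."""
    counts = [0] * (len(bin_edges) - 1)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            theta = angular_separation(*points[i], *points[j])
            for b in range(len(counts)):
                if bin_edges[b] <= theta < bin_edges[b + 1]:
                    counts[b] += 1
                    break
    return counts
```

    The O(N^2) double loop is exactly the part that benefits from the two application-specific co-processors described in the abstract, since each pair can be processed independently.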

  14. Two-point gauge invariant quark Green's functions with polygonal phase factor lines

    CERN Document Server

    Sazdjian, H

    2013-01-01

    Polygonal lines are used for the paths of the gluon field phase factors entering in the definition of gauge invariant quark Green's functions. This allows classification of the Green's functions according to the number of segments the polygonal lines contain. Functional relations are established between Green's functions with polygonal lines with different numbers of segments. An integrodifferential equation is obtained for the quark two-point Green's function with a path along a single straight line segment where the kernels are represented by a series of Wilson loop averages along polygonal contours. The equation is exactly and analytically solved in the case of two-dimensional QCD in the large-$N_c$ limit. The solution displays generation of an infinite number of dynamical quark masses accompanied with branch point singularities that are stronger than simple poles. An approximation scheme, based on the counting of functional derivatives of Wilson loops, is proposed for the resolution of the equation in fou...

  15. Applying inversion to construct planar, rational spirals that satisfy two-point G(2) Hermite data

    CERN Document Server

    Kurnosenko, A

    2010-01-01

    A method of two-point G(2) Hermite interpolation with spirals is proposed. To construct the sought-for curve, inversion is applied to an arc of some other spiral. To illustrate the method, inversions of a parabola are considered in detail. The resulting curve is a rational curve of degree 4. The method allows the matching of a wide range of boundary conditions, including those which require an inflection. Although not all G(2) Hermite data can be matched with a spiral generated from a parabolic arc, introducing one intermediate G(2) data point solves the problem. Expanding the method by involving arcs of other spirals is also discussed.
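
    The underlying inversion map is elementary; a small sketch follows (illustrative parameter names; the paper of course applies the map to whole spiral arcs rather than single points). Inversion about a circle with center c and radius k sends a point p to c + k^2 (p - c) / |p - c|^2.

```python
# Planar inversion about the circle of center c and radius k.
# Inversion is an involution: applying it twice returns the original point.

def invert(p, c, k):
    dx, dy = p[0] - c[0], p[1] - c[1]
    d2 = dx * dx + dy * dy  # |p - c|^2 (p must not coincide with c)
    s = k * k / d2
    return (c[0] + s * dx, c[1] + s * dy)
```

    Because inversion is conformal, it preserves the angle data of the G(2) Hermite problem while reshaping the curve, which is what makes a parabolic arc a usable template.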

  17. Asymptotic behaviour of two-point functions in multi-species models

    Science.gov (United States)

    Kozlowski, Karol K.; Ragoucy, Eric

    2016-05-01

    We extract the long-distance asymptotic behaviour of two-point correlation functions in massless quantum integrable models containing multi-species excitations. For such a purpose, we extend to these models the method of a large-distance regime re-summation of the form factor expansion of correlation functions. The key feature of our analysis is a technical hypothesis on the large-volume behaviour of the form factors of local operators in such models. We check the validity of this hypothesis on the example of the SU(3)-invariant XXX magnet by means of the determinant representations for the form factors of local operators in this model. Our approach confirms the structure of the critical exponents obtained previously for numerous models solvable by the nested Bethe Ansatz.

  18. Expansion schemes for gravitational clustering: computing two-point and three-point functions

    CERN Document Server

    Valageas, P

    2007-01-01

    We describe various expansion schemes that can be used to study gravitational clustering. Obtained from the equations of motion or their path-integral formulation, they provide several perturbative expansions that are organized in different fashion or involve different partial resummations. We focus on the two-point and three-point correlation functions, but these methods also apply to all higher-order correlation and response functions. We present the general formalism, which holds for the gravitational dynamics as well as for similar models, such as the Zeldovich dynamics, that obey similar hydrodynamical equations of motion with a quadratic nonlinearity. We give our explicit analytical results up to one-loop order for the simpler Zeldovich dynamics. For the gravitational dynamics, we compare our one-loop numerical results with numerical simulations. We check that the standard perturbation theory is recovered from the path integral by expanding over Feynman's diagrams. However, the latter expansion is organ...

  19. Quantization of fluctuations in DSR: the two-point function and beyond

    CERN Document Server

    Gubitosi, Giulia; Magueijo, Joao

    2015-01-01

    We show that the two-point function of a quantum field theory with de Sitter momentum space (herein called DSR) can be expressed as the product of a standard delta function and an energy-dependent factor. This is a highly non-trivial technical result in any theory without a preferred frame. Applied to models exhibiting running of the dimensionality of space, this result is essential in proving that vacuum fluctuations are generally scale-invariant at high energies whenever there is running to two dimensions. This is equally true for theories with and without a preferred frame, with differences arising only as we consider higher order correlators. Specifically, the three-point function of DSR has a unique structure of "open triangles", as shown here.

  20. Two-point resistance of a resistor network embedded on a globe.

    Science.gov (United States)

    Tan, Zhi-Zhong; Essam, J W; Wu, F Y

    2014-07-01

    We consider the problem of two-point resistance in an (m-1) × n resistor network embedded on a globe, a geometry topologically equivalent to an m × n cobweb with its boundary collapsed into one single point. We deduce a concise formula for the resistance between any two nodes on the globe using a method of direct summation pioneered by one of us [Z.-Z. Tan, L. Zhou, and J. H. Yang, J. Phys. A: Math. Theor. 46, 195202 (2013)]. This method is contrasted with the Laplacian matrix approach formulated also by one of us [F. Y. Wu, J. Phys. A: Math. Gen. 37, 6653 (2004)], which is difficult to apply to the geometry of a globe. Our analysis gives the result in the form of a single summation.
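
    As a cross-check on closed-form results of this kind, the two-point resistance of any finite network can be computed numerically via the Laplacian matrix approach mentioned in the abstract: ground node b, solve the reduced Kirchhoff system for a unit current injected at a, and read off the potential at a. The sketch below is illustrative (dependency-free Gaussian elimination), not the authors' summation method; the triangle test network is hypothetical.

```python
# Two-point resistance from the grounded network Laplacian: L' v = e_a, R_ab = v_a.

def two_point_resistance(n, edges, a, b):
    """edges: list of (i, j, conductance); returns resistance between nodes a, b."""
    # Build the n x n Laplacian from the conductances.
    L = [[0.0] * n for _ in range(n)]
    for i, j, g in edges:
        L[i][i] += g
        L[j][j] += g
        L[i][j] -= g
        L[j][i] -= g
    # Ground node b: delete its row and column.
    keep = [k for k in range(n) if k != b]
    A = [[L[r][c] for c in keep] for r in keep]
    rhs = [1.0 if k == a else 0.0 for k in keep]  # unit current into node a
    # Solve A v = rhs by Gaussian elimination with partial pivoting.
    m = len(A)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    v = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = rhs[r] - sum(A[r][c] * v[c] for c in range(r + 1, m))
        v[r] = s / A[r][r]
    return v[keep.index(a)]
```

    For a triangle of three 1-ohm resistors, the resistance between any two nodes is 1 ohm in parallel with 2 ohms, i.e. 2/3 ohm, which the routine reproduces.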

  1. Measuring baryon acoustic oscillations with angular two-point correlation function

    CERN Document Server

    Alcaniz, Jailson S; Bernui, Armando; Carvalho, Joel C; Benetti, Micol

    2016-01-01

    The Baryon Acoustic Oscillations (BAO) imprinted a characteristic correlation length in the large-scale structure of the universe that can be used as a standard ruler for mapping out the cosmic expansion history. Here, we discuss the application of the angular two-point correlation function, $w(\\theta)$, to a sample of luminous red galaxies of the Sloan Digital Sky Survey (SDSS) and derive two new measurements of the BAO angular scale at $z = 0.235$ and $z = 0.365$. Since noise and systematics may hinder the identification of the BAO signature in the $w - \\theta$ plane, we also introduce a potential new method to localize the acoustic bump in a model-independent way. We use these new measurements along with previous data to constrain cosmological parameters of dark energy models and to derive a new estimate of the acoustic scale $r_s$.
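
    A standard way to estimate $w(\theta)$ from a data catalog ($D$) and a matching random catalog ($R$), commonly used in BAO analyses of this kind (the excerpt does not specify the paper's exact estimator choice), is the Landy-Szalay estimator:

```latex
w(\theta) \;=\; \frac{DD(\theta) \,-\, 2\,DR(\theta) \,+\, RR(\theta)}{RR(\theta)}
```

    where $DD$, $DR$ and $RR$ are the normalized counts of data-data, data-random and random-random pairs with angular separation $\theta$.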

  2. Two-point paraxial traveltime formula for inhomogeneous isotropic and anisotropic media: Tests of accuracy

    KAUST Repository

    Waheed, Umair bin

    2013-09-01

    On several simple models of isotropic and anisotropic media, we have studied the accuracy of the two-point paraxial traveltime formula designed for the approximate calculation of the traveltime between points S' and R' located in the vicinity of points S and R on a reference ray. The reference ray may be situated in a 3D inhomogeneous isotropic or anisotropic medium with or without smooth curved interfaces. The two-point paraxial traveltime formula has the form of the Taylor expansion of the two-point traveltime with respect to spatial Cartesian coordinates up to quadratic terms at points S and R on the reference ray. The constant term and the coefficients of the linear and quadratic terms are determined from quantities obtained from ray tracing and linear dynamic ray tracing along the reference ray. The use of linear dynamic ray tracing allows the evaluation of the quadratic terms in arbitrarily inhomogeneous media and, as shown by examples, it extends the region of accurate results around the reference ray between S and R (and even outside this interval) beyond that obtained with the linear terms only. Although the formula may be used for very general 3D models, we concentrated on simple 2D models of smoothly inhomogeneous isotropic and anisotropic (~8% and ~20% anisotropy) media only. On tests, in which we estimated two-point traveltimes between a shifted source and a system of shifted receivers, we found that the formula may yield more accurate results than the numerical solution of an eikonal-based differential equation. The tests also indicated that the accuracy of the formula depends primarily on the length and the curvature of the reference ray and only weakly depends on anisotropy. The greater the curvature of the reference ray, the narrower the vicinity in which the formula yields accurate results.
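
    The structure of the formula described above is that of a second-order Taylor expansion in the source and receiver perturbations $\delta\mathbf{x}_S = \mathbf{x}_{S'} - \mathbf{x}_S$ and $\delta\mathbf{x}_R = \mathbf{x}_{R'} - \mathbf{x}_R$; schematically (the coefficient notation here is generic, not the authors'):

```latex
T(S',R') \;\approx\; T(S,R)
\,-\, \mathbf{p}_S^{\mathsf T}\,\delta\mathbf{x}_S
\,+\, \mathbf{p}_R^{\mathsf T}\,\delta\mathbf{x}_R
\,+\, \tfrac{1}{2}\,\delta\mathbf{x}_S^{\mathsf T}\mathbf{M}_{SS}\,\delta\mathbf{x}_S
\,+\, \delta\mathbf{x}_S^{\mathsf T}\mathbf{M}_{SR}\,\delta\mathbf{x}_R
\,+\, \tfrac{1}{2}\,\delta\mathbf{x}_R^{\mathsf T}\mathbf{M}_{RR}\,\delta\mathbf{x}_R
```

    where $\mathbf{p}_S$ and $\mathbf{p}_R$ are the slowness vectors at S and R (obtained from ray tracing) and the $\mathbf{M}$ matrices collect the second derivatives of the traveltime (obtained from linear dynamic ray tracing).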

  3. Apgar score

    Science.gov (United States)

    Apgar score (medlineplus.gov/ency/article/003402.htm): a test performed on a newborn shortly after birth. Virginia Apgar, MD (1909-1974) introduced the Apgar score in 1952.

  4. Two-point L1 shortest path queries in the plane

    Directory of Open Access Journals (Sweden)

    Danny Z. Chen

    2016-12-01

    Full Text Available Let $P$ be a set of $h$ pairwise-disjoint polygonal obstacles with a total of $n$ vertices in the plane. We consider the problem of building a data structure that can quickly compute an $L_1$ shortest obstacle-avoiding path between any two query points $s$ and $t$. Previously, a data structure of size $O(n^2\log n)$ was constructed in $O(n^2\log^2 n)$ time that answers each two-point query in $O(\log^2 n+k)$ time, i.e., the shortest path length is reported in $O(\log^2 n)$ time and an actual path is reported in additional $O(k)$ time, where $k$ is the number of edges of the output path. In this paper, we build a new data structure of size $O(n+h^2 \log h\, 4^{\sqrt{\log h}})$ in $O(n+h^2 \log^{2}h\, 4^{\sqrt{\log h}})$ time that answers each query in $O(\log n+k)$ time. (In contrast, for the Euclidean version of this two-point query problem, the best known algorithm uses $O(n^{11})$ space to achieve an $O(\log n+k)$ query time.) Further, we extend our techniques to the weighted rectilinear version in which the "obstacles" of $P$ are rectilinear regions with "weights" and $L_1$ paths are allowed to travel through them with weighted costs. Previously, a data structure of size $O(n^2\log^2 n)$ was built in $O(n^2\log^2 n)$ time that answers each query in $O(\log^2 n+k)$ time. Our new algorithm answers each query in $O(\log n+k)$ time with a data structure of size $O(n^2 \log n\, 4^{\sqrt{\log n}})$ that is built in $O(n^2 \log^2 n\, 4^{\sqrt{\log n}})$ time.

  5. An Attempt to Derive the epsilon Equation from a Two-Point Closure

    Science.gov (United States)

    Canuto, V. M.; Cheng, Y.; Howard, A. M.

    2010-01-01

    The goal of this paper is to derive the equation for the turbulence dissipation rate epsilon for a shear-driven flow. In 1961, Davydov used a one-point closure model to derive the epsilon equation from first principles, but the final result contained undetermined terms and thus lacked predictive power. Both in 1987 and in 2001, attempts were made to derive the epsilon equation from first principles using a two-point closure, but their methods relied on a phenomenological assumption. The standard practice has thus been to employ a heuristic form of the equation that contains three empirical ingredients: two constants, c(sub 1 epsilon) and c(sub 2 epsilon), and a diffusion term D(sub epsilon). In this work, a two-point closure is employed, yielding the following results: 1) the empirical constants get replaced by c(sub 1), c(sub 2), which are now functions of Kappa and epsilon; 2) c(sub 1) and c(sub 2) are not independent, because a general relation between the two, valid for any Kappa and epsilon, is derived; 3) c(sub 1), c(sub 2) become constant with values close to the empirical values c(sub 1 epsilon), c(sub 2 epsilon) (i.e., homogeneous flows); and 4) the empirical form of the diffusion term D(sub epsilon) is no longer needed because it gets substituted by the Kappa-epsilon dependence of c(sub 1), c(sub 2), which plays the role of the diffusion, together with the diffusion of the turbulent kinetic energy D(sub Kappa), which now enters the new equation (i.e., inhomogeneous flows). Thus, the three empirical ingredients c(sub 1 epsilon), c(sub 2 epsilon), D(sub epsilon) are replaced by a single function c(sub 1)(Kappa, epsilon) or c(sub 2)(Kappa, epsilon), plus a D(sub Kappa) term. Three tests of the new equation for epsilon are presented: one concerning channel flow and two concerning the shear-driven planetary boundary layer (PBL).
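
    For reference, the heuristic dissipation equation that the paper sets out to replace is commonly written (in standard K-epsilon notation, with $P$ the shear production, $\nu_t$ the eddy viscosity and $\sigma_\varepsilon$ an empirical Prandtl number; Kappa in the abstract corresponds to $K$ here) as:

```latex
\frac{D\varepsilon}{Dt}
\;=\; c_{1\varepsilon}\,\frac{\varepsilon}{K}\,P
\;-\; c_{2\varepsilon}\,\frac{\varepsilon^{2}}{K}
\;+\; D_\varepsilon,
\qquad
D_\varepsilon \;=\; \frac{\partial}{\partial x_j}\!\left(\frac{\nu_t}{\sigma_\varepsilon}\,\frac{\partial \varepsilon}{\partial x_j}\right)
```

    The paper's contribution is to replace the empirical $c_{1\varepsilon}$, $c_{2\varepsilon}$ and $D_\varepsilon$ by functions derived from a two-point closure.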

  6. Covariant and infrared-free graviton two-point function in de Sitter spacetime II

    CERN Document Server

    Pejhan, Hamed

    2016-01-01

    The solution to the linearized Einstein equation in de Sitter (dS) spacetime and the corresponding two-point function are explicitly written down in a gauge with two parameters `$a$' and `$b$'. The quantization procedure, independent of the choice of the coordinate system, is based on a rigorous group theoretical approach. Our result takes the form of a universal spin-two (transverse-traceless) sector and a gauge-dependent spin-zero (pure-trace) sector. Scalar equations are derived for the structure functions of each part. We show that the spin-two sector can be written as the resulting action of a second-order differential operator (the spin-two projector) on a massless minimally coupled scalar field (the spin-two structure function). The operator plays the role of a symmetric rank-$2$ polarization tensor and has a spacetime dependence. The calculated spin-two projector grows logarithmically with distance, and no dS-invariant solution for either structure function exists. We show that the logarithmically...

  7. Possible Complications of Ureteroscopy in Modern Endourological Era: Two-Point or “Scabbard” Avulsion

    Directory of Open Access Journals (Sweden)

    Andrius Gaizauskas

    2014-01-01

    Full Text Available Widening indications have made ureteroscopy a worldwide technique, with the expected appearance of multiple types of complications. Severe complications are possible, including ureteral perforation or avulsion. Ureteral avulsion has been described as an upper urinary tract injury related to blunt trauma, especially from traffic accidents, the mechanism of injury being an acute deceleration/acceleration movement. With the advent of endourology, the term is also applied to the extensive degloving injury resulting from stretching of the ureter, which eventually breaks at its most weakened site; ureteral avulsion thus refers to a discontinuation of the full thickness of the ureter. The paper presents a case report and literature review of the two-point or “scabbard” avulsion. The loss of a long segment of the upper ureter, when end-to-end anastomosis is not technically feasible, presents a challenge to the urological surgeon. In the era of small-calibre ureteroscopes and the growing incidence of renal stones, these complications will become more and more relevant. Our message to other urologists is to be aware of such a complication, to know the ways of treatment, and to recognize the ureteroscopic signs indicating when to stop or pay attention.

  8. The Two-Point Correlation Function of Gamma-ray Bursts

    CERN Document Server

    Li, Ming-Hua

    2015-01-01

    In this paper, we examine the spacial distribution of gamma-ray bursts (GRBs) using a sample of 373 objects. We subdivide the GRB data into two redshift intervals over the redshift range $0two-point correlation function (2PCF), $\\xi(r)$ of the GRBs. In determining the separation distance of the GRB pairs, we consider two representative cosmological models: a cold dark matter universe plus a cosmological constant $\\Lambda$, with $(\\Omega_{{\\rm m}}, \\Omega_{{\\rm \\Lambda}})=(0.28,0.72)$ and an Einstein-de Sitter (EdS) universe, with $(\\Omega_{{\\rm m}}, \\Omega_{{\\rm \\Lambda}})=(1,0)$. We find a $z$-decreasing correlation of the GRB distribution, which is in agreement with the predictions of the current structure formation theory. We fit a power-law model $\\xi(r)=(r/r_0)^{-\\gamma}$ to the measured $\\xi(r)$ and obtain an amplitude and slope of $r_0= 1235.2 \\pm 342.6~h^{-1}$ Mpc and $\\gamma = 0.80\\pm 0.19 $ ($1\\sigma$ confidence level) over the scales $r=200$ to $10^4~h^{-1}$ Mpc. Our ...

  9. Exteroceptive aspects of nociception: insights from graphesthesia and two-point discrimination.

    Science.gov (United States)

    Mørch, Carsten Dahl; Andersen, Ole K; Quevedo, Alexandre S; Arendt-Nielsen, Lars; Coghill, Robert C

    2010-10-01

    The exteroceptive capabilities of the nociceptive system have long been thought to be considerably more limited than those of the tactile system. However, most investigations of spatio-temporal aspects of the nociceptive system have largely focused on intensity coding as a consequence of spatial or temporal summation. Graphesthesia, the identification of numbers "written" on the skin, and assessment of the two-point discrimination thresholds were used to compare the exteroceptive capabilities of the tactile and nociceptive systems. Numbers were "written" on the forearm and the abdomen by tactile stimulation and by painful non-contact infrared laser heat stimulation. Subjects performed both graphesthesia tasks better than chance. The tactile graphesthesia tasks were performed with 89% (82-97%) correct responses on the forearm and 86% (79-94%) correct responses on the abdomen. Tactile graphesthesia tasks were significantly better than painful heat graphesthesia tasks that were performed with 31% (23-40%) and 44% (37-51%) correct responses on the forearm and abdomen, respectively. These findings demonstrate that the central nervous system is capable of assembling complex spatio-temporal patterns of nociceptive information from the body surface into unified mental objects with sufficient accuracy to enable behavioral discrimination.

  10. The real space clustering of galaxies in SDSS DR7: I. Two point correlation functions

    CERN Document Server

    Shi, Feng; Wang, Huiyuan; Zhang, Youcai; Mo, H J; Bosch, Frank C van den; Li, Shijie; Liu, Chengze; Lu, Yi; Tweed, Dylan; Yang, Lei

    2016-01-01

    Using a method to correct redshift space distortions (RSD) for individual galaxies, we present measurements of the real space two-point correlation functions (2PCFs) of galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 (DR7). Galaxy groups selected from the SDSS are used as proxies of dark matter halos to correct for the virial motions of galaxies within dark matter halos, and to reconstruct the large-scale velocity field. We use an ensemble of mock catalogs to demonstrate the reliability of our method. Over the range $0.2 < r < 20 h^{-1}{\rm {Mpc}}$, the error in the 2PCF measured directly in reconstructed real space is smaller than the measurement error due to cosmic variance, if the reconstruction uses the correct cosmology. Applying the method to the SDSS DR7, we construct a real space version of the main galaxy catalog, which contains 396,068 galaxies in the North Galactic Cap with redshifts in the range $0.01 \leq z \leq 0.12$. The Sloan Great Wall, the largest known structure in the nearby Universe, is not...

  11. Assessing Performance of Multipurpose Reservoir System Using Two-Point Linear Hedging Rule

    Science.gov (United States)

    Sasireka, K.; Neelakantan, T. R.

    2017-07-01

    Reservoir operation is one of the important fields of water resource management. Innovative techniques in water resource management focus on optimizing the available water and on decreasing the environmental impact of water utilization on the natural environment. In the operation of a multi-reservoir system, efficient regulation of releases to satisfy demands for various purposes such as domestic supply, irrigation and hydropower can increase the benefit from the reservoir as well as significantly reduce the damage due to floods. Hedging rules are one of the emerging techniques in reservoir operation, reducing the severity of drought by accepting a number of smaller shortages. The key objective of this paper is to maximize the minimum power production and improve the reliability of water supply for municipal and irrigation purposes by using a hedging rule. In this paper, a Type II two-point linear hedging rule is applied to improve the operation of the Bargi reservoir in the Narmada basin in India. The results obtained from simulation of the hedging rule are compared with results from the Standard Operating Policy; the comparison shows that the application of the hedging rule significantly improved the reliability of water supply, the reliability of irrigation release, and firm power production.
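
    The general shape of a two-point linear hedging policy can be sketched as below. This is a hedged illustration only: the trigger values, the minimum-release floor, and the function name are hypothetical, not the calibrated rule of the Bargi study.

```python
# Two-point linear hedging sketch: below a lower trigger h1 on available
# water, only a minimum release is made; above an upper trigger h2, full
# demand is met; between the two triggers the release rises linearly.

def two_point_hedging_release(available, demand, h1, h2, minimum):
    if available <= h1:
        return minimum
    if available >= h2:
        return demand
    frac = (available - h1) / (h2 - h1)
    return minimum + frac * (demand - minimum)
```

    Compared with the Standard Operating Policy (release the full demand whenever water is available), the linear ramp spreads an impending deficit over several periods instead of allowing one catastrophic shortage.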

  12. Analysis of errors in the measurement of energy dissipation with two-point LDA

    Energy Technology Data Exchange (ETDEWEB)

    Ducci, A.; Yianneskis, M. [Department of Mechanical Engineering, King's College London, Experimental and Computational Laboratory for the Analysis of Turbulence (ECLAT), London (United Kingdom)

    2005-04-01

    In the present study, an attempt has been made to identify and quantify, with a rigorous analytical approach, all possible sources of error involved in the estimation of the fluctuating velocity gradients (∂u_i/∂x_j)² when a two-point laser Doppler velocimetry (LDV) technique is employed. Measurements were carried out in a grid-generated turbulence flow where the local dissipation rate can be calculated from the decay of kinetic energy. An assessment of the cumulative error determined through the analysis has been made by comparing the values of the spatial gradients measured directly with the gradient estimated from the decay of kinetic energy. The main sources of error were found to be related to the length of the two control volumes and to the fitting range, as well as to the function used to interpolate the correlation coefficient when the Taylor length scale (or (∂u_i/∂x_j)²) is estimated. (orig.)
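
The interpolation step mentioned above can be illustrated numerically: the sketch below fits the parabolic (osculating) approximation f(r) ≈ 1 − r²/λ² to a correlation coefficient at small separations to recover the Taylor length scale λ. The synthetic correlation profile and the fitting range are assumptions for illustration, not data from the experiment.

```python
import numpy as np

# Synthetic "measured" correlation coefficient, built assuming lambda = 2.0
# (arbitrary units), sampled over a small-separation fitting range.
true_lambda = 2.0
r = np.linspace(0.0, 0.5, 20)              # separations within the fitting range
f = 1.0 - r**2 / true_lambda**2            # parabolic correlation profile

# Least-squares fit of (1 - f) against r^2: the slope is 1/lambda^2.
slope = np.polyfit(r**2, 1.0 - f, 1)[0]
lam = 1.0 / np.sqrt(slope)
```

With real data, the choice of fitting range and interpolating function changes the recovered λ, which is exactly the error source the paper quantifies.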

  13. Characterization of mantle convection experiments using two-point correlation functions

    Science.gov (United States)

    Puster, Peter; Jordan, Thomas H.; Hager, Bradford H.

    1995-04-01

    Snapshots of the temperature T(r, phi, t), horizontal flow velocity u(r, phi, t), and radial flow velocity w(r, phi, t) obtained from numerical convection experiments of time-dependent flows in annular cylindrical geometry are taken to be samples of stationary, rotationally invariant random fields. For such a field f(r, phi, t), the spatio-temporal two-point correlation function, C_ff(r, r′, delta, t*), is constructed by averaging over rotational transformations of this ensemble. To assess the structural differences among mantle convection experiments we construct three spatial subfunctions of C_ff(r, r′, delta, t*): the rms variation, sigma_f(r), the radial correlation function, R_f(r, r′), and the angular correlation function, A_f(r, delta). R_f(r, r′) and A_f(r, delta) are symmetric about the loci r = r′ and delta = 0, respectively, where they achieve their maximum value of unity. The falloff of R_f and A_f away from these loci can be quantified by a correlation length rho_f(r) and a correlation angle alpha_f(r), which we define to be the half widths of the central peaks at the correlation level 0.75. The behavior of rho_f is a diagnostic of radial structure, while alpha_f measures average plume width. We have used two-point correlation functions of the temperature field (T-diagnostics) and flow velocity fields (V-diagnostics) to quantify some important aspects of mantle convection experiments. We explore the dependence of different correlation diagnostics on Rayleigh number, internal heating rate, and depth- and temperature-dependent viscosity. For isoviscous flows in an annulus, we show how radial averages of sigma_T, rho_T, and alpha_T scale with Rayleigh number for various internal heating rates. A break in the power-law relationship at the transition from steady to time-dependent regimes is evident for rho_T and alpha_T but
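
The half-width diagnostic defined above can be sketched numerically. The Gaussian correlation profile and its width below are illustrative stand-ins, not output from the convection experiments; the 0.75 level is the one used in the abstract.

```python
import numpy as np

# Correlation profile peaked at delta = 0 with unit maximum (stand-in for A_f).
delta = np.linspace(0.0, 1.0, 2001)          # angular separation (radians)
width = 0.2                                   # assumed Gaussian width
corr = np.exp(-(delta / width) ** 2)

# Correlation angle alpha_f: half width of the central peak at level 0.75,
# taken here as the first separation where the correlation drops below 0.75.
level = 0.75
alpha = delta[np.argmax(corr < level)]
```

For this profile the analytic half width is width·sqrt(ln(4/3)) ≈ 0.107, so the grid-based estimate should land within one grid spacing of that value.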

  14. A NEW METHOD TO CORRECT FOR FIBER COLLISIONS IN GALAXY TWO-POINT STATISTICS

    Energy Technology Data Exchange (ETDEWEB)

    Guo Hong; Zehavi, Idit [Department of Astronomy, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106 (United States); Zheng Zheng [Department of Physics and Astronomy, University of Utah, 115 South 1400 East, Salt Lake City, UT 84112 (United States)

    2012-09-10

    In fiber-fed galaxy redshift surveys, the finite size of the fiber plugs prevents two fibers from being placed too close to one another, limiting the ability to study galaxy clustering on all scales. We present a new method for correcting such fiber collision effects in galaxy clustering statistics based on spectroscopic observations. The target galaxy sample is divided into two distinct populations according to the targeting algorithm of fiber placement, one free of fiber collisions and the other consisting of collided galaxies. The clustering statistics are a combination of the contributions from these two populations. Our method makes use of observations in tile overlap regions to measure the contributions from the collided population, and to therefore recover the full clustering statistics. The method is rooted in solid theoretical ground and is tested extensively on mock galaxy catalogs. We demonstrate that our method can well recover the projected and the full three-dimensional (3D) redshift-space two-point correlation functions (2PCFs) on scales both below and above the fiber collision scale, superior to the commonly used nearest neighbor and angular correction methods. We discuss potential systematic effects in our method. The statistical correction accuracy of our method is only limited by sample variance, which scales down with (the square root of) the volume probed. For a sample similar to the final SDSS-III BOSS galaxy sample, the statistical correction error is expected to be at the level of 1% on scales ~0.1-30 h^{-1} Mpc for the 2PCFs. The systematic error only occurs on small scales, caused by imperfect correction of collision multiplets, and its magnitude is expected to be smaller than 5%. Our correction method, which can be generalized to other clustering statistics as well, enables more accurate measurements of full 3D galaxy clustering on all scales with galaxy redshift surveys.
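
As a minimal illustration of the two-point statistic being corrected, the sketch below estimates a 3D correlation function with the simple natural estimator DD/RR − 1 on uniform random points, for which ξ should scatter around zero. It is not the paper's tile-overlap correction method, and the sample sizes and bins are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_counts(points, edges):
    """Brute-force histogram of pair separations (fine for small samples)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)     # count each pair once
    return np.histogram(d[iu], bins=edges)[0]

# "Data" and "random" catalogs, both uniform in the unit cube.
data = rng.uniform(0.0, 1.0, size=(300, 3))
rand = rng.uniform(0.0, 1.0, size=(600, 3))
edges = np.linspace(0.05, 0.5, 10)

# Natural estimator: normalized data-data counts over random-random counts.
dd = pair_counts(data, edges) / (300 * 299 / 2)
rr = pair_counts(rand, edges) / (600 * 599 / 2)
xi = dd / rr - 1.0
```

Fiber collisions would deplete DD pairs below the collision scale, biasing ξ low on small scales; the paper's method recovers those pairs from tile overlaps.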

  15. Experimental Study of the Convergence of Two-Point Cross-Correlation Toward the Green's Function

    Science.gov (United States)

    Gouedard, P.; Roux, P.; Campillo, M.; Verdel, A.; Campman, X.

    2007-12-01

    It has been shown theoretically by several authors that cross-correlation of the seismic motion recorded at two points can yield the Green's function (GF) between these points. Convergence of cross-correlations toward the GF depends on the source positions and/or the nature of the wavefield. Direct waves from an even distribution of sources can be used to retrieve the GF. On the other hand, in an inhomogeneous medium, recording the diffuse field (coda) is theoretically sufficient to retrieve the GF whatever the source distribution is. Since neither of these two conditions (an even distribution of sources or a perfectly diffuse field) is satisfied in practice, the question of convergence toward the GF has to be investigated with real data. A 3D exploration survey with sources and receivers on a dense grid offers such an opportunity. We used a high-resolution survey recorded by Petroleum Development Oman in North Oman. The data were obtained in a 1×1 km area covered with 1600 geophones located on a 25×25 m cell grid. Records are 4 seconds long. A unique feature of this survey is that the vibrators (working in the 8-120 Hz frequency band) were located on a similar grid shifted with respect to the receiver grid by half a cell (12.5 m) in both directions. This allows us to compare estimated GFs with measured direct waves (GFs) between the geophones. The shallow subsurface is highly heterogeneous and the records include seismic coda. From this dataset, we selected two receiver locations (Ra and Rb) a distance d = 158 m apart. We used different sets of source locations and time windows to compute the cross-correlation between these two receivers. We then compared the derivatives of the correlation functions with the actual GF measured at Rb (resp. Ra) for a source close to Ra (resp. Rb). By doing so, we show the actual influence of source locations and scattering (governed by the records' selected time window) on the signal-to-noise ratio (SNR) of the reconstructed GF.
When using
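
The basic retrieval principle can be sketched with synthetic data: two records of a common random wavefield, offset by a propagation delay, cross-correlate to a peak at that delay. All parameters below are illustrative assumptions, not values from the Oman survey.

```python
import numpy as np

rng = np.random.default_rng(1)

# A common random source recorded at two receivers with a relative lag.
n, delay = 4000, 25                  # number of samples, lag in samples
src = rng.normal(size=n)
rec_a = src
rec_b = np.roll(src, delay)          # same signal, delayed (circular shift)

# The cross-correlation of the two records peaks at the inter-receiver lag,
# which is the travel-time information carried by the Green's function.
corr = np.correlate(rec_b, rec_a, mode="full")
lag = int(np.argmax(corr)) - (n - 1)
```

In field data the peak is buried in noise from uneven source coverage, which is why the SNR of the reconstructed GF depends on the selected sources and time windows.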

  16. Combinación de Valores de Longitud del Día (LOD) según ventanas de frecuencia

    Science.gov (United States)

    Fernández, L. I.; Arias, E. F.; Gambis, D.

    The concept of a combined solution rests on the fact that the different time series of data derived from the various space geodesy techniques are quite dissimilar. The main, easily detectable differences between the series are their different sampling intervals, time spans and quality. The data cover a recent period of 27 months (July 1996-October 1998). We used estimates of the length of day (LOD) produced by 10 operational centers of the IERS (International Earth Rotation Service) from the GPS (Global Positioning System) and SLR (Satellite Laser Ranging) techniques. The combined time series thus obtained was compared with the multi-technique combined EOP (Earth Orientation Parameters) solution derived by the IERS (C04). The noise behavior in LOD for all techniques was found to be frequency dependent (Vondrak, 1998). For this reason, the data series were divided into frequency windows after removing biases and trends. Different weight factors were then assigned to each window, discriminating by technique. Finally, these partially combined solutions were merged to obtain the final combined solution. We know that the best combined solution will have a precision lower than that of the data time series from which it originates. Even so, the importance of a reliable combined EOP series, that is, one of acceptable precision and free of evident systematic errors, lies in the need for a reference EOP database for the study of geophysical phenomena that cause variations in the Earth's rotation.

  17. Using Parameters of Dynamic Pulse Function for 3d Modeling in LOD3 Based on Random Textures

    Science.gov (United States)

    Alizadehashrafi, B.

    2015-12-01

    The pulse function (PF) is a procedural preprocessing technique to generate a computerized virtual photo of a façade within a fixed-size square (Alizadehashrafi et al., 2009, Musliman et al., 2010). The dynamic pulse function (DPF) is an enhanced version of the PF which creates the final photo proportional to the real geometry. This avoids distortion when the computerized photo is projected onto the generated 3D model (Alizadehashrafi and Rahman, 2013). The challenge addressed in this paper is obtaining a 3D model in LoD3 rather than LoD2. In the DPF-based technique, the geometries of the windows and doors are saved in an XML file schema which has no connection with the 3D model in LoD2 and CityGML format. In this research, the parameters of dynamic pulse functions are utilized via the Ruby programming language in Trimble SketchUp to generate the windows and doors (exact position and depth) automatically in LoD3, based on the same concept as the DPF. The advantage of this technique is the automatic generation of a huge number of similar geometries, e.g. windows, by utilizing the DPF parameters along with defined entities and window layers. If the SKP file is converted to CityGML via FME software or CityGML plugins, the 3D model contains the semantic database of the entities and window layers, which can connect the CityGML to MySQL (Alizadehashrafi and Baig, 2014). The concept behind the DPF is to use logical operations to project the texture onto the background image, dynamically proportional to the real geometry. The projection is based on two dynamic pulses, one vertical and one horizontal, starting from the upper-left corner of the background wall and running in the down and right directions, respectively, in the image coordinate system. A logical one/zero at the intersections of the vertical and horizontal dynamic pulses projects/does not project the texture onto the background image. It is possible to define

  18. Score Correlation

    OpenAIRE

    Fabián, Z. (Zdeněk)

    2010-01-01

    In this paper, we study a distribution-dependent correlation coefficient based on the concept of scalar score. This new measure of association of continuous random variables is compared by means of simulation experiments with the Pearson, Kendall and Spearman correlation coefficients.
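
For the baseline coefficients the paper compares against, here is a numpy-only sketch of Pearson's r and Spearman's rho (Pearson on ranks, assuming no ties) applied to a monotone but nonlinear relation. The scalar-score coefficient introduced in the paper is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(2)

def rank(a):
    """Ranks 0..n-1 of a 1D array (assumes no ties)."""
    order = a.argsort()
    r = np.empty(len(a))
    r[order] = np.arange(len(a))
    return r

def pearson(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman(x, y):
    # Spearman's rho is Pearson's r computed on the ranks.
    return pearson(rank(x), rank(y))

x = rng.normal(size=500)
y = np.exp(x)            # monotone but nonlinear relation
```

On such data the rank-based coefficient reaches 1 while Pearson's r stays below it, which is the kind of behavior simulation comparisons of association measures probe.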

  19. Two-point discrimination of the upper extremities of healthy Koreans in their 20’s

    Science.gov (United States)

    Koo, Ja-Pung; Kim, Soon-Hee; An, Ho-Jung; Moon, Ok-Gon; Choi, Jung-Hyun; Yun, Young-Dae; Park, Joo-Hyun; Min, Kyoung-Ok

    2016-01-01

    [Purpose] The present study attempted to measure two-point discrimination in the upper extremities of healthy Koreans in their 20’s. [Subjects and Methods] Using a three-point esthesiometer, we conducted an experiment with a group of 256 college students (128 male and 128 female), attending N University in Chonan, Republic of Korea. [Results] Females showed two-point discrimination at a shorter distance than males at the following points: (i) 5 cm above the elbow joint, the middle part, and 5 cm below the shoulder joint of the anterior upper arm; (ii) 5 cm above the elbow joint and 5 cm below the shoulder joint of the posterior upper arm; (iii) 5 cm above the front of the wrist joint of the forearm; 5 cm below the elbow joint, the palmar part of the distal interphalangeal joint of the thumb, the dorsal part of the distal interphalangeal joint of the middle and little fingers. It was also found that females showed greater two-point discrimination than males in distal regions rather than proximal regions. [Conclusion] The findings of this study will help establish normal values for two-point discrimination of upper extremities of young Koreans in their 20’s. PMID:27134375

  20. Two-point boundary value problems and exact controllability for several kinds of linear and nonlinear wave equations

    Energy Technology Data Exchange (ETDEWEB)

    Kong Dexing [Department of Mathematics, Zhejiang University, Hangzhou 310027 (China); Sun Qingyou, E-mail: qysun@cms.zju.edu.cn [Center of Mathematical Sciences, Zhejiang University, Hangzhou 310027 (China)

    2011-04-01

    In this paper we introduce some new concepts for second-order hyperbolic equations: the two-point boundary value problem, global exact controllability and exact controllability. For several kinds of important linear and nonlinear wave equations arising from physics and geometry, we prove the existence of smooth solutions of the two-point boundary value problems and show the global exact controllability of these wave equations. In particular, we investigate the two-point boundary value problem for the one-dimensional wave equation defined on a closed curve and prove the existence of a smooth solution, which implies the exact controllability of this kind of wave equation. Furthermore, based on this, we study the two-point boundary value problems for the wave equation defined on a strip with Dirichlet or Neumann boundary conditions and show that the equation still possesses exact controllability in these cases. Finally, as an application, we introduce the hyperbolic curvature flow and obtain a result analogous to the well-known theorem of Gage and Hamilton for the curvature flow of plane curves.

  1. Critical two-point functions and the lace expansion for spread-out high-dimensional percolation and related models

    NARCIS (Netherlands)

    Van der Hofstad, R.; Hara, T.; Slade, G.

    2003-01-01

    We consider spread-out models of self-avoiding walk, bond percolation, lattice trees and bond lattice animals on $\mathbb{Z}^d$, having long finite-range connections, above their upper critical dimensions $d=4$ (self-avoiding walk), $d=6$ (percolation) and $d=8$ (trees and animals). The two-point

  2. EXISTENCE OF POSITIVE SOLUTION TO TWO-POINT BOUNDARY VALUE PROBLEM FOR A SYSTEM OF SECOND ORDER ORDINARY DIFFERENTIAL EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, we consider a two-point boundary value problem for a system of second order ordinary differential equations. Under some conditions, we show the existence of a positive solution to the system of second order ordinary differential equations.

  3. HIGH ACCURACY FINITE VOLUME ELEMENT METHOD FOR TWO-POINT BOUNDARY VALUE PROBLEM OF SECOND ORDER ORDINARY DIFFERENTIAL EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    王同科

    2002-01-01

    In this paper, a high accuracy finite volume element method is presented for the two-point boundary value problem of second order ordinary differential equations, which differs from the high order generalized difference methods. It is proved that the method has an optimal order error estimate O(h^3) in the H1 norm. Finally, two examples show that the method is effective.

  4. On a two-point boundary value problem for second-order differential inclusions on Riemannian manifolds

    Directory of Open Access Journals (Sweden)

    Andrei V. Obukhovskiĭ

    2003-05-01

    We consider second-order differential inclusions on a Riemannian manifold with lower semicontinuous right-hand sides. Several existence theorems for solutions of the two-point boundary value problem are proved; they can be interpreted as controllability of special mechanical systems with control on nonlinear configuration spaces. As an application, a statement of controllability under extreme values of the controlling force is obtained.

  5. Extension of normal values on sensory function for facial areas using clinical tests on touch and two-point discrimination.

    Science.gov (United States)

    Vriens, J P M; van der Glas, H W

    2009-11-01

    The threshold value of a sensory test provides a numerical measure of the sensory function. In order to decide whether a threshold value from an affected site indicates 'abnormal' sensory function, it can be compared with normal values from a healthy control population. The aim of this study was to extend current information on normal values for static light touch and static two-point discrimination for facial sites. Using simple hand-held devices, 95% upper limits of confidence intervals of threshold values were determined for facial sites other than those studied previously and for a large sample of 100 healthy subjects. The MacKinnon-Dellon Disk-Criminator and the Aesthesiometer were used to measure novel normal values of two-point discrimination. As threshold values for two-point discrimination from the Aesthesiometer were similar to those obtained using the Disk-Criminator, the use of the Aesthesiometer might not be indicated. Apart from the Pressure Specified Sensory Device (a device with pressure control), Semmes-Weinstein nylon monofilaments and the Disk-Criminator are useful devices for studying sensory function, in particular under clinical test conditions in which easy and fast application are advantageous.

  6. Rigid internal fixation of zygoma fractures: A comparison of two-point and three-point fixation

    Directory of Open Access Journals (Sweden)

    Parashar Atul

    2007-01-01

    Background: Displaced fractures of the zygomatic bone can result in significant functional and aesthetic sequelae. The treatment must therefore achieve adequate and stable reduction at the fracture sites so as to restore the complex multidimensional relationship of the zygoma to the surrounding craniofacial skeleton. Many experimental biophysical studies have compared the stability of the zygoma after one-, two- and three-point fixation with miniplates. We conducted a prospective clinical study comparing the functional and aesthetic results of two-point and three-point fixation with miniplates in patients with fractures of the zygoma. Materials and Methods: Twenty-two patients with isolated zygomatic fractures over a period of one year were randomly assigned to two-point and three-point fixation groups. The results of fixation were analyzed after three months, including clinical, radiological and photographic evaluation. Results: The three-point fixation group maintained better stability at the fracture sites, resulting in a decreased incidence of dystopia and enophthalmos. This group also had better malar projection and malar height, as measured radiologically, when compared with the two-point fixation group. Conclusion: We recommend three-point rigid fixation of the fractured zygoma after accurate reduction so as to maintain adequate stabilization against masticatory forces during the fracture healing phase.

  7. Establishing an Appropriate Level of Detail (LoD) for a Building Information Model (BIM) - West Block, Parliament Hill, Ottawa, Canada

    Science.gov (United States)

    Fai, S.; Rafeiro, J.

    2014-05-01

    In 2011, Public Works and Government Services Canada (PWGSC) embarked on a comprehensive rehabilitation of the historically significant West Block of Canada's Parliament Hill. With over 17 thousand square meters of floor space, the West Block is one of the largest projects of its kind in the world. As part of the rehabilitation, PWGSC is working with the Carleton Immersive Media Studio (CIMS) to develop a building information model (BIM) that can serve as a maintenance and life-cycle management tool once construction is completed. The scale and complexity of the model have presented many challenges. One of these challenges is determining appropriate levels of detail (LoD). While still a matter of debate in the development of international BIM standards, LoD is further complicated in the context of heritage buildings because we must reconcile the LoD of the BIM with that used in the documentation process (terrestrial laser scan and photogrammetric survey data). In this paper, we will discuss our work to date on establishing appropriate LoD within the West Block BIM that will best serve the end use. To facilitate this, we have developed a single parametric model for Gothic pointed arches that can be used for over seventy-five unique window types present in the West Block. Using the AEC (CAN) BIM as a reference, we have developed a workflow to test each of these window types at three distinct levels of detail. We have found that the parametric Gothic arch significantly reduces the amount of time necessary to develop scenarios to test appropriate LoD.

  8. Fracture resistance, two point bending strength and morphological characteristics of pulpless teeth restored with fiber-reinforced composite posts

    Directory of Open Access Journals (Sweden)

    Alfredo Tibúrcio Nunes Pires

    2012-09-01

    Introduction: Fiber-reinforced composite (FRC) posts have been used for tooth reinforcement after endodontic treatment. The mechanical characteristics of FRC posts can influence the clinical prognosis. Objective: The aim of this study was to evaluate the flexural strength and fracture resistance of commercially available FRC posts. Material and methods: Fourteen human single-rooted premolars with completely formed apices were selected and received endodontic treatment. The specimens were divided into two groups according to the post system: (i) Group A, cylindrical-conical fiber-reinforced post (White Post DC, FGM); and (ii) Group B, conical fiber-reinforced post (EXACTO, Angelus). The fracture resistance was evaluated and two-point bending tests were carried out. The glass fiber characteristics and the tag penetration of the luting material into the radicular dentin structure were evaluated illustratively through scanning electron microscopy. One-way ANOVA and Tukey's HSD test (α = 0.05) were applied. Results: The values obtained for fracture resistance and the two-point bending test were, respectively, 399.29 N and 109.5 N for Group A, and 386.25 N and 119.5 N for Group B. No significant differences in strength values between the groups were found. Conclusion: There were no statistically significant differences between the two post groups with regard to fracture strength and two-point bending strength. It can be concluded that the posts selected for this study performed satisfactorily in terms of mechanical properties, so that they can be used for tooth reinforcement after endodontic treatment.

  9. Surgical treatment of zygomatic bone fracture using two points fixation versus three point fixation-a randomised prospective clinical trial

    Directory of Open Access Journals (Sweden)

    Rana Majeed

    2012-04-01

    Background: The zygoma plays an important role in the facial contour for both cosmetic and functional reasons; therefore zygomatic bone injuries should be properly diagnosed and adequately treated. Comparison of the various surgical approaches and their complications can only be done objectively using outcome measurements, which in turn require protocol management and long-term follow-up. The preference for open reduction and internal fixation (ORIF) of zygomatic fractures at three points has continued to grow in response to observations of inadequate results from two-point and one-point fixation techniques. The objective of this study was to compare the efficacy of treating the fractured zygomatic bone by ORIF using two-point fixation versus ORIF using three-point fixation, and to compare the outcomes of the two procedures. Methods: 100 patients were randomly divided equally into two groups. In group A, 50 patients were treated by ORIF using two-point fixation with miniplates, and in group B, 50 patients were treated by ORIF using three-point fixation with miniplates. They were evaluated for complications during and after surgery, together with the advantages and disadvantages of each technique, and the difference between the two groups was recorded. Results: A total of 100 fractures were sustained. We found that postoperative complications such as decreased malar height and vertical dystopia were more common in patients treated by two-point fixation than in those treated with three-point fixation. Conclusions: Based on this study, open reduction and internal fixation using three-point fixation with miniplates is the best available method for the treatment of zygomatic bone fractures.

  10. Explicit Proof of Equivalence of Two-Point Functions in the Two Formalisms of Thermal Field Theory

    Institute of Scientific and Technical Information of China (English)

    ZHOU Bang-Rong

    2002-01-01

    We give an explicit proof of the equivalence of the two-point function to one-loop order in the two formalisms of thermal λφ³ theory, based on the expressions in the real-time formalism. We indicate that the key point in completing the proof is to carefully separate the imaginary part of the zero-temperature loop integral from the relevant expressions; this fact will certainly be very useful for examining the equivalence of the two formalisms of thermal field theory in other theories, including that of the propagators for scalar bound states in an NJL model.

  11. Two-point estimate method for probabilistic optimal power flow computation including wind farms with correlated parameters

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xue; Cao, Jia; Du, Dajun [Shanghai Univ. (China). Key Lab. of Power Station Automation Technology

    2013-07-01

    This paper is concerned with probabilistic optimal power flow (POPF) calculation for systems including wind farms with correlated parameters, including correlated nodal injections. The two-point estimate method (2PEM) is employed to solve the POPF. Moreover, correlated samples of the nodal injections and line parameters are generated by the Cholesky factorization method. Simulation results show that the 2PEM is feasible and effective for solving the POPF including wind farms with correlated parameters, and that the 2PEM achieves higher computational precision and consumes less CPU time than Monte Carlo simulation.
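
The Cholesky step mentioned above can be sketched as follows: independent standard normal samples are transformed so that they carry a prescribed correlation, as one would do for correlated nodal injections. The 2×2 target matrix and the 0.8 correlation are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Target correlation matrix and its Cholesky factor (target = L @ L.T).
target = np.array([[1.0, 0.8],
                   [0.8, 1.0]])
L = np.linalg.cholesky(target)

# Independent standard normals, transformed into correlated samples.
z = rng.normal(size=(2, 200_000))
x = L @ z

sample_corr = float(np.corrcoef(x)[0, 1])
```

The same factorization drives correlated sampling whether the downstream solver is Monte Carlo or a point-estimate scheme like the 2PEM.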

  12. Doppler term in the galaxy two-point correlation function: wide-angle, velocity, Doppler lensing and cosmic acceleration effects

    OpenAIRE

    Raccanelli, Alvise; Bertacca, Daniele; Jeong, Donghui; Neyrinck, Mark C.; Szalay, Alexander S.

    2016-01-01

    We study the parity-odd part (which we shall call the Doppler term) of the linear galaxy two-point correlation function that arises from wide-angle, velocity, Doppler lensing and cosmic acceleration effects. Although it is important at low redshift and at large angular separations, the Doppler term is usually neglected in the current generation of galaxy surveys. For future wide-angle galaxy surveys such as Euclid, SPHEREx and the SKA, however, we show that the Doppler term must be included. The effect of t...

  13. On the solution of two-point linear differential eigenvalue problems. [numerical technique with application to Orr-Sommerfeld equation

    Science.gov (United States)

    Antar, B. N.

    1976-01-01

    A numerical technique is presented for locating the eigenvalues of two-point linear differential eigenvalue problems. The technique is designed to search for complex eigenvalues belonging to complex operators. With this method, any domain of the complex eigenvalue plane can be scanned and the eigenvalues within it, if any, located. As an application of the method, the eigenvalues of the Orr-Sommerfeld equation for plane Poiseuille flow are determined within a specified portion of the c-plane. The eigenvalues for alpha = 1 and R = 10,000 are tabulated and compared for accuracy with existing solutions.
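
A much simpler two-point eigenvalue problem illustrates the discretization idea: u'' + λu = 0 with u(0) = u(1) = 0 has exact eigenvalues (kπ)². The Orr-Sommerfeld problem itself is fourth order with complex eigenvalues, so this self-adjoint model is only a sketch, with an illustrative grid size.

```python
import numpy as np

# Central-difference matrix for -u'' on (0, 1) with Dirichlet conditions;
# its eigenvalues approximate lambda_k = (k * pi)^2.
n = 400
h = 1.0 / n
A = (np.diag(np.full(n - 1, 2.0))
     - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h**2
lam = np.sort(np.linalg.eigvalsh(A))
```

Searching a region of the complex plane, as the paper does, generalizes this to non-self-adjoint operators where `eigvalsh` no longer applies.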

  14. ON THE EXISTENCE OF SOLUTION OF A NONLINEAR TWO-POINT BOUNDARY VALUE PROBLEM ARISING FROM A LIQUID METAL FLOW

    Institute of Scientific and Technical Information of China (English)

    Cheng Xiaoliang; Ying Weiting

    2005-01-01

    In this paper, we discuss the existence of a solution of a nonlinear two-point boundary value problem with a positive parameter Q arising in the study of surface-tension-induced flows of a liquid metal or semiconductor. By applying Schauder's fixed-point theorem, we prove that the problem admits a solution for 0 ≤ Q ≤ 14.306. This improves the results of 0 ≤ Q < 1 in [2] and 0 ≤ Q ≤ 13.213 in [3].

  15. Joint positioning sense, perceived force level and two-point discrimination tests of young and active elderly adults

    Directory of Open Access Journals (Sweden)

    Priscila G. Franco

    2015-08-01

    Background: Changes in the proprioceptive system are associated with aging. Proprioception is important for maintaining and/or recovering balance and for reducing the risk of falls. Objective: To compare the performance of young and active elderly adults in three proprioceptive tests. Method: Twenty-one active elderly participants (66.9±5.5 years) and 21 healthy young participants (24.6±3.9 years) were evaluated in the following tests: perception of position of the ankle and hip joints, perceived force level of the ankle joint, and two-point discrimination on the sole of the foot. Results: No differences (p > 0.05) were found between groups for the joint position and perceived force level tests. On the other hand, the elderly participants showed lower sensitivity in two-point discrimination (a higher threshold) when compared to the young participants (p < 0.01). Conclusion: Except for cutaneous plantar sensitivity, the active elderly participants had maintained proprioception. Their physical activity status may explain the similarities between the groups in the joint position sense and perceived force level tests; however, it may not be sufficient to prevent sensory degeneration with aging.

  16. Exact two-point resistance, and the simple random walk on the complete graph minus N edges

    Energy Technology Data Exchange (ETDEWEB)

    Chair, Noureddine, E-mail: n.chair@ju.edu.jo

    2012-12-15

    An analytical approach is developed to obtain the exact expressions for the two-point resistance and the total effective resistance of the complete graph minus N edges of the opposite vertices. These expressions are written in terms of certain numbers that we introduce, which we call the Bejaia and the Pisa numbers; these numbers are the natural generalizations of the bisected Fibonacci and Lucas numbers. The correspondence between random walks and resistor networks is then used to obtain the exact expressions for the first passage and mean first passage times on this graph. Highlights: We obtain exact formulas for the two-point resistance of the complete graph minus N edges. We also obtain the total effective resistance of this graph. We modified Schwatt's formula on trigonometrical power sums to suit our computations. We introduced the generalized bisected Fibonacci and Lucas numbers: the Bejaia and the Pisa numbers. The first passage and mean first passage times of the random walks have exact expressions.
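    The exact results above can be checked numerically: for any finite unit-resistor network, the two-point resistance follows from the Moore-Penrose pseudoinverse of the graph Laplacian as R_ij = L⁺_ii + L⁺_jj − 2 L⁺_ij. A minimal sketch, where the "complete graph minus N edges of opposite vertices" construction (pairing vertex i with i+m on 2m vertices) is an assumed reading of the abstract:

```python
import numpy as np

def effective_resistance(adj):
    """All pairwise two-point resistances of a unit-resistor network,
    computed from the pseudoinverse of the graph Laplacian."""
    L = np.diag(adj.sum(axis=1)) - adj
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2.0 * Lp

# Complete graph on 2m vertices, minus N edges (i, i+m) joining
# "opposite" vertices (assumed construction, unit resistors).
m, N = 4, 2
A = np.ones((2 * m, 2 * m)) - np.eye(2 * m)
for i in range(N):
    A[i, i + m] = A[i + m, i] = 0.0

R = effective_resistance(A)
```

    For the intact complete graph K_n every pair has resistance 2/n, which the function reproduces; by Rayleigh monotonicity, deleting an edge can only raise the resistance between its endpoints, so R[0, m] exceeds the K_8 value of 0.25.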

  17. The covariant and infrared-free graviton two-point function in de Sitter space-time

    CERN Document Server

    Pejhan, Hamed

    2015-01-01

    In this paper, the two-point function of linearized gravitons on de Sitter (dS) space is presented. Technically, respecting the dS ambient space notation, the field equation is given in terms of the coordinate-independent Casimir operators of the de Sitter group. Analogous to the quantization of the electromagnetic field in Minkowski space, the field equation admits gauge solutions. The notation allows one to exhibit the formalism of Gupta-Bleuler triplets for the present field in exactly the same manner as for the electromagnetic field. In this regard, centering on the traceless part, the field solution is written as a product of a generalized polarization tensor and a minimally coupled massless scalar field. Then, admitting a de Sitter-invariant vacuum through the so-called "Krein Space Quantization", the fully de Sitter-covariant two-point function is calculated. Interestingly, this function is free of pathological large-distance behavior (infrared divergence). Moreover, the pure-trace part (conformal sector) ...

  18. Optimal Constraints on Local Primordial Non-Gaussianity from the Two-Point Statistics of Large-Scale Structure

    CERN Document Server

    Hamaus, Nico; Desjacques, Vincent

    2011-01-01

    One of the main signatures of primordial non-Gaussianity of the local type is a scale-dependent correction to the bias of large-scale structure tracers such as galaxies or clusters, whose amplitude depends on the tracer bias itself. The dominant sources of noise in the power spectrum of the tracers are sampling variance on large scales (where the non-Gaussian signal is strongest) and shot noise arising from their discrete nature. Recent work has argued that one can avoid sampling variance by comparing multiple tracers of different bias, and suppress shot noise by optimally weighting halos of different mass. Here we combine these ideas and investigate how well the signatures of non-Gaussian fluctuations in the primordial potential can be extracted from the two-point correlations of halos and dark matter. On the basis of large $N$-body simulations with local non-Gaussian initial conditions and their halo catalogs we perform a Fisher matrix analysis of the two-point statistics. Compared to the st...

  19. Gauge-invariant two-point correlator of energy density in deconfining SU(2) Yang-Mills thermodynamics

    CERN Document Server

    Keller, Jochen

    2008-01-01

    This thesis considers aspects of SU(2) Yang-Mills thermodynamics in its deconfining high-temperature phase. We calculate the two-point correlation function of the energy density of the photon in a thermalized gas, first in the conventional U(1) gauge theory, followed by a calculation in which the photon is identified with the massless gauge mode of deconfining SU(2) Yang-Mills thermodynamics. Apart from the fact that this calculation is interesting from a technical point of view, we can consider several aspects of phenomenological relevance. Since we interpret the two-point correlator of the energy density as a measure of the energy transfer, and thus of the electromagnetic interaction of microscopic objects such as atoms immersed in a photon gas, we are able to give an explanation for the unexpected stability of cold, innergalactic clouds consisting of atomic hydrogen. Subsequently, we evaluate the spatial string tension in deconfining SU(2) Yang-Mills thermodynamics, which can be regarded as a measure ...

  20. Intrinsic alignments of galaxies in the MassiveBlack-II simulation: analysis of two-point statistics

    CERN Document Server

    Tenneti, Ananth; Mandelbaum, Rachel; Di Matteo, Tiziana; Feng, Yu; Khandai, Nishikanta

    2014-01-01

    The intrinsic alignment of galaxies with the large-scale density field is an important astrophysical contaminant in upcoming weak lensing surveys whilst offering insights into galaxy formation and evolution. We present detailed measurements of galaxy intrinsic alignments and the associated ellipticity-direction (ED) and projected shape ($w_{g+}$) correlation functions for galaxies in the cosmological hydrodynamic MassiveBlack-II (MB-II) simulation. We carefully assess the effects on galaxy shapes, misalignments and two-point statistics of iterative weighted (by mass, luminosity, and color) definitions of the (reduced and unreduced) inertia tensor. We find that iterative procedures must be adopted for a reliable measurement of the reduced tensor, but that luminosity versus mass weighting has only negligible effects. Blue galaxies exhibit stronger misalignments and suppressed $w_{g+}$ amplitude. Both ED and $w_{g+}$ correlations increase in amplitude with subhalo mass (in the range of $10^{10} - 6.0\\times 10^{14}h^{...

  1. New Approach for Solving a Class of Doubly Singular Two-Point Boundary Value Problems Using Adomian Decomposition Method

    Directory of Open Access Journals (Sweden)

    Randhir Singh

    2012-01-01

    Full Text Available We propose two new modified recursive schemes for solving a class of doubly singular two-point boundary value problems. These schemes are based on the Adomian decomposition method (ADM) and newly proposed integral operators. We use all the boundary conditions to derive an integral equation before establishing the recursive schemes for the solution components. We thus develop recursive schemes without any undetermined coefficients while computing successive solution components, whereas several previous recursive schemes involve such coefficients. This modification also avoids solving a sequence of nonlinear algebraic or transcendental equations for the undetermined coefficients with multiple roots, which is required to complete the calculation of the solution by several earlier modified recursion schemes using the ADM. The approximate solution is computed in the form of a series with easily calculable components. The effectiveness of the proposed approach is tested on four examples, and the results are compared with previously known results.

  2. The influence of age on pressure perception of static and moving two-point discrimination in normal subjects.

    Science.gov (United States)

    Kaneko, Atsushi; Asai, Noriyoshi; Kanda, Tadashi

    2005-01-01

    The purpose of the present study was to determine the effect of age on digital pressure perception as measured by two-point discrimination (2PD) testing. The subjects were 177 normal volunteers ranging in age from 20 to 79 years. Perceptible pressure of static and moving 2PD was measured on the index finger and little finger using the Pressure-Specified Sensory Device. The threshold of pressure perception increased significantly with advancing age in both static and moving 2PD tests. There was a marked increase in subjects older than 60 years. Pressure perception was significantly higher for static 2PD than for moving 2PD in subjects 70-79 years of age. The threshold of pressure perception for static and moving 2PD gradually increased with advancing age, and was markedly elevated in subjects older than 60 years.

  3. Transforming activity of the c-Ha-ras oncogene having two point mutations in codons 12 and 61.

    Science.gov (United States)

    Sekiya, T; Prassolov, V S; Fushimi, M; Nishimura, S

    1985-09-01

    A recombinant plasmid carrying the human c-Ha-ras gene with two point mutations in codons 12 and 61 was constructed and its transforming activity on mouse NIH 3T3 cells was compared with those of genes with a single mutation in either codon 12 or 61. Quantitative analyses revealed that the gene with two mutations had essentially the same transforming activity as the genes with single mutations. These results indicate that a single mutation of the c-Ha-ras gene in either codon 12 or 61 is sufficient to activate the gene and that neither of the two mutation sites involved in activation of the gene needs to be intact for transforming activity.

  4. Assessment of styling performance in hair gels and hair sprays by means of a new two-point stiffness test.

    Science.gov (United States)

    Hoessel, Peter; Riemann, Solveig; Knebl, Robert; Schroeder, Jens; Schuh, Gerd; Castillo, Catalina

    2010-01-01

    A new two-point bending stiffness method on flat hair strands was developed and validated after application of hair styling gels and hair styling sprays. A special mold was used to align single hair fibers after applying the formulations to the hair. The styling gels used contain different commercially available thickeners and styling polymers, e.g., carbomer, acrylates/beheneth-25 methacrylate copolymer, Polyquaternium-86, PVP, VP/VA copolymers, and VP/methacrylamide/vinylimidazole copolymer. Evaluation of hair sprays was performed after spray application on flat hair strands. Commercially available hair styling resins were used, e.g. acrylates/t-butylacrylamide copolymer, octylacrylamide/acrylates/butylaminoethyl methacrylate copolymer, and VP/VA copolymer (30:70). The new stiffness test method provided the best correlation with practically relevant sensory assessments on hair strands and a panel test in which styling gels were evaluated. However, we did not observe a correlation between the new stiffness method on flat hair strands and practical assessments in hair spray application. We postulate that different polymer/hair composites are responsible for these discrepancies. Hairs on model heads for half-side testing are spot-welded after spray application, while hairs are seam-welded in the stiffness test after alignment of single hair fibers. This alignment is necessary to achieve reproducible results.

  5. Existence of solutions of nonlinear two-point boundary value problems for 4nth-order nonlinear differential equation

    Institute of Scientific and Technical Information of China (English)

    高永馨

    2002-01-01

    Studies the existence of solutions of nonlinear two-point boundary value problems for the nonlinear 4n-th-order differential equation y^(4n) = f(t, y, y', y'', ..., y^(4n-1)) (a) with the boundary conditions g_2i(y^(2i)(a), y^(2i+1)(a)) = 0, h_2i(y^(2i)(c), y^(2i+1)(c)) = 0 (i = 0, 1, ..., 2n-1) (b), where the functions f, g_i and h_i are continuous with certain monotone properties. Many results have already been given for boundary value problems of the nonlinear n-th-order differential equation y^(n) = f(t, y, y', y'', ..., y^(n-1)), but the existence of solutions of the boundary value problem (a), (b) studied in this paper is not covered by those researches. Moreover, the corollary of the main theorem in this paper, i.e. the existence of solutions of the boundary value problem y^(4n) = f(t, y, y', y'', ..., y^(4n-1)), a_2i y^(2i)(a) + a_2i+1 y^(2i+1)(a) = b_2i, c_2i y^(2i)(c) + c_2i+1 y^(2i+1)(c) = d_2i (i = 0, 1, ..., 2n-1), has not been dealt with in previous works.
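    For the simplest member of this family, a second-order equation with values prescribed at both endpoints, a two-point boundary value problem can be solved numerically by shooting: guess the unknown initial slope, integrate the resulting initial value problem, and adjust the slope until the far boundary condition is met. A sketch (not the paper's method) on the model problem y'' = −y, y(0) = 0, y(π/2) = 1, whose exact solution is sin t:

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step for a first-order system y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def shoot(slope, steps=200):
    """Integrate y'' = -y from t = 0 with y(0) = 0, y'(0) = slope;
    return the value y(pi/2) reached by this initial slope."""
    f = lambda t, y: [y[1], -y[0]]
    t, y, h = 0.0, [0.0, slope], (math.pi / 2) / steps
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y[0]

def solve_bvp(target=1.0):
    """Secant iteration on the unknown initial slope."""
    s0, s1 = 0.0, 2.0
    f0, f1 = shoot(s0) - target, shoot(s1) - target
    for _ in range(20):
        if abs(f1 - f0) < 1e-14:      # converged; avoid division by zero
            break
        s0, s1, f0 = s1, s1 - f1 * (s1 - s0) / (f1 - f0), f1
        f1 = shoot(s1) - target
    return s1
```

    Because this model problem is linear, the secant step lands on the correct slope y'(0) = 1 essentially in one iteration; nonlinear problems simply take a few more.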

  6. Differences in two-point discrimination and sensory threshold in the blind between braille and text reading: a pilot study.

    Science.gov (United States)

    Noh, Ji-Woong; Park, Byoung-Sun; Kim, Mee-Young; Lee, Lim-Kyu; Yang, Seung-Min; Lee, Won-Deok; Shin, Yong-Sub; Kang, Ji-Hye; Kim, Ju-Hyun; Lee, Jeong-Uk; Kwak, Taek-Yong; Lee, Tae-Hyun; Kim, Ju-Young; Kim, Junghwan

    2015-06-01

    [Purpose] This study investigated two-point discrimination (TPD) and the electrical sensory threshold of the blind to define the effect of using Braille on the tactile and electrical senses. [Subjects and Methods] Twenty-eight blind participants were divided equally into a text-reading and a Braille-reading group. We measured tactile sensory and electrical thresholds using the TPD method and a transcutaneous electrical nerve stimulator. [Results] The left palm TPD values were significantly different between the groups. The values of the electrical sensory threshold in the left hand, the electrical pain threshold in the left hand, and the electrical pain threshold in the right hand were significantly lower in the Braille group than in the text group. [Conclusion] These findings make it difficult to explain the difference in tactility between groups, excluding both palms. However, our data show that using Braille can enhance development of the sensory median nerve in the blind, particularly in terms of the electrical sensory and pain thresholds.

  7. The redshift-space two-point correlation functions of galaxies and groups in the Nearby Optical Galaxy sample

    CERN Document Server

    Giuricin, Giuliano; Samurovic, Srdjan; Girardi, Marisa; Mezzetti, Marino; Marinoni, Christian

    2001-01-01

    We use the two-point correlation function in redshift space, $\\xi(s)$, to study the clustering of the galaxies and groups of the Nearby Optical Galaxy (NOG) sample, which is a nearly all-sky, complete, magnitude-limited sample of $\\sim$7000 bright and nearby optical galaxies. The correlation function of galaxies is well described by a power law, $\\xi(s)=(s/s_0)^{-\\gamma}$, with slope $\\gamma\\sim1.5$ and $s_0\\sim6.4 h^{-1}$Mpc (on scales $2.7 - 12 h^{-1}$Mpc), in agreement with previous results from several redshift surveys of optical galaxies. We confirm the existence of morphological segregation between early- and late-type galaxies and, in particular, we find a gradual decrease in the strength of clustering from the S0 galaxies to the late-type spirals on intermediate scales. Furthermore, luminous galaxies turn out to be more clustered than dim galaxies. The luminosity segregation, which is significant for both early- and late-type objects, starts to become appreciable only for galaxies brighter than $M_B\\...
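    The quoted best fit is a one-line function of separation; a minimal sketch with the abstract's values s0 = 6.4 h⁻¹ Mpc and γ = 1.5 taken as defaults:

```python
import numpy as np

def xi(s, s0=6.4, gamma=1.5):
    """Power-law redshift-space correlation function xi(s) = (s/s0)^(-gamma),
    with the NOG best-fit slope and correlation length as defaults."""
    return (np.asarray(s, dtype=float) / s0) ** (-gamma)

# xi = 1 at the correlation length by definition; clustering is
# stronger on smaller scales within the fitted range 2.7-12 h^-1 Mpc
print(xi([2.7, 6.4, 12.0]))
```

    By construction ξ(s0) = 1, so s0 is exactly the scale at which pairs are twice as likely as in a random distribution.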

  8. Attitude reconstruction of ROSETTA's Lander PHILAE using two-point magnetic field observations by ROMAP and RPC-MAG

    Science.gov (United States)

    Heinisch, Philip; Auster, Hans-Ulrich; Richter, Ingo; Hercik, David; Jurado, Eric; Garmier, Romain; Güttler, Carsten; Glassmeier, Karl-Heinz

    2016-08-01

    As part of the European Space Agency's ROSETTA mission, the lander PHILAE touched down on comet 67P/Churyumov-Gerasimenko on November 12, 2014. The magnetic field was measured onboard both the orbiter and the lander. The orbiter's tri-axial fluxgate magnetometer RPC-MAG is one of five sensors of the ROSETTA Plasma Consortium. The lander is also equipped with a tri-axial fluxgate magnetometer as part of the ROSETTA Lander Magnetometer and Plasma-Monitor package (ROMAP). This unique setup makes possible a two-point measurement between the two spacecraft at a relatively small separation of less than 50 km. Both magnetometers were switched on during the entire descent, the initial touchdown, the bouncing between touchdowns and after the final touchdown. We describe a method for attitude determination by correlating magnetic low-frequency waves, which was tested under different conditions and finally used to reconstruct PHILAE's attitude during descent and after landing. In these cases the attitude could be determined with an accuracy of better than ± 5 °. These results were essential not only for PHILAE operations planning but also for the analysis of the obtained scientific data, because the nominal sources of this information, such as solar panel currents and camera pictures, could not provide sufficient information due to the unexpected landing position.

  9. The Solution of Two-Point Boundary Value Problem of a Class of Duffing-Type Systems with Non-C1 Perturbation Term

    Directory of Open Access Journals (Sweden)

    Jiang Zhengxian

    2009-01-01

    Full Text Available This paper deals with a two-point boundary value problem for a class of Duffing-type systems with a non-C1 perturbation term. Several existence and uniqueness theorems are presented.

  10. Optimized Design for Dynamic LOD Virtual Terrain Using Quad Tree

    Institute of Scientific and Technical Information of China (English)

    邹承明; 李引; 陆苑; 陈金锐

    2009-01-01

    In this paper, we present a novel approach to the optimized design of dynamic LOD virtual terrain based on a quad tree. In building the quad tree over a multi-resolution terrain model, we first optimize the mesh according to the three criteria used to evaluate quad-tree nodes: bounding-volume culling, back-face culling, and screen-space projection error. Then, to render the different degrees of detail required as the viewpoint moves while roaming the LOD terrain, we set up a suitable node evaluation function. Based on the rules of node splitting and rendering during LOD simplification, we propose a crack-elimination algorithm that removes unreasonable quad-tree partitions. After the nodes are partitioned optimally, the quad-tree mesh optimization is achieved.
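    The node-evaluation idea can be sketched as a recursion that keeps refining a tile while its distance-weighted geometric error exceeds a screen-error tolerance τ. The per-tile error bound, its halving at each level, and the tolerance below are illustrative assumptions, not the paper's exact metric:

```python
from dataclasses import dataclass

@dataclass
class Node:
    x: float       # tile center
    y: float
    size: float    # tile edge length
    error: float   # hypothetical geometric-error bound for this tile

def select_lod(node, viewpoint, tau, out):
    """Refine a quad-tree tile while its distance-weighted error
    exceeds the tolerance tau; otherwise emit it as a leaf."""
    dx, dy = node.x - viewpoint[0], node.y - viewpoint[1]
    dist = max((dx * dx + dy * dy) ** 0.5, 1e-6)
    if node.error / dist > tau and node.size > 1.0:
        half, q = node.size / 2.0, node.size / 4.0
        for ox, oy in ((-q, -q), (-q, q), (q, -q), (q, q)):
            # children inherit a halved error bound (assumption)
            select_lod(Node(node.x + ox, node.y + oy, half, node.error / 2.0),
                       viewpoint, tau, out)
    else:
        out.append(node)

leaves = []
select_lod(Node(0.0, 0.0, 64.0, 8.0), (-32.0, -32.0), 0.1, leaves)
```

    Placing the viewpoint near one corner yields small tiles nearby and large tiles far away, while the selected leaves still tile the root exactly; a real renderer would add the paper's crack-elimination step between neighboring levels.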

  11. Test Scoring [book review].

    Science.gov (United States)

    Meijer, Rob R.

    2003-01-01

    This book discusses how to obtain test scores and, in particular, how to obtain test scores from tests that consist of a combination of multiple choice and open-ended questions. The strength of the book is that scoring solutions are presented for a diversity of real world scoring problems. (SLD)

  12. A comparison between a refined two-point model for the limited tokamak SOL and self-consistent plasma turbulence simulations

    Science.gov (United States)

    Wersal, C.; Ricci, P.; Loizu, J.

    2017-04-01

    A refined two-point model is derived from the drift-reduced Braginskii equations for the limited tokamak scrape-off layer (SOL) by balancing the parallel and perpendicular transport of plasma and heat and taking into account the plasma–neutral interaction. The model estimates the electron temperature drop along a field line, from a region far from the limiter to the limiter plates. Self-consistent first-principles turbulence simulations of the SOL plasma including its interaction with neutral atoms are performed with the GBS code and compared to the refined two-point model. The refined two-point model is shown to be in very good agreement with the turbulence simulation results.

  13. On the problem of mass dependence of the two-point function of the real scalar free massive field on the light cone

    Energy Technology Data Exchange (ETDEWEB)

    Ullrich, Peter [Institut fuer Informatik, TU Muenchen, Boltzmannstrasse 3, D-85748 Garching (Germany); Werner, Ernst [Institut fuer Physik, Universitaet Regensburg, Universitaetsstrasse 31, D-93040 Regensburg (Germany)

    2006-05-19

    We investigate the generally assumed inconsistency in light-cone quantum field theory that the restriction of a massive, real scalar free field to the nullplane σ = {x^0 + x^3 = 0} is independent of mass (Leutwyler, Klauder and Streit 1970 Nuovo Cimento A 66 536), but the restriction of the two-point function is mass dependent (see, e.g., Nakanishi and Yamawaki 1977 Nucl. Phys. B 122 15; Yamawaki K 1997 Proc. Int. Workshop New Nonperturbative Methods and Quantization on the Light Cone (Les Houches, France) Preprint hep-th/9707141). We resolve this inconsistency by showing that the two-point function has no canonical restriction to σ in the sense of distribution theory. Only the so-called tame restriction of the two-point function, which we introduced in (Ullrich P 2004 Uniqueness in the characteristic Cauchy problem of the Klein-Gordon equation and tame restrictions of generalized functions Preprint math-ph/0408022 (submitted)), exists. Furthermore, we show that this tame restriction is indeed independent of the mass. Hence the inconsistency is induced by the erroneous assumption that the two-point function has a (canonical) restriction to σ.

  14. The Application of Two-Point Touch Cane Technique to Theories of Motor Control and Learning: Implications for Orientation and Mobility Training.

    Science.gov (United States)

    Croce, Ronald V.; Jacobson, William H.

    1986-01-01

    Basic behavioral processes involved in motor control based on theories of motor control and learning are outlined using the teaching of two-point touch cane technique as an application of the theories. The authors assert the importance of repetition, practice, and sufficient learning time. (Author/CL)

  15. Mode-sum construction of the covariant graviton two-point function in the Poincaré patch of de Sitter space

    Science.gov (United States)

    Fröb, Markus B.; Higuchi, Atsushi; Lima, William C. C.

    2016-06-01

    We construct the graviton two-point function for a two-parameter family of linear covariant gauges in n -dimensional de Sitter space. The construction is performed via the mode-sum method in the Bunch-Davies vacuum in the Poincaré patch, and a Fierz-Pauli mass term is introduced to regularize the infrared (IR) divergences. The resulting two-point function is de Sitter invariant and free of IR divergences in the massless limit (for a certain range of parameters), although analytic continuation with respect to the mass for the pure-gauge sector of the two-point function is necessary for this result. This general result agrees with the propagator obtained by analytic continuation from the sphere [Phys. Rev. D 34, 3670 (1986); Classical Quantum Gravity 18, 4317 (2001)]. However, if one starts with strictly zero mass theory, the IR divergences are absent only for a specific value of one of the two parameters, with the other parameter left generic. These findings agree with recent calculations in the Landau (exact) gauge [J. Math. Phys. 53, 122502 (2012)], where IR divergences do appear in the spin-two (tensor) part of the two-point function. However, we find the strength (including the sign) of the IR divergence to be different from the one found in this reference.

  16. Subgroup Balancing Propensity Score

    OpenAIRE

    DONG, JING; Zhang, Junni L; Li, Fan

    2017-01-01

    We investigate the estimation of subgroup treatment effects with observational data. Existing propensity score matching and weighting methods are mostly developed for estimating overall treatment effect. Although the true propensity score should balance covariates for the subgroup populations, the estimated propensity score may not balance covariates for the subgroup samples. We propose the subgroup balancing propensity score (SBPS) method, which selects, for each subgroup, to use either the ...

  17. Characterizing the relative role of low-frequency and turbulent processes in the nocturnal boundary layer through the analysis of two-point correlations of the wind components

    Science.gov (United States)

    Teichrieb, Claudio A.; Acevedo, Otávio C.; Degrazia, Gervásio A.; Moraes, Osvaldo L. L.; Roberti, Débora R.; Zimermann, Hans R.; Santos, Daniel M.; Alves, Rita C. M.

    2013-03-01

    The study presents an analysis of two-point correlations between time series of nocturnal atmospheric wind obtained from two micrometeorological towers, 45 m apart horizontally, each equipped with two sonic anemometers, 2.5 m apart vertically. It focuses on the scale dependence of the two-point correlations obtained from sensors separated vertically and horizontally. In particular, the role of low-frequency non-turbulent processes in the correlations is assessed and compared to that of the turbulent scales of motion. The vertical correlations of the streamwise and vertical wind components show little dependence on the turbulence intensity, but those of the spanwise component decrease appreciably as conditions become more turbulent. Multiresolution decomposition shows that the two-point correlations become increasingly dominated by low-frequency scales as conditions become less turbulent, and that such large-scale processes are largely reduced in fully turbulent conditions. It is also shown that the vertical correlations of the spanwise wind component are negative for very small time scales. Horizontal two-point correlations obtained at the 45 m separation distance between the towers are almost entirely dominated by low-frequency motions, regardless of the turbulence intensity, but the magnitude of such correlations decreases with increasing turbulence intensity for all wind components. A comparison between the horizontal two-point correlations and autocorrelations taken with a time lag given by the ratio of the horizontal separation to the mean wind component in the direction connecting the two towers leads to the conclusion that the statistical properties of the turbulence are often preserved over the horizontal distance, despite the lack of turbulence correlations at that separation.
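    At a single lag, the two-point statistics described here reduce to a normalized correlation coefficient between two wind-component series. A minimal sketch with synthetic signals standing in for the two towers (the 0.2 time shift mimicking advection between them is an illustrative assumption):

```python
import numpy as np

def two_point_corr(a, b, lag=0):
    """Normalized two-point correlation of two series; a positive lag
    shifts b forward, as in the lagged-autocorrelation comparison."""
    if lag > 0:
        a, b = a[:-lag], b[lag:]
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

t = np.linspace(0.0, 10.0, 500)
w1 = np.sin(t)            # wind component at the upstream tower
w2 = np.sin(t - 0.2)      # same signal arriving at the downstream tower
```

    Lagging the downstream series by the advection time (here about 10 samples) restores the correlation, mirroring the paper's comparison of horizontal two-point correlations with autocorrelations lagged by separation over mean wind.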

  18. Adaptive crack-free terrain rendering based on LOD

    Institute of Scientific and Technical Information of China (English)

    郭虎奇; 费向东; 刘小玲

    2013-01-01

    A new kind of triangle cluster, called the N-cluster, is proposed as the GPU rendering primitive; combined with LOD techniques, it realizes adaptive crack-free terrain rendering. The N-cluster has eight base types, and terrain mesh blocks of different sizes and locations can all be obtained from these base types by scaling and translation. A binary tree is used to organize the N-clusters: each node corresponds to one N-cluster and stores its type, scale, and translation. An octagonal error metric is used to construct the LOD of the scene, which avoids T-junctions between different LOD levels. Because the elevation (DEM) and texture data of large-scale terrain are too large to load into memory at once, a quad tree is used to organize them in blocks, and data blocks are loaded dynamically at run time. Experimental results show that the N-cluster improves the rendering efficiency of the terrain triangle mesh, and that the overall algorithm adaptively renders terrain without cracks while meeting the requirements of real-time rendering of large-scale terrain scenes.

  19. The Apgar Score.

    Science.gov (United States)

    2015-10-01

    The Apgar score provides an accepted and convenient method for reporting the status of the newborn infant immediately after birth and the response to resuscitation if needed. The Apgar score alone cannot be considered as evidence of, or a consequence of, asphyxia; does not predict individual neonatal mortality or neurologic outcome; and should not be used for that purpose. An Apgar score assigned during resuscitation is not equivalent to a score assigned to a spontaneously breathing infant. The American Academy of Pediatrics and the American College of Obstetricians and Gynecologists encourage use of an expanded Apgar score reporting form that accounts for concurrent resuscitative interventions.

  20. Quarter-sweep Gauss-Seidel method with quadratic spline scheme applied to fourth order two-point boundary value problems

    Science.gov (United States)

    Mohd Fauzi, Norizyan Izzati; Sulaiman, Jumat

    2013-04-01

    The aim of this paper is to describe the application of the Quarter-Sweep Gauss-Seidel (QSGS) iterative method with a quadratic spline scheme for solving fourth-order two-point linear boundary value problems. To derive the approximation equations, the fourth-order problem is first reduced to a system of second-order two-point boundary value problems. Two linear systems are then constructed via a discretization process using the corresponding quarter-sweep quadratic spline approximation equations. The generated linear systems are solved using the proposed QSGS iterative method to demonstrate its superiority over the Full-Sweep Gauss-Seidel (FSGS) and Half-Sweep Gauss-Seidel (HSGS) methods. Computational results illustrate that the proposed QSGS method is superior in terms of computational time and number of iterations compared to the other tested methods.
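    The full-sweep baseline the paper compares against can be sketched on a model second-order problem: discretize −u'' = f with the standard three-point stencil and sweep the grid in place (the in-place updates are what make it Gauss-Seidel rather than Jacobi). The model problem and iteration count are illustrative; the quarter-sweep variant would update only every fourth point per sweep:

```python
import numpy as np

def gauss_seidel_bvp(f, n=17, sweeps=3000):
    """Full-sweep Gauss-Seidel for -u'' = f(x) on (0, 1) with
    u(0) = u(1) = 0, using the standard second-order stencil."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.zeros(n + 1)              # boundary entries stay zero
    rhs = h * h * f(x)
    for _ in range(sweeps):
        for i in range(1, n):        # in-place update = Gauss-Seidel
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + rhs[i])
    return x, u

# -u'' = pi^2 sin(pi x) has the exact solution u = sin(pi x)
x, u = gauss_seidel_bvp(lambda x: np.pi ** 2 * np.sin(np.pi * x))
```

    On this grid the iterate agrees with sin(πx) to within the O(h²) discretization error; the quarter- and half-sweep ideas reduce the per-sweep work by visiting only a subset of the points.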

  1. Stueckelberg massive electromagnetism in de Sitter and anti-de Sitter spacetimes: Two-point functions and renormalized stress-energy tensors

    CERN Document Server

    Belokogne, Andrei; Queva, Julien

    2016-01-01

    By considering Hadamard vacuum states, we first construct the two-point functions associated with Stueckelberg massive electromagnetism in de Sitter and anti-de Sitter spacetimes. Then, from the general formalism developed in [A. Belokogne and A. Folacci, Phys. Rev. D 93, 044063 (2016)], we obtain an exact analytical expression for the vacuum expectation value of the renormalized stress-energy tensor of the massive vector field propagating in these maximally symmetric spacetimes.

  2. Identifying nonlinear wave interactions in plasmas using two-point measurements: a case study of Short Large Amplitude Magnetic Structures (SLAMS)

    CERN Document Server

    Dudok de Wit, T; Dunlop, M; Luehr, H

    1999-01-01

    A framework is described for estimating linear growth rates and spectral energy transfers in turbulent wave fields using two-point measurements. This approach, which is based on Volterra series, is applied to dual-satellite data gathered in the vicinity of the Earth's bow shock, where Short Large Amplitude Magnetic Structures (SLAMS) supposedly play a leading role. The analysis attests to the dynamic evolution of the SLAMS and reveals an energy cascade toward high-frequency waves.

  3. Existence of Positive Solutions for Two-Point Boundary Value Problems of Nonlinear Finite Discrete Fractional Differential Equations and Its Application

    OpenAIRE

    Caixia Guo; Jianmin Guo; Ying Gao; Shugui Kang

    2016-01-01

    This paper is concerned with two-point boundary value problems for nonlinear finite discrete fractional differential equations. On the one hand, we discuss some new properties of the Green function. On the other hand, by using the main properties of the Green function and the Krasnoselskii fixed-point theorem on cones, we establish some sufficient conditions for the existence of at least one or two positive solutions of the boundary value problem.

  4. Mode-sum construction of the covariant graviton two-point function in the Poincaré patch of de Sitter space

    CERN Document Server

    Fröb, Markus B; Lima, William C C

    2016-01-01

    We construct the graviton two-point function for a two-parameter family of linear covariant gauges in n-dimensional de Sitter space. The construction is performed via the mode-sum method in the Bunch-Davies vacuum in the Poincaré patch, and a Fierz-Pauli mass term is introduced to regularize the infrared (IR) divergences. The resulting two-point function is de Sitter-invariant and free of IR divergences in the massless limit (for a certain range of parameters), though analytic continuation with respect to the mass is necessary for the pure-gauge sector of the two-point function. This general result agrees with the propagator obtained by analytic continuation from the sphere [Phys. Rev. D 34, 3670 (1986); Class. Quant. Grav. 18, 4317 (2001)]. However, if one starts with a strictly massless theory, the IR divergences are absent only for a specific value of one of the two parameters, with the other parameter left generic. These findings agree with recent calculations in the Landau (exact) gauge ...

  5. SCORE - A DESCRIPTION.

    Science.gov (United States)

    SLACK, CHARLES W.

    Reinforcement and role-reversal techniques are used in the SCORE project, a low-cost program of delinquency prevention for hard-core teenage street-corner boys. Committed to the belief that the boys have the potential for ethical behavior, the SCORE worker follows B.F. Skinner's theory of operant conditioning and reinforces the delinquent's good…

  6. The planar two point algorithm

    NARCIS (Netherlands)

    O. Booij; Z. Zivkovic

    2009-01-01

    Vision-based localization, mapping and navigation are often performed by searching for corresponding image points and estimating the epipolar geometry. It is known that the possible relative poses of a camera mounted on a mobile robot that moves over a planar ground floor have two degrees of freedom.

  7. Application of LOD Technology in Groundwater Finite Element Post-processing

    Institute of Scientific and Technical Information of China (English)

    毕振波; 郑爱勤; 崔振东

    2011-01-01

    The large volume of data in the post-processing stage of groundwater finite element analysis makes model reproduction, rapid network transmission and real-time visualization of calculated results difficult. This paper therefore analyzes the main problems in groundwater finite element post-processing together with Level of Detail (LOD) technology, and argues that the vertex element deletion method is an effective data-model simplification technique for this setting. The main data structures for vertex deletion and restoration are designed, and the DT method is used for local triangulation of the "hole" area. A real-world example demonstrates the effectiveness of the vertex element deletion algorithm applied to finite element post-processing in groundwater.

  8. The Bandim tuberculosis score

    DEFF Research Database (Denmark)

    Rudolf, Frauke; Joaquim, Luis Carlos; Vieira, Cesaltina

    2013-01-01

    Background: This study was carried out in Guinea-Bissau's capital Bissau among inpatients and outpatients attending for tuberculosis (TB) treatment within the study area of the Bandim Health Project, a Health and Demographic Surveillance Site. Our aim was to assess the variability between 2 physicians in performing the Bandim tuberculosis score (TBscore), a clinical severity score for pulmonary TB (PTB), and to compare it to the Karnofsky performance score (KPS). Method: From December 2008 to July 2009 we assessed the TBscore and the KPS of 100 PTB patients at inclusion in the TB cohort and…

  9. Two-point spin-1/2-spin-1/2 sl(2,bfC) conformal Kac-Moody blocks on the torus and their monodromies

    Energy Technology Data Exchange (ETDEWEB)

    Smyrnakis, J.M. [Columbia Univ., New York, NY (United States). Dept. of Mathematics

    1995-10-02

    Two issues of the SU(2) Wess-Zumino-Witten model are examined here, namely the computation of the untwisted conformal Kac-Moody blocks on the torus and their monodromy representations. Using the free field representation developed by Bernard and Felder, an integral representation of the twisted two point spin-1/2-spin-1/2 conformal Kac-Moody blocks on the torus is computed. From this, an integral representation of the untwisted blocks is computed after careful removal of infinities. Finally, the untwisted blocks are used to get a representation of the Braid Group on the torus on two strings, in terms of quantum group q-numbers. (orig.).

  10. Reporting Valid and Reliable Overall Scores and Domain Scores

    Science.gov (United States)

    Yao, Lihua

    2010-01-01

    In educational assessment, overall scores obtained by simply averaging a number of domain scores are sometimes reported. However, simply averaging the domain scores ignores the fact that different domains have different score points, that scores from those domains are related, and that at different score points the relationship between overall…

  11. Volleyball Scoring Systems.

    Science.gov (United States)

    Calhoun, William; Dargahi-Noubary, G. R.; Shi, Yixun

    2002-01-01

    The widespread interest in sports in our culture provides an excellent opportunity to catch students' attention in mathematics and statistics classes. One mathematically interesting aspect of volleyball, which can be used to motivate students, is the scoring system. (MM)
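
    The mathematically interesting calculation the abstract alludes to can be made concrete: the probability of winning a rally-scoring set. Below is a short dynamic program, assuming i.i.d. rallies won with probability p and standard race-to-25, win-by-two volleyball rules; the assumption of independent, identically distributed rallies is a modeling simplification.

```python
from functools import lru_cache

def set_win_prob(p, target=25):
    """Probability that a team winning each rally independently with
    probability p wins a race-to-`target`, win-by-two set."""
    q = 1.0 - p
    # from 24-24 (deuce), winning requires taking two rallies in a row
    # before the opponent does: geometric argument gives p^2 / (p^2 + q^2)
    deuce = p * p / (p * p + q * q)

    @lru_cache(maxsize=None)
    def f(a, b):
        if a == target - 1 and b == target - 1:
            return deuce
        if a == target:
            return 1.0
        if b == target:
            return 0.0
        # either win the next rally (score becomes a+1, b) or lose it
        return p * f(a + 1, b) + q * f(a, b + 1)

    return f(0, 0)

# a 55% rally winner takes the set noticeably more often than 55% of the time
print(set_win_prob(0.55))
```

    The amplification of a small per-rally edge into a much larger per-set edge is exactly the kind of result that makes the scoring system a good classroom example.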

  13. Automated two-point dixon screening for the evaluation of hepatic steatosis and siderosis: comparison with R2*-relaxometry and chemical shift-based sequences

    Energy Technology Data Exchange (ETDEWEB)

    Henninger, B.; Rauch, S.; Schocke, M.; Jaschke, W.; Kremser, C. [Medical University of Innsbruck, Department of Radiology, Innsbruck (Austria); Zoller, H. [Medical University of Innsbruck, Department of Internal Medicine, Innsbruck (Austria); Kannengiesser, S. [Siemens AG, Healthcare Sector, MR Applications Development, Erlangen (Germany); Zhong, X. [Siemens Healthcare, MR R and D Collaborations, Atlanta, GA (United States); Reiter, G. [Siemens AG, Healthcare Sector, MR R and D Collaborations, Graz (Austria)

    2015-05-01

    To evaluate the automated two-point Dixon screening sequence for the detection and estimated quantification of hepatic iron and fat, compared with standard sequences as a reference. One hundred and two patients with suspected diffuse liver disease were included in this prospective study. The following MRI protocol was used: 3D T1-weighted opposed- and in-phase gradient echo with two-point Dixon reconstruction and dual-ratio signal discrimination algorithm ("screening" sequence); fat-saturated multi-gradient-echo sequence with 12 echoes; gradient-echo T1 FLASH opposed- and in-phase. Bland-Altman plots were generated and correlation coefficients were calculated to compare the sequences. The screening sequence diagnosed fat in 33, iron in 35 and a combination of both in 4 patients. Correlation between R2* values of the screening sequence and the standard relaxometry was excellent (r = 0.988). A slightly lower correlation (r = 0.978) was found between the fat fraction of the screening sequence and that of the standard sequence. Bland-Altman analysis revealed systematically lower R2* values obtained from the screening sequence and higher fat fraction values obtained with the standard sequence, with rather high variability in agreement. The screening sequence is a promising method for fast diagnosis of the predominant liver disease. It is capable of estimating the amount of hepatic fat and iron comparably to standard methods. (orig.)
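
    The arithmetic underlying two-point Dixon water-fat separation (the textbook relation, not the vendor's full reconstruction) combines in-phase (W + F) and opposed-phase (W - F) signals; the voxel values below are illustrative, not patient data.

```python
import numpy as np

# in-phase and opposed-phase magnitudes for a few example voxels
ip = np.array([100.0, 80.0, 120.0])  # W + F
op = np.array([60.0, 80.0, 0.0])     # W - F

water = (ip + op) / 2.0
fat = (ip - op) / 2.0
fat_fraction = fat / (water + fat)   # equals fat / ip here

print(fat_fraction)  # [0.2 0.  0.5]
```

    Magnitude-only two-point Dixon cannot by itself distinguish fat fractions above 50% from their mirror below 50%, which is one motivation for the dual-ratio signal discrimination used in the screening sequence.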

  14. Instant MuseScore

    CERN Document Server

    Shinn, Maxwell

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. Instant MuseScore is written in an easy-to-follow format, packed with illustrations that will help you get started with this music composition software. This book is for musicians who would like to learn how to notate music digitally with MuseScore. Readers should already have some knowledge about musical terminology; however, no prior experience with music notation software is necessary.

  15. Dynamics of single photon transport in a one-dimensional waveguide two-point coupled with a Jaynes-Cummings system

    Science.gov (United States)

    Wang, Yuwen; Zhang, Yongyou; Zhang, Qingyun; Zou, Bingsuo; Schwingenschlogl, Udo

    2016-01-01

    We study the dynamics of an ultrafast single photon pulse in a one-dimensional waveguide two-point coupled with a Jaynes-Cummings system. We find that for any single photon input the transmissivity depends periodically on the separation between the two coupling points. For a pulse containing many plane wave components it is almost impossible to suppress transmission, especially when the width of the pulse is less than 20 times the period. In contrast to plane wave input, the waveform of the pulse can be modified by controlling the coupling between the waveguide and Jaynes-Cummings system. Tailoring of the waveform is important for single photon manipulation in quantum informatics. PMID:27653770

  16. A procedure for tuning automatic controllers with determining a second-order plant model with time delay from two points of a complex frequency response

    Science.gov (United States)

    Kuzishchin, V. F.; Petrov, S. V.

    2012-10-01

    The problem of obtaining a mathematical model of a plant in the course of adaptively tuning operating automatic closed-loop control systems is considered. A new method is proposed for calculating the parameters of a model with four free coefficients, represented by two inertial sections with a time delay. The model parameters are calculated from the data of experiments that determine two points of the plant's complex frequency response. Results are presented from checking the performance of the method in combination with obtaining information on the plant dynamics by applying the Fourier transform to the impulse transient response of the system. The PID controller is tuned using a parameter-scanning algorithm that directly checks the amplitude-frequency response of the closed-loop system, from which the stability margin can be calculated and different quality criteria can be applied.

  17. Reliability of the two-point measurement of the spatial correlation length from Gaussian-shaped fluctuating signals in fusion-grade plasmas

    CERN Document Server

    Kim, Jaewook; Lampert, M; Ghim, Y -c

    2016-01-01

    A statistical method for the estimation of spatial correlation lengths of Gaussian-shaped fluctuating signals with two measurement points is examined to quantitatively evaluate its reliability (variance) and accuracy (bias error). The standard deviation of the correlation value is analytically derived for randomly distributed Gaussian shaped fluctuations satisfying stationarity and homogeneity, allowing us to evaluate, as a function of fluctuation-to-noise ratios, sizes of averaging time windows and ratios of the distance between the two measurement points to the true correlation length, the goodness of the two-point measurement for estimating the spatial correlation length. Analytic results are confirmed with numerically generated synthetic data and real experimental data obtained with the KSTAR beam emission spectroscopy diagnostic. Our results can be applied to Gaussian-shaped fluctuating signals where a correlation length must be measured with only two measurement points.
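
    The core inversion can be sketched with synthetic data. Assuming a Gaussian correlation model C(d) = exp(-(d/λ)²) (one common convention; the paper's exact definition may differ), a sample correlation measured at a single separation d yields a length estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true, d, n = 2.0, 1.5, 100_000

# two channels whose correlation follows the assumed Gaussian model
rho = np.exp(-(d / lam_true) ** 2)
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

# invert C(d) = exp(-(d/lambda)^2) using the sample correlation value
rho_hat = np.corrcoef(z1, z2)[0, 1]
lam_hat = d / np.sqrt(-np.log(rho_hat))
print(lam_hat)  # close to the true value 2.0
```

    The paper's contribution is quantifying how the variance and bias of such an estimate grow with the noise level, the averaging window and the ratio d/λ; the deterministic inversion above is only the final step of that procedure.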

  18. New algorithms for solving third- and fifth-order two point boundary value problems based on nonsymmetric generalized Jacobi Petrov-Galerkin method.

    Science.gov (United States)

    Doha, E H; Abd-Elhameed, W M; Youssri, Y H

    2015-09-01

    Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indexes are employed for solving third- and fifth-order two point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov-Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and the test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and they can be efficiently inverted. The use of generalized Jacobi polynomials simplify the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient.

  19. Dynamics of single photon transport in a one-dimensional waveguide two-point coupled with a Jaynes-Cummings system

    KAUST Repository

    Wang, Yuwen

    2016-09-22

    We study the dynamics of an ultrafast single photon pulse in a one-dimensional waveguide two-point coupled with a Jaynes-Cummings system. We find that for any single photon input the transmissivity depends periodically on the separation between the two coupling points. For a pulse containing many plane wave components it is almost impossible to suppress transmission, especially when the width of the pulse is less than 20 times the period. In contrast to plane wave input, the waveform of the pulse can be modified by controlling the coupling between the waveguide and Jaynes-Cummings system. Tailoring of the waveform is important for single photon manipulation in quantum informatics. © The Author(s) 2016.

  20. Reliability of the two-point measurement of the spatial correlation length from Gaussian-shaped fluctuating signals in fusion-grade plasmas

    Science.gov (United States)

    Kim, Jaewook; Nam, Y. U.; Lampert, M.; Ghim, Y.-C.

    2016-10-01

    A statistical method for the estimation of the spatial correlation lengths of Gaussian-shaped fluctuating signals with two measurement points is examined to quantitatively evaluate its reliability (variance) and accuracy (bias error). The standard deviation of the correlation value is analytically derived for randomly distributed Gaussian-shaped fluctuations satisfying stationarity and homogeneity, allowing us to evaluate, as a function of the fluctuation-to-noise ratio, the size of the averaging time window and the ratio of the distance between the two measurement points to the true correlation length, the goodness of the two-point measurement for estimating the spatial correlation length. Analytic results are confirmed with numerically generated synthetic data and real experimental data obtained with the KSTAR beam emission spectroscopy diagnostic. Our results can be applied to Gaussian-shaped fluctuating signals where a correlation length must be measured with only two measurement points.

  1. Tensorial Orientation Scores

    NARCIS (Netherlands)

    van de Gronde, Jasper J.; Azzopardi, George; Petkov, Nicolai

    2015-01-01

    Orientation scores are representations of images built using filters that only select on orientation (and not on the magnitude of the frequency). Importantly, they allow (easy) reconstruction, making them ideal for use in a filtering pipeline. Traditionally a specific set of orientations has to be c

  2. Developing Scoring Algorithms

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  3. Nursing activities score

    NARCIS (Netherlands)

    Miranda, DR; Nap, R; de Rijk, A; Schaufeli, W; Lapichino, G

    Objectives. The instruments used for measuring nursing workload in the intensive care unit (e.g., Therapeutic Intervention Scoring System-28) are based on therapeutic interventions related to severity of illness. Many nursing activities are not necessarily related to severity of illness, and

  4. Automated Essay Scoring

    Directory of Open Access Journals (Sweden)

    Semire DIKLI

    2006-01-01

    Full Text Available The impacts of computers on writing have been widely studied for three decades. Even basic computer functions, i.e. word processing, have been of great assistance to writers in modifying their essays. The research on Automated Essay Scoring (AES) has revealed that computers have the capacity to function as a more effective cognitive tool (Attali, 2004). AES is defined as the computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). Revision and feedback are essential aspects of the writing process. Students need to receive feedback in order to increase their writing quality. However, responding to student papers can be a burden for teachers. Particularly if they have a large number of students and assign frequent writing assignments, providing individual feedback to student essays might be quite time consuming. AES systems can be very useful because they can provide the student with a score as well as feedback within seconds (Page, 2003). Four types of AES systems are widely used by testing companies, universities, and public schools: Project Essay Grader (PEG), Intelligent Essay Assessor (IEA), E-rater, and IntelliMetric. AES is a developing technology. Many AES systems are used to overcome time, cost, and generalizability issues in writing assessment. The accuracy and reliability of these systems have been proven to be high. The search for excellence in machine scoring of essays is continuing and numerous studies are being conducted to improve the effectiveness of the AES systems.

  5. Fetal Biophysical Profile Scoring

    Directory of Open Access Journals (Sweden)

    H.R. HaghighatKhah

    2009-01-01

    Full Text Available Fetal biophysical profile scoring is a sonographic-based method of fetal assessment first described by Manning and Platt in 1980. The biophysical profile score was developed as a method to integrate real-time observations of the fetus and his/her intrauterine environment in order to more comprehensively assess the fetal condition. These findings must be evaluated in the context of maternal/fetal history (i.e., chronic hypertension, post-dates, intrauterine growth restriction, etc.), fetal structural integrity (presence or absence of congenital anomalies), and the functionality of fetal support structures (placenta and umbilical cord). For example, acute asphyxia due to placental abruption may result in an absence of the acute variables of the biophysical profile score (fetal breathing movements, fetal movement, fetal tone, and fetal heart rate reactivity) with a normal amniotic fluid volume. With post-maturity the asphyxial event may be intermittent and chronic, resulting in a decrease in amniotic fluid volume, but with the acute variables remaining normal. While the 5 components of the biophysical profile score have remained unchanged since 1980 (Manning, 1980), the definitions of a normal and abnormal parameter have evolved with increasing experience. In 1984 the definition of oligohydramnios was increased from a < 1 cm pocket of fluid to a < 2.0 x 1.0 cm pocket. Oligohydramnios is now defined as a pocket of amniotic fluid < 2.0 x 2.0 cm (Manning, 1995a). If the four ultrasound variables are normal, the accuracy of the biophysical profile score was not found to be significantly improved by adding the non-stress test. As a result, in 1987 the profile score was modified to incorporate the non-stress test only when one of the ultrasound variables was abnormal (Manning, 1987). Table 1 outlines the current definitions for quantifying a variable as present or absent. Each of the 5 components of the biophysical profile score does not have equal…

  6. Prediction of LOD Change Based on the LS and AR Model with Edge Effect Corrected

    Institute of Scientific and Technical Information of China (English)

    刘建; 王琪洁; 张昊

    2013-01-01

    Aiming to resolve the edge effect that arises when predicting length-of-day (LOD) change with the least squares plus autoregressive (LS+AR) model, we employed a time series analysis model to extrapolate the LOD series, forming a new series. This new series was used to solve for the coefficients of the LS model, and the LS+AR model was then used to predict the original LOD series. Experimental results show that, compared with the plain LS+AR model, the edge-effect-corrected LS+AR model improves the prediction accuracy of LOD change, especially for medium-term and long-term predictions.
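
    The plain LS+AR pipeline can be sketched in a few lines: a least-squares fit of bias, trend and an annual harmonic, followed by an AR model on the residuals (here AR(1) for brevity; the edge-effect correction of the paper would additionally extend the series before the LS fit). The series below is synthetic and all its parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(1000.0)  # days

# synthetic LOD-like series: trend + annual term + AR(1) noise
lod = 1.0 + 1e-4 * t + 0.3 * np.sin(2 * np.pi * t / 365.25)
eps = np.zeros_like(t)
for i in range(1, t.size):
    eps[i] = 0.8 * eps[i - 1] + 0.05 * rng.standard_normal()
lod = lod + eps

def design(tt):
    """LS design matrix: bias, trend and one annual harmonic."""
    w = 2 * np.pi * tt / 365.25
    return np.column_stack([np.ones_like(tt), tt, np.sin(w), np.cos(w)])

coef, *_ = np.linalg.lstsq(design(t), lod, rcond=None)
resid = lod - design(t) @ coef

# AR(1) coefficient from the lag-1 Yule-Walker estimate on the residuals
phi = resid[1:] @ resid[:-1] / (resid[:-1] @ resid[:-1])

# forecast h days ahead: LS extrapolation plus the AR-damped last residual
h = np.arange(1, 31)
pred = design(t[-1] + h) @ coef + resid[-1] * phi**h
print(phi)  # near the true 0.8
```

    The edge effect the paper targets shows up in `coef`: the LS fit is least constrained at the end of the series, exactly where the forecast is anchored, so extending the series first reduces that bias.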

  7. A genome-wide linkage study of individuals with high scores on NEO personality traits.

    Science.gov (United States)

    Amin, N; Schuur, M; Gusareva, E S; Isaacs, A; Aulchenko, Y S; Kirichenko, A V; Zorkoltseva, I V; Axenovich, T I; Oostra, B A; Janssens, A C J W; van Duijn, C M

    2012-10-01

    The NEO-Five-Factor Inventory divides human personality traits into five dimensions: neuroticism, extraversion, openness, conscientiousness and agreeableness. In this study, we sought to identify regions harboring genes with large effects on the five NEO personality traits by performing genome-wide linkage analysis of individuals scoring in the extremes of these traits (>90th percentile). Affected-only linkage analysis was performed using an Illumina 6K linkage array in a family-based study, the Erasmus Rucphen Family study. We subsequently determined whether distinct, segregating haplotypes found with linkage analysis were associated with the trait of interest in the population. Finally, a dense single-nucleotide polymorphism genotyping array (Illumina 318K) was used to search for copy number variations (CNVs) in the associated regions. In the families with extreme phenotype scores, we found significant evidence of linkage for conscientiousness to 20p13 (rs1434789, log of odds (LOD)=5.86) and suggestive evidence of linkage (LOD >2.8) for neuroticism to 19q, 21q and 22q, extraversion to 1p, 1q, 9p and12q, openness to 12q and 19q, and agreeableness to 2p, 6q, 17q and 21q. Further analysis determined haplotypes in 21q22 for neuroticism (P-values = 0.009, 0.007), in 17q24 for agreeableness (marginal P-value = 0.018) and in 20p13 for conscientiousness (marginal P-values = 0.058, 0.038) segregating in families with large contributions to the LOD scores. No evidence for CNVs in any of the associated regions was found. Our findings imply that there may be genes with relatively large effects involved in personality traits, which may be identified with next-generation sequencing techniques.

  8. Credit scoring for individuals

    Directory of Open Access Journals (Sweden)

    Maria DIMITRIU

    2010-12-01

    Full Text Available Lending money to different borrowers is profitable, but risky. The profits come from the interest rate and the fees earned on the loans. Banks do not want to make loans to borrowers who cannot repay them. Even if a bank does not intend to make bad loans, over time some of them go bad. For instance, as a result of the recent financial crisis, the capability of many borrowers to repay their loans was affected, and many of them defaulted. That is why it is important for the bank to monitor its loans. The purpose of this paper is to focus on the main issues of credit scoring. We present the scoring model of an important Romanian bank and, based on this credit scoring model and taking into account the latest lending requirements of the National Bank of Romania, we developed an assessment tool in Excel for retail loans, which is presented in the case study.

  9. Earthquake forecast enrichment scores

    Directory of Open Access Journals (Sweden)

    Christine Smyth

    2012-03-01

    Full Text Available The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project aimed at testing earthquake forecast models in a fair environment. Various metrics are currently used to evaluate the submitted forecasts. However, the CSEP still lacks easily understandable metrics with which to rank the universal performance of the forecast models. In this research, we modify a well-known and respected metric from another statistical field, bioinformatics, to make it suitable for evaluating earthquake forecasts, such as those submitted to the CSEP initiative. The metric, originally called a gene-set enrichment score, is based on a Kolmogorov-Smirnov statistic. Our modified metric assesses whether, over a certain time period, the forecast values at locations where earthquakes have occurred are significantly increased compared to the values for all locations where earthquakes did not occur. Permutation testing allows a significance value to be placed upon the score. Unlike the metrics currently employed by the CSEP, the score places no assumption on the distribution of earthquake occurrence, nor does it require an arbitrary reference forecast. In this research, we apply the modified metric to simulated data and real forecast data to show it is a powerful and robust technique, capable of ranking competing earthquake forecasts.
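
    An enrichment score of this kind can be sketched as a two-sample Kolmogorov-Smirnov statistic on forecast values, with significance from permutation testing; the synthetic forecast grid and event catalogue below are assumptions for illustration only.

```python
import numpy as np

def ks_stat(forecast, event):
    """KS distance between forecast values at event cells and at all other cells."""
    a = np.sort(forecast[event])
    b = np.sort(forecast[~event])
    grid = np.union1d(a, b)
    ca = np.searchsorted(a, grid, side="right") / a.size
    cb = np.searchsorted(b, grid, side="right") / b.size
    return np.max(np.abs(ca - cb))

rng = np.random.default_rng(3)
forecast = rng.random(1000)
# synthetic events that really are more likely where the forecast is high
event = rng.random(1000) < 0.1 * forecast

observed = ks_stat(forecast, event)
# permutation test: shuffle event locations to build the null distribution
null = np.array([ks_stat(forecast, rng.permutation(event)) for _ in range(200)])
p_value = (null >= observed).mean()
print(observed, p_value)
```

    Because the score compares empirical distributions of forecast values, it needs neither a Poisson assumption on event counts nor a reference forecast, mirroring the properties claimed in the abstract.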

  10. Non-perturbative aspects of Euclidean Yang-Mills theories in linear covariant gauges: Nielsen identities and a BRST invariant two-point correlation function

    CERN Document Server

    Capri, M A L; Pereira, A D; Fiorentini, D; Guimaraes, M S; Mintz, B W; Palhares, L F; Sorella, S P

    2016-01-01

    In order to construct a gauge invariant two-point function in a Yang-Mills theory, we propose the use of the all-order gauge invariant transverse configurations A^h. Such configurations can be obtained through the minimization of the functional A^2_{min} along the gauge orbit within the BRST invariant formulation of the Gribov-Zwanziger framework recently put forward in [1,2] for the class of the linear covariant gauges. This correlator turns out to provide a characterization of non-perturbative aspects of the theory in a BRST invariant and gauge parameter independent way. In particular, it turns out that the poles of this correlator are the same as those of the transverse part of the gluon propagator, which are also formally shown to be independent of the gauge parameter entering the gauge condition through the Nielsen identities. The latter follow from the new exact BRST invariant formulation introduced before. Moreover, the correlator enables us to attach a BRST invariant meaning to the possible positivity violation of ...

  11. A Parameter-Uniform Finite Difference Method for a Coupled System of Convection-Diffusion Two-Point Boundary Value Problems

    Institute of Scientific and Technical Information of China (English)

    Eugene O'Riordan; Jeanne Stynes; Martin Stynes

    2008-01-01

    A system of m (≥ 2) linear convection-diffusion two-point boundary value problems is examined, where the diffusion term in each equation is multiplied by a small parameter ε and the equations are coupled through their convective and reactive terms via matrices B and A respectively. This system is in general singularly perturbed. Unlike the case of a single equation, it does not satisfy a conventional maximum principle. Certain hypotheses are placed on the coupling matrices B and A that ensure existence and uniqueness of a solution to the system and also permit boundary layers in the components of this solution at only one endpoint of the domain; these hypotheses can be regarded as a strong form of diagonal dominance of B. This solution is decomposed into a sum of regular and layer components. Bounds are established on these components and their derivatives to show explicitly their dependence on the small parameter ε. Finally, numerical methods consisting of upwinding on piecewise-uniform Shishkin meshes are proved to yield numerical solutions that are essentially first-order convergent, uniformly in ε, to the true solution in the discrete maximum norm. Numerical results on Shishkin meshes are presented to support these theoretical bounds.
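
    For the scalar analogue (a single equation rather than the coupled system analyzed above), a Shishkin-mesh upwind scheme looks as follows. The test problem -εu'' + u' = 1 with u(0) = u(1) = 0, and all mesh parameters, are illustrative choices; this problem has a boundary layer at x = 1 and a known exact solution for checking the error.

```python
import numpy as np

eps, N = 1e-2, 256

# piecewise-uniform Shishkin mesh: half the points in a fine band of
# width sigma next to the outflow boundary layer at x = 1
sigma = min(0.5, 2.0 * eps * np.log(N))
x = np.concatenate([np.linspace(0.0, 1.0 - sigma, N // 2 + 1),
                    np.linspace(1.0 - sigma, 1.0, N // 2 + 1)[1:]])
h = np.diff(x)

# upwind finite differences for  -eps*u'' + u' = 1,  u(0) = u(1) = 0
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0] = A[N, N] = 1.0  # Dirichlet boundary rows
for i in range(1, N):
    hi, hip = h[i - 1], h[i]
    w = 2.0 / (hi + hip)
    A[i, i - 1] = -eps * w / hi - 1.0 / hi   # diffusion + backward (upwind) u'
    A[i, i] = eps * w * (1.0 / hi + 1.0 / hip) + 1.0 / hi
    A[i, i + 1] = -eps * w / hip
    b[i] = 1.0
u = np.linalg.solve(A, b)

exact = x - (np.exp((x - 1) / eps) - np.exp(-1 / eps)) / (1 - np.exp(-1 / eps))
err = np.max(np.abs(u - exact))
print(err)  # roughly first order in N, up to a ln(N) factor
```

    A uniform mesh with the same upwind scheme would need N comparable to 1/ε to resolve the layer; the Shishkin mesh achieves ε-uniform accuracy with a fixed N, which is the point of the theorem summarized in the abstract.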

  12. Investigation of a 2D two-point maximum entropy regularization method for signal-to-noise ratio enhancement: application to CT polymer gel dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Jirasek, A [Department of Physics and Astronomy, University of Victoria, Victoria BC V8W 3P6 (Canada); Matthews, Q [Department of Physics and Astronomy, University of Victoria, Victoria BC V8W 3P6 (Canada); Hilts, M [Medical Physics, BC Cancer Agency-Vancouver Island Centre, Victoria BC V8R 6V5 (Canada); Schulze, G [Michael Smith Laboratories, University of British Columbia, Vancouver BC V6T 1Z4 (Canada); Blades, M W [Department of Chemistry, University of British Columbia, Vancouver BC V6T 1Z1 (Canada); Turner, R F B [Michael Smith Laboratories, University of British Columbia, Vancouver BC V6T 1Z4 (Canada); Department of Chemistry, University of British Columbia, Vancouver BC V6T 1Z1 (Canada); Department of Electrical and Computer Engineering, University of British Columbia, Vancouver BC V6T 1Z4 (Canada)

    2006-05-21

    This study presents a new method of image signal-to-noise ratio (SNR) enhancement by utilizing a newly developed 2D two-point maximum entropy regularization method (TPMEM). When utilized as an image filter, it is shown that 2D TPMEM offers unsurpassed flexibility in its ability to balance the complementary requirements of image smoothness and fidelity. The technique is evaluated for use in the enhancement of x-ray computed tomography (CT) images of irradiated polymer gels used in radiation dosimetry. We utilize a range of statistical parameters (e.g. root-mean square error, correlation coefficient, error histograms, Fourier data) to characterize the performance of TPMEM applied to a series of synthetic images of varying initial SNR. These images are designed to mimic a range of dose intensity patterns that would occur in x-ray CT polymer gel radiation dosimetry. Analysis is extended to a CT image of a polymer gel dosimeter irradiated with a stereotactic radiation therapy dose distribution. Results indicate that TPMEM performs strikingly well on radiation dosimetry data, significantly enhancing the SNR of noise-corrupted images (SNR enhancement factors >15 are possible) while minimally distorting the original image detail (as shown by the error histograms and Fourier data). It is also noted that application of this new TPMEM filter is not restricted exclusively to x-ray CT polymer gel dosimetry image data but can in future be extended to a wide range of radiation dosimetry data.

  13. Investigation of a 2D two-point maximum entropy regularization method for signal-to-noise ratio enhancement: application to CT polymer gel dosimetry.

    Science.gov (United States)

    Jirasek, A; Matthews, Q; Hilts, M; Schulze, G; Blades, M W; Turner, R F B

    2006-05-21

    This study presents a new method of image signal-to-noise ratio (SNR) enhancement by utilizing a newly developed 2D two-point maximum entropy regularization method (TPMEM). When utilized as an image filter, it is shown that 2D TPMEM offers unsurpassed flexibility in its ability to balance the complementary requirements of image smoothness and fidelity. The technique is evaluated for use in the enhancement of x-ray computed tomography (CT) images of irradiated polymer gels used in radiation dosimetry. We utilize a range of statistical parameters (e.g. root-mean square error, correlation coefficient, error histograms, Fourier data) to characterize the performance of TPMEM applied to a series of synthetic images of varying initial SNR. These images are designed to mimic a range of dose intensity patterns that would occur in x-ray CT polymer gel radiation dosimetry. Analysis is extended to a CT image of a polymer gel dosimeter irradiated with a stereotactic radiation therapy dose distribution. Results indicate that TPMEM performs strikingly well on radiation dosimetry data, significantly enhancing the SNR of noise-corrupted images (SNR enhancement factors >15 are possible) while minimally distorting the original image detail (as shown by the error histograms and Fourier data). It is also noted that application of this new TPMEM filter is not restricted exclusively to x-ray CT polymer gel dosimetry image data but can in future be extended to a wide range of radiation dosimetry data.

  14. One- and two-point velocity distribution functions and velocity autocorrelation functions for various Reynolds numbers in decaying homogeneous isotropic turbulence

    Science.gov (United States)

    Hosokawa, Iwao

    2007-01-01

    A decaying homogeneous isotropic turbulence is treated on the combined bases of the Kolmogorov hypothesis and the cross-independence hypothesis (for a closure of the Monin-Lundgren (ML) hierarchy of many-point velocity distributions) in turbulence. Similarity solutions for one- and two-point velocity distributions are obtained in the viscous, inertial and large-scale ranges of separation distance, from which we can give a reasonable picture of longitudinal and transverse velocity autocorrelation functions for any Reynolds number, even though they are distant from exact solutions of the infinite ML hierarchy. The possibility of non-similarity solutions with other reasonable and more realistic features is unveiled within the same theoretical framework. The cross-independence hypothesis is proved to be inconsistent with the Kolmogorov [1941b. Dissipation of energy in locally isotropic turbulence. Dokl. Akad. Nauk SSSR 32, 16-18.] theory in the inertial range. This is the main factor by which our special strategy (described in the Introduction) is taken for solving this problem.

  15. Van der Waals like behavior and equal area law of two point correlation function of f(R) AdS black holes

    CERN Document Server

    Mo, Jie-Xiong; Lin, Ze-Tao; Zeng, Xiao-Xiong

    2016-01-01

    To gain holographic insight into critical phenomena of $f(R)$ AdS black holes, we investigate their two point correlation function, which is dual to the geodesic length in the bulk. We solve the equation of motion constrained by the boundary condition numerically and probe both the effect of boundary region size and $f(R)$ gravity. Moreover, we introduce an analogous specific heat related to $\delta L$. It is shown in the $T-\delta L$ graph for the case $Q

  16. The International Bleeding Risk Score

    DEFF Research Database (Denmark)

    Laursen, Stig Borbjerg; Laine, L.; Dalton, H.

    2017-01-01

    The International Bleeding Risk Score: A New Risk Score that can Accurately Predict Mortality in Patients with Upper GI-Bleeding.

  17. Revisiting van der Waals like behavior of f(R) AdS black holes via the two point correlation function

    Directory of Open Access Journals (Sweden)

    Jie-Xiong Mo

    2017-05-01

    Full Text Available Van der Waals like behavior of f(R) AdS black holes is revisited via the two point correlation function, which is dual to the geodesic length in the bulk. The equation of motion constrained by the boundary condition is solved numerically and both the effect of boundary region size and f(R) gravity are probed. Moreover, an analogous specific heat related to δL is introduced. It is shown that the T−δL graphs of f(R) AdS black holes exhibit reverse van der Waals like behavior just as the T−S graphs do. Free energy analysis is carried out to determine the first order phase transition temperature T⁎ and the unstable branch in the T−δL curve is removed by a bar T=T⁎. It is shown that the first order phase transition temperature is the same at least to the order of 10−10 for different choices of the parameter b, although the values of free energy vary with b. Our result further supports the former finding that charged f(R) AdS black holes behave much like RN-AdS black holes. We also check the analogous equal area law numerically and find that the relative errors for both the cases θ0=0.1 and θ0=0.2 are small enough. The fitting functions between log|T−Tc| and log|δL−δLc| for both cases are also obtained. It is shown that the slope is around 3, implying that the critical exponent is about 2/3. This result is in accordance with those in earlier literature on specific heat related to the thermal entropy or entanglement entropy.

  18. Fingerprinting of music scores

    Science.gov (United States)

    Irons, Jonathan; Schmucker, Martin

    2004-06-01

    Publishers of sheet music are generally reluctant to distribute their content via the Internet. Although online sheet music distribution has numerous advantages, the potential risk of Intellectual Property Rights (IPR) infringement, e.g. illegal online distribution, stifles any propensity to innovate. While active protection techniques only deter external risk factors, additional technology is necessary to adequately treat further risk factors. For several media types, including music scores, watermarking technology has been developed, which embeds information in data by suitable data modifications. Furthermore, fingerprinting or perceptual hashing methods have been developed and are being applied, especially for audio. These methods allow the identification of content without prior modifications. In this article we motivate the development of watermarking and fingerprinting technologies for sheet music. Starting from potential limitations of watermarking methods, we explain why fingerprinting methods are important for sheet music and address potential applications. Finally, we introduce a concept for fingerprinting of sheet music.

  19. [Scoring--criteria for operability].

    Science.gov (United States)

    Oestern, H J

    1997-01-01

    For therapeutic recommendations, three different kinds of scores are essential: 1. severity scores for trauma; 2. severity scores for mangled extremities; 3. intensive care scores. The severity of polytrauma patients is measurable by the AIS, ISS, RTS, PTS and TRISS, the last of which combines RTS, ISS, age, and mechanism of injury. For mangled extremities there are also different scores available: MESI (Mangled Extremity Syndrome Index) and MESS (Mangled Extremity Severity Score). The aim of these scores is to assist in deciding whether to amputate or to save the extremity. Intensive care scoring indices can be used to evaluate the severity of a systemic inflammatory response syndrome with respect to multiple organ failure. All scores are dynamic values that vary with improvement of therapy.

  20. Relationship of Apgar Scores and Bayley Mental and Motor Scores

    Science.gov (United States)

    Serunian, Sally A.; Broman, Sarah H.

    1975-01-01

    Examined the relationship of newborns' 1-minute Apgar scores to their 8-month Bayley mental and motor scores and to 8-month classifications of their development as normal, suspect, or abnormal. Also investigated relationships between Apgar scores and race, longevity, and birth weight. (JMB)

  1. Influence of the pre-ionization background and simulation of the optical emission of a streamer discharge in preheated air at atmospheric pressure between two point electrodes

    Energy Technology Data Exchange (ETDEWEB)

    Bourdon, A; Bonaventura, Z [Ecole Centrale Paris, EM2C Laboratory, UPR CNRS 288, Grande voie des vignes, 92295 Chatenay-Malabry Cedex (France); Celestin, S, E-mail: anne.bourdon@em2c.ecp.f [Communications and Space Sciences Laboratory, Department of Electrical Engineering, Pennsylvania State University, University Park, PA 16802 (United States)

    2010-06-15

    This paper presents simulations of positive and negative streamers propagating between two point electrodes in preheated air at atmospheric pressure. As many discharges have occurred before the simulated one, seed charges are taken into account in the interelectrode gap. First, for a pre-ionization background of 10{sup 9} cm{sup -3}, we have studied the influence of the data set used for transport parameters and reaction rates for air on the simulation results. We have compared results obtained in 1997 using input parameters from Morrow and Lowke and from Kulikovsky. Deviations as large as 20% in streamer characteristics (i.e. electric field in the streamer head and body, streamer velocity, streamer radius, streamer electron density) have been observed for this point-to-point configuration. Second, we have studied the influence of the pulsed voltage frequency on the discharge structure. For the studied discharge regime, a change in the applied voltage frequency corresponds to a change in the pre-ionization background. In this work, we have considered a wide range of pre-ionization values from 10{sup 4} up to 10{sup 9} cm{sup -3}. We have noted that the value of the pre-ionization background has a small influence on the electron density, electric field and location of the negative streamer head. Conversely, it has a significant influence on the positive streamer characteristics. Finally, we have compared instantaneous and time-averaged optical emissions of the three band systems of N{sub 2} and N{sub 2}{sup +} (1PN{sub 2}, 2PN{sub 2} and 1NN{sub 2}{sup +}) during the discharge propagation. We have shown that the emission of the 2PN{sub 2} is the strongest of the three bands, in agreement with experimental observations. 
It is interesting to note that even with a short time averaging of a few nanoseconds, which corresponds to currently used instruments, the structure of the time-averaged emission of the 2PN{sub 2} is different from the instantaneous one and shows

  2. "Two-point" assembling of Zn(II) and Co(II) metalloporphyrins derivatized with a crown ether substituent in Langmuir and Langmuir-Blodgett films.

    Science.gov (United States)

    Noworyta, Krzysztof; Marczak, Renata; Tylenda, Rafal; Sobczak, Janusz W; Chitta, Raghu; Kutner, Wlodzimierz; D'Souza, Francis

    2007-02-27

    The effect of "two-point" interactions of Zn(II) and Co(II) metalloporphyrins, bearing 15-crown-5 ether peripheral substituents, on their assembling in Langmuir and Langmuir-Blodgett (LB) films was investigated. That is, the central metal ion of the porphyrin was axially ligated by a nitrogen-containing ligand in the emerged part of the Langmuir film on the one hand, while a suitably selected cation present in the subphase solution was supramolecularly complexed by the crown ether moiety in the submerged part of the film on the other. The compression and polarity properties of the Langmuir films of the derivatized free-base 5,10,15-triphenyl-20-(benzo-15-crown-5)porphyrin, H2(TPMCP), and the corresponding cobalt(II) and zinc(II) metalloporphyrins, denoted as Co(TPMCP) and Zn(TPMCP), respectively, as well as inclusion complexes of the metalloporphyrins with selected cations were investigated. For the axial ligation of Zn(II) and Co(II), pyrazine (pyz) and 4,4'-bipyridine (bpy) aromatic as well as piperazine (ppz) and 1,4-diazabicyclo[2.2.2]octane (DABCO) cyclic heteroaliphatic ligands were selected. The films were formed on the water subphase solution in the absence and presence of LiCl, NaCl, or NH4Cl. The Langmuir films were built of monolayer J-type aggregates of tilted porphyrin macrocycles. The porphyrins formed rather labile complexes with the cations in the subphase. Nevertheless, the XPS analysis revealed that these cations were LB transferred together with the porphyrins onto solid substrates. In the Co(TPMCP) Langmuir films formed on the water subphases, Co(II) was complexed by aromatic but not cyclic heteroaliphatic ligands, while, in these films formed on the NaCl subphase solutions, the metalloporphyrin was also complexed by DABCO. In Langmuir films spread on alkaline subphase solutions, both aromatic and heteroaliphatic ligands formed complexes with Co(TPMCP) of different stoichiometries. The X-ray reflectivity and GIXD measurements

  3. Credit Scoring Modeling

    Directory of Open Access Journals (Sweden)

    Siana Halim

    2014-01-01

    Full Text Available It is generally easier to predict defaults accurately if a large data set (including defaults) is available for estimating the prediction model. This puts not only small banks, which tend to have smaller data sets, at a disadvantage. It can also pose a problem for large banks that began to collect their own historical data only recently, or banks that recently introduced a new rating system. We used a Bayesian methodology that enables banks with small data sets to improve their default probability estimates. Another advantage of the Bayesian method is that it provides a natural way of dealing with structural differences between a bank’s internal data and additional, external data. In practice, the true scoring function may differ across the data sets, the small internal data set may contain information that is missing in the larger external data set, or the variables in the two data sets are not exactly the same but related. The Bayesian method can handle such problems.
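    The idea of letting a large external data set inform the prior, which a bank's small internal sample then updates, can be sketched with a minimal Beta-binomial model. The counts, the `weight` discount for structural differences between the data sets, and the conjugate-prior choice are all illustrative assumptions; the paper's actual Bayesian model is richer than this:

```python
# Hedged sketch: a Beta(a, b) prior built from (down-weighted) external
# default counts, updated with a bank's small internal portfolio. This
# illustrates the principle only, not the paper's methodology.

def posterior_default_rate(prior_defaults, prior_loans,
                           internal_defaults, internal_loans, weight=1.0):
    """Posterior mean default probability under a Beta-binomial model.
    `weight` < 1 discounts the external data when the two portfolios
    are believed to differ structurally."""
    a = weight * prior_defaults + internal_defaults
    b = weight * (prior_loans - prior_defaults) \
        + (internal_loans - internal_defaults)
    return a / (a + b)

# Hypothetical numbers: external portfolio with 200 defaults in 10,000
# loans; internal sample with 3 defaults in 50 loans.
print(round(posterior_default_rate(200, 10000, 3, 50, weight=0.1), 4))
```

    With only 50 internal loans, the raw internal default rate (6%) is noisy; the prior pulls the estimate back toward the external rate (2%).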

  4. A Prognostic Scoring Tool for Cesarean Organ/Space Surgical Site Infections: Derivation and Internal Validation.

    Science.gov (United States)

    Assawapalanggool, Srisuda; Kasatpibal, Nongyao; Sirichotiyakul, Supatra; Arora, Rajin; Suntornlimsiri, Watcharin

    Organ/space surgical site infections (SSIs) are serious complications after cesarean delivery. However, no scoring tool to predict these complications has yet been developed. This study sought to develop and validate a prognostic scoring tool for cesarean organ/space SSIs. Data on cases and non-cases of cesarean organ/space SSI between January 1, 2007 and December 31, 2012 from a tertiary care hospital in Thailand were analyzed. Stepwise multivariable logistic regression was used to select the best predictor combination, and the predictors' coefficients were transformed into a risk scoring tool. The positive likelihood ratio for each risk category and the area under the receiver operating characteristic (AUROC) curve were analyzed on total scores. Internal validation using bootstrap re-sampling was performed to test reproducibility. The predictors of 243 organ/space SSIs from 4,988 eligible cesarean delivery cases comprised the presence of foul-smelling amniotic fluid (four points), vaginal examination five or more times before incision (two points), wound class III or greater (two points), being referred from a local setting (two points), hemoglobin less than 11 g/dL (one point), and ethnic minorities (one point). The likelihood ratios of cesarean organ/space SSIs with 95% confidence intervals among the low (total score of 0-1 point), medium (total score of 2-5 points), and high risk (total score of ≥6 points) categories were 0.11 (0.07-0.19), 1.03 (0.89-1.18), and 13.25 (10.87-16.14), respectively. Both AUROCs of the derivation and validation data were comparable (87.57% versus 86.08%; p = 0.418). This scoring tool showed a high predictive ability regarding cesarean organ/space SSIs on the derivation data, and reproducibility was demonstrated on internal validation. It could assist practitioners in prioritizing patient care and management depending on risk category and in decreasing SSI rates in cesarean deliveries.
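    The published point weights and risk categories translate directly into a small scoring routine. The sketch below uses the values reported in the abstract, but the function names and interface are illustrative, not the authors' implementation:

```python
# Sketch of the cesarean organ/space SSI scoring tool described above.
# Point values and category cut-offs come from the abstract; the code
# itself is a hypothetical illustration.

def ssi_risk_score(foul_amniotic_fluid, vaginal_exams_ge5, wound_class_ge3,
                   referred_from_local, hemoglobin_lt11, ethnic_minority):
    """Sum the six weighted predictors reported in the derivation study."""
    score = 0
    score += 4 if foul_amniotic_fluid else 0   # foul-smelling amniotic fluid
    score += 2 if vaginal_exams_ge5 else 0     # >=5 vaginal exams before incision
    score += 2 if wound_class_ge3 else 0       # wound class III or greater
    score += 2 if referred_from_local else 0   # referred from a local setting
    score += 1 if hemoglobin_lt11 else 0       # hemoglobin < 11 g/dL
    score += 1 if ethnic_minority else 0       # ethnic minority
    return score

def risk_category(score):
    """Map a total score to the categories reported in the abstract."""
    if score <= 1:
        return "low"      # 0-1 points, LR+ 0.11
    if score <= 5:
        return "medium"   # 2-5 points, LR+ 1.03
    return "high"         # >=6 points, LR+ 13.25

# 4 + 2 + 1 = 7 points -> "high"
print(risk_category(ssi_risk_score(True, True, False, False, True, False)))
```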

  5. Developmental Sentence Scoring for Japanese

    Science.gov (United States)

    Miyata, Susanne; MacWhinney, Brian; Otomo, Kiyoshi; Sirai, Hidetosi; Oshima-Takane, Yuriko; Hirakawa, Makiko; Shirai, Yasuhiro; Sugiura, Masatoshi; Itoh, Keiko

    2013-01-01

    This article reports on the development and use of the Developmental Sentence Scoring for Japanese (DSSJ), a new morpho-syntactical measure for Japanese constructed after the model of Lee's English Developmental Sentence Scoring model. Using this measure, the authors calculated DSSJ scores for 84 children divided into six age groups between 2;8…

  6. Do Test Scores Buy Happiness?

    Science.gov (United States)

    McCluskey, Neal

    2017-01-01

    Since at least the enactment of No Child Left Behind in 2002, standardized test scores have served as the primary measures of public school effectiveness. Yet, such scores fail to measure the ultimate goal of education: maximizing happiness. This exploratory analysis assesses nation level associations between test scores and happiness, controlling…

  7. Line Lengths and Starch Scores.

    Science.gov (United States)

    Moriarty, Sandra E.

    1986-01-01

    Investigates readability of different line lengths in advertising body copy, hypothesizing a normal curve with lower scores for shorter and longer lines, and scores above the mean for lines in the middle of the distribution. Finds support for lower scores for short lines and some evidence of two optimum line lengths rather than one. (SKC)

  8. [Propensity score matching in SPSS].

    Science.gov (United States)

    Huang, Fuqiang; DU, Chunlin; Sun, Menghui; Ning, Bing; Luo, Ying; An, Shengli

    2015-11-01

    To realize propensity score matching in the PS Matching module of SPSS and interpret the analysis results. The R software, a plug-in linking R with the corresponding version of SPSS, and a propensity score matching package were installed. A PS matching module was added to the SPSS interface, and its use was demonstrated with test data. Score estimation and nearest-neighbor matching were achieved with the PS matching module, and the results of qualitative and quantitative statistical description and evaluation were presented in the form of matching graphs. Propensity score matching can be accomplished conveniently using SPSS software.
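    For readers without SPSS, the core matching step can be sketched in a few lines of Python. This greedy 1:1 nearest-neighbor match on precomputed propensity scores, with a caliper, only illustrates the idea; it is not the PS Matching module's actual algorithm, and the scores below are invented:

```python
# Minimal greedy nearest-neighbor propensity score matching sketch.
# Assumes propensity scores were already estimated (e.g. by logistic
# regression); matching is 1:1 without replacement within a caliper.

def nearest_neighbor_match(treated, control, caliper=0.05):
    """treated/control: dicts of id -> propensity score.
    Returns a list of (treated_id, control_id) matched pairs."""
    pairs = []
    available = dict(control)          # controls not yet matched
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        # closest remaining control by absolute score distance
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        if abs(available[c_id] - t_ps) <= caliper:  # enforce the caliper
            pairs.append((t_id, c_id))
            del available[c_id]        # match without replacement
    return pairs

pairs = nearest_neighbor_match({"t1": 0.31, "t2": 0.62},
                               {"c1": 0.30, "c2": 0.60, "c3": 0.90})
print(pairs)  # [('t1', 'c1'), ('t2', 'c2')]
```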

  9. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as the estimates of population average confidence scores. The latter can be used...... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  10. Modelling sequentially scored item responses

    NARCIS (Netherlands)

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is c

  11. Classification of current scoring functions.

    Science.gov (United States)

    Liu, Jie; Wang, Renxiao

    2015-03-23

    Scoring functions are a class of computational methods widely applied in structure-based drug design for evaluating protein-ligand interactions. Dozens of scoring functions have been published since the early 1990s. In the literature, scoring functions are typically classified as force-field-based, empirical, and knowledge-based. This classification scheme has been quoted for more than a decade and is still repeatedly quoted by some recent publications. Unfortunately, it does not reflect the recent progress in this field. Besides, the naming convention used for describing different types of scoring functions has been somewhat jumbled in the literature, which can be confusing for newcomers to this field. Here, we express our viewpoint on an up-to-date classification scheme and appropriate naming convention for current scoring functions. We propose that they can be classified into physics-based methods, empirical scoring functions, knowledge-based potentials, and descriptor-based scoring functions. We also outline the major differences and connections between different categories of scoring functions.

  12. The Machine Scoring of Writing

    Science.gov (United States)

    McCurry, Doug

    2010-01-01

    This article provides an introduction to the kind of computer software that is used to score student writing in some high stakes testing programs, and that is being promoted as a teaching and learning tool to schools. It sketches the state of play with machines for the scoring of writing, and describes how these machines work and what they do.…

  13. Skyrocketing Scores: An Urban Legend

    Science.gov (United States)

    Krashen, Stephen

    2005-01-01

    A new urban legend claims, "As a result of the state dropping bilingual education, test scores in California skyrocketed." Krashen disputes this theory, pointing out that other factors offer more logical explanations of California's recent improvements in SAT-9 scores. He discusses research on the effects of California's Proposition 227,…

  14. Quadratic prediction of factor scores

    NARCIS (Netherlands)

    Wansbeek, T

    1999-01-01

    Factor scores are naturally predicted by means of their conditional expectation given the indicators y. Under normality this expectation is linear in y, but in general it is an unknown function of y. It is discussed that under nonnormality factor scores can be more precisely predicted by a quadratic

  15. Trends in Classroom Observation Scores

    Science.gov (United States)

    Casabianca, Jodi M.; Lockwood, J. R.; McCaffrey, Daniel F.

    2015-01-01

    Observations and ratings of classroom teaching and interactions collected over time are susceptible to trends in both the quality of instruction and rater behavior. These trends have potential implications for inferences about teaching and for study design. We use scores on the Classroom Assessment Scoring System-Secondary (CLASS-S) protocol from…

  16. D-score: a search engine independent MD-score.

    Science.gov (United States)

    Vaudel, Marc; Breiter, Daniela; Beck, Florian; Rahnenführer, Jörg; Martens, Lennart; Zahedi, René P

    2013-03-01

    While peptides carrying PTMs are routinely identified in gel-free MS, the localization of the PTMs onto the peptide sequences remains challenging. Search engine scores of secondary peptide matches have been used in different approaches in order to infer the quality of site inference, by penalizing the localization whenever the search engine similarly scored two candidate peptides with different site assignments. In the present work, we show how the estimation of posterior error probabilities for peptide candidates allows the estimation of a PTM score called the D-score, for multiple search engine studies. We demonstrate the applicability of this score to three popular search engines: Mascot, OMSSA, and X!Tandem, and evaluate its performance using an already published high resolution data set of synthetic phosphopeptides. For those peptides with phosphorylation site inference uncertainty, the number of spectrum matches with correctly localized phosphorylation increased by up to 25.7% when compared to using Mascot alone, although the actual increase depended on the fragmentation method used. Since this method relies only on search engine scores, it can be readily applied to the scoring of the localization of virtually any modification at no additional experimental or in silico cost.
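    The principle behind such a localization delta score can be sketched as follows. This is a hedged illustration only: the paper's exact D-score formula may differ, but the underlying idea of penalizing a localization whose runner-up site assignment scores nearly as well is the same, and the function name and PEP values below are hypothetical:

```python
# Sketch of a PTM localization delta score: convert the posterior error
# probabilities (PEPs) of candidate site assignments to scores and take
# the gap between the best and second-best assignment. Illustrative only.

import math

def delta_localization_score(peps):
    """peps: PEPs of the candidate site assignments for one peptide.
    Returns the score gap between best and second-best assignment;
    a small gap means the site inference is uncertain."""
    scores = sorted((-10 * math.log10(p) for p in peps), reverse=True)
    if len(scores) < 2:
        return scores[0]          # only one candidate: no ambiguity
    return scores[0] - scores[1]

# A clear gap (very low PEP vs. high PEPs) -> confident localization.
print(delta_localization_score([0.001, 0.5, 0.8]))
```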

  17. Obstetrical disseminated intravascular coagulation score.

    Science.gov (United States)

    Kobayashi, Takao

    2014-06-01

    Obstetrical disseminated intravascular coagulation (DIC) is usually a very acute, serious complication of pregnancy. The obstetrical DIC score helps with making a prompt diagnosis and starting treatment early. This DIC score, in which higher scores are given for clinical parameters rather than for laboratory parameters, has three components: (i) the underlying diseases; (ii) the clinical symptoms; and (iii) the laboratory findings (coagulation tests). It is justifiably appropriate to initiate therapy for DIC when the obstetrical DIC score reaches 8 points or more before obtaining the results of coagulation tests. Improvement of blood coagulation tests and clinical symptoms are essential to the efficacy evaluation for treatment after a diagnosis of obstetrical DIC. Therefore, the efficacy evaluation criteria for obstetrical DIC are also defined to enable follow-up of the clinical efficacy of DIC therapy.
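    Applying the score can be sketched as below. Only the three component groups and the 8-point treatment threshold come from the abstract; the example point values are hypothetical placeholders, not the scheme's actual weights:

```python
# Illustrative sketch of how the obstetrical DIC score is used. The
# component groups and the >=8-point threshold are from the abstract;
# the numeric inputs in the example are invented.

TREATMENT_THRESHOLD = 8   # initiate DIC therapy at 8 points or more

def obstetrical_dic_score(underlying_disease_pts, clinical_symptom_pts,
                          coagulation_test_pts=0):
    """Total the three component groups. Laboratory (coagulation test)
    points default to 0 because therapy may justifiably start before
    coagulation results are available."""
    return underlying_disease_pts + clinical_symptom_pts + coagulation_test_pts

def start_therapy(score):
    """True when the score alone already justifies initiating therapy."""
    return score >= TREATMENT_THRESHOLD

print(start_therapy(obstetrical_dic_score(7, 4)))  # 11 points -> True
```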

  18. What Is the Apgar Score?

    Science.gov (United States)

    ... 2 being the best score: Appearance (skin color), Pulse (heart rate), Grimace response (reflexes) ...

  19. From Rasch scores to regression

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

    Rasch models provide a framework for measurement and modelling latent variables. Having measured a latent variable in a population, a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale properties....... This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables and latent regression models based on the distribution of the score.

  20. Commercial Building Energy Asset Score

    Energy Technology Data Exchange (ETDEWEB)

    2017-05-26

    This software (Asset Scoring Tool) is designed to help building owners and managers gain insight into the as-built efficiency of their buildings. It is a web tool where users can enter their building information and obtain an asset score report. The asset score report consists of modeled building energy use (by end use and by fuel type), building systems (envelope, lighting, heating, cooling, service hot water) evaluations, and recommended energy efficiency measures. The intended users are building owners and operators who have limited knowledge of building energy efficiency. The scoring tool collects minimal building data (~20 data entries) from users and builds a full-scale energy model using the inference functionalities of the Facility Energy Decision System (FEDS). The scoring tool runs real-time building energy simulation using EnergyPlus and performs life-cycle cost analysis using FEDS. An API is also under development to allow third-party applications to exchange data with the web service of the scoring tool.

  1. Skin scoring in systemic sclerosis

    DEFF Research Database (Denmark)

    Zachariae, Hugh; Bjerring, Peter; Halkier-Sørensen, Lars

    1994-01-01

    Forty-one patients with systemic sclerosis were investigated with a new and simple skin score method measuring the degree of thickening and pliability in seven regions together with area involvement in each region. The highest values were, as expected, found in diffuse cutaneous systemic sclerosis...... (type III SS) and the lowest in limited cutaneous systemic sclerosis (type I SS) with no lesions extending above wrists and ankles. A positive correlation was found to the aminoterminal propeptide of type III procollagen, a serological marker for synthesis of type III collagen. The skin score...

  3. Developing Scoring Algorithms (Earlier Methods)

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  4. Re-Scoring the Game’s Score

    DEFF Research Database (Denmark)

    Gasselseder, Hans-Peter

    2014-01-01

    This study explores immersive presence as well as emotional valence and arousal in the context of dynamic and non-dynamic music scores in the 3rd person action-adventure video game genre while also considering relevant personality traits of the player. 60 subjects answered self-report questionnai...... that a compatible integration of global and local goals in the ludonarrative contributes to a motivational-emotional reinforcement that can be gained through musical feedback. Shedding light on the implications of music dramaturgy within a semantic ecology paradigm, the perception of varying relational attributes...

  5. Estimating Decision Indices Based on Composite Scores

    Science.gov (United States)

    Knupp, Tawnya Lee

    2009-01-01

    The purpose of this study was to develop an IRT model that would enable the estimation of decision indices based on composite scores. The composite scores, defined as a combination of unidimensional test scores, were either a total raw score or an average scale score. Additionally, estimation methods for the normal and compound multinomial models…

  6. Feasibility Study for Acquiring Sa(z) Value of Background Aerosol Using Two-Point Calibration Method

    Institute of Scientific and Technical Information of China (English)

    刘厚通; 赵建新; 莫须涛; 韩玉峰

    2013-01-01

    Feasibility of using the two-point calibration method to acquire the aerosol extinction-to-backscattering ratio is explored. The two-point calibration method is one of the important methods for acquiring the aerosol extinction-to-backscattering ratio. The relationship between the distance of the two calibration points and the accuracy of the aerosol extinction-to-backscattering ratio acquired by the two-point calibration method is discussed. The aerosol extinction-to-backscattering ratio obtained by the two-point calibration method is reliable when the distance between the two points is greater than 1 km. The influence of the distance between the two calibration points on the inversion results of the aerosol extinction-to-backscattering ratio from one calibration point to another is analyzed. The influence of two different iterative methods on the inversion results is also discussed, as is the feasibility of beforehand error estimation of the inversion results when the two-point calibration method is used. Actual inversion results show that, when the distance between the two calibration points is greater than 1 km and the error of the aerosol extinction coefficient at the calibration points is 5%, the error of the retrieved aerosol extinction-to-backscattering ratio between the two calibration points generally does not exceed 6%.

  7. Genetic effect on apgar score

    Directory of Open Access Journals (Sweden)

    Carla Franchi-Pinto

    1999-03-01

    Full Text Available Intraclass correlation coefficients for one- and five-min Apgar scores of 604 twin pairs born at a southeastern Brazilian hospital were calculated, after adjusting these scores for gestational age and sex. The data support a genetic hypothesis only for the 1-min Apgar score, probably because it is less affected by the environment than the score taken 4 min later, after the newborns have been under the care of a neonatology team. First-born twins exhibited, on average, better clinical conditions than second-born twins: the former showed a significantly lower proportion of Apgar scores under seven, both at 1 min (17.5% vs. 29.8%) and at 5 min (7.2% vs. 11.9%). The proportion of children born with "good" Apgar scores was significantly smaller among twins than among 1,522 singletons born at the same hospital. Among the latter, 1- and 5-min Apgar scores under seven were exhibited by 9.2% and 3.4% of newborns, respectively.
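
The intraclass correlations above can be illustrated with a one-way ANOVA estimate on twin-pair data; a minimal sketch, where the pair values and the ICC(1) formulation are assumptions for illustration, not the study's data:

```python
# Illustrative one-way ANOVA intraclass correlation for twin-pair scores
# (synthetic data; not the study's actual Apgar values).
def intraclass_correlation(pairs):
    """ICC(1) for a list of (score1, score2) twin pairs."""
    k = 2  # members per pair
    n = len(pairs)
    grand = sum(a + b for a, b in pairs) / (k * n)
    # Between-pair and within-pair mean squares
    msb = k * sum(((a + b) / k - grand) ** 2 for a, b in pairs) / (n - 1)
    msw = sum((a - (a + b) / 2) ** 2 + (b - (a + b) / 2) ** 2
              for a, b in pairs) / n
    return (msb - msw) / (msb + (k - 1) * msw)

pairs = [(9, 9), (8, 7), (6, 5), (9, 8), (7, 7), (5, 4)]
icc = intraclass_correlation(pairs)
print(round(icc, 3))  # → 0.895
```

A high ICC means twins within a pair score more alike than unrelated newborns, the pattern the genetic hypothesis predicts.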

  8. Interpreting force concept inventory scores: Normalized gain and SAT scores

    Directory of Open Access Journals (Sweden)

    Vincent P. Coletta

    2007-05-01

    Full Text Available Preinstruction SAT scores and normalized gains (G) on the force concept inventory (FCI) were examined for individual students in interactive engagement (IE) courses in introductory mechanics at one high school (N=335) and one university (N=292), and strong, positive correlations were found for both populations (r=0.57 and r=0.46, respectively). These correlations are likely due to the importance of cognitive skills and abstract reasoning in learning physics. The larger correlation coefficient for the high school population may be a result of the much shorter time interval between taking the SAT and studying mechanics, because the SAT may provide a more current measure of abilities when high school students begin the study of mechanics than it does for college students, who begin mechanics years after the test is taken. In prior research a strong correlation between FCI G and scores on Lawson’s Classroom Test of Scientific Reasoning for students from the same two schools was observed. Our results suggest that, when interpreting class average normalized FCI gains and comparing different classes, it is important to take into account the variation of students’ cognitive skills, as measured either by the SAT or by Lawson’s test. While Lawson’s test is not commonly given to students in most introductory mechanics courses, SAT scores provide a readily available alternative means of taking account of students’ reasoning abilities. Knowing the students’ cognitive level before instruction also allows one to alter instruction or to use an intervention designed to improve students’ cognitive level.
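
The normalized gain is conventionally computed as G = (post − pre)/(100 − pre) for percentage scores; a minimal sketch correlating it with SAT totals, using invented numbers rather than the study's data:

```python
# Normalized gain G = (post - pre) / (100 - pre) for percentage FCI scores,
# and its Pearson correlation with preinstruction SAT totals (synthetic data).
def normalized_gain(pre, post):
    return (post - pre) / (100.0 - pre)

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

sat = [1050, 1190, 1300, 1420, 1540]  # hypothetical SAT totals
gains = [normalized_gain(p, q) for p, q in
         [(30, 45), (35, 60), (40, 70), (45, 80), (50, 90)]]
print(round(pearson_r(sat, gains), 2))
```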

  9. Interpreting force concept inventory scores: Normalized gain and SAT scores

    Directory of Open Access Journals (Sweden)

    Jeffrey J. Steinert

    2007-05-01

    Full Text Available Preinstruction SAT scores and normalized gains (G) on the force concept inventory (FCI) were examined for individual students in interactive engagement (IE) courses in introductory mechanics at one high school (N=335) and one university (N=292), and strong, positive correlations were found for both populations (r=0.57 and r=0.46, respectively). These correlations are likely due to the importance of cognitive skills and abstract reasoning in learning physics. The larger correlation coefficient for the high school population may be a result of the much shorter time interval between taking the SAT and studying mechanics, because the SAT may provide a more current measure of abilities when high school students begin the study of mechanics than it does for college students, who begin mechanics years after the test is taken. In prior research a strong correlation between FCI G and scores on Lawson’s Classroom Test of Scientific Reasoning for students from the same two schools was observed. Our results suggest that, when interpreting class average normalized FCI gains and comparing different classes, it is important to take into account the variation of students’ cognitive skills, as measured either by the SAT or by Lawson’s test. While Lawson’s test is not commonly given to students in most introductory mechanics courses, SAT scores provide a readily available alternative means of taking account of students’ reasoning abilities. Knowing the students’ cognitive level before instruction also allows one to alter instruction or to use an intervention designed to improve students’ cognitive level.

  10. Bias Adjusted Precipitation Threat Scores

    Directory of Open Access Journals (Sweden)

    F. Mesinger

    2008-04-01

    Full Text Available Among the wide variety of performance measures available for the assessment of skill of deterministic precipitation forecasts, the equitable threat score (ETS) might well be the one used most frequently. It is typically used in conjunction with the bias score. However, apart from its mathematical definition the meaning of the ETS is not clear. It has been pointed out (Mason, 1989; Hamill, 1999) that forecasts with a larger bias tend to have a higher ETS. Even so, the present author has not seen this having been accounted for in any of numerous papers that in recent years have used the ETS along with bias "as a measure of forecast accuracy".

    A method to adjust the threat score (TS) or the ETS so as to arrive at the values that correspond to unit bias, in order to show the model's or forecaster's accuracy in placing precipitation, was proposed earlier by the present author (Mesinger and Brill): the so-called dH/dF method. A serious deficiency, however, has since been noted with the dH/dF method, in that the hypothetical function it arrives at to interpolate or extrapolate the observed value of hits to unit bias can have values of hits greater than forecasts when the forecast area tends to zero. Another method is proposed here, based on the assumption that the increase in hits per unit increase in false alarms is proportional to the yet-unhit area. This new method removes the deficiency of the dH/dF method. Examples of its performance for 12 months of forecasts by three NCEP operational models are given.
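
The bias and threat scores discussed above follow from a standard 2×2 contingency table; a small sketch with illustrative counts (not the NCEP verification data):

```python
# Bias, threat score (TS), and equitable threat score (ETS) from a 2x2
# precipitation contingency table: hits H, false alarms F, misses M,
# correct negatives C (counts are illustrative, not from the paper).
def bias(h, f, m):
    return (h + f) / (h + m)          # forecast area / observed area

def threat_score(h, f, m):
    return h / (h + f + m)

def equitable_threat_score(h, f, m, c):
    n = h + f + m + c
    h_random = (h + f) * (h + m) / n  # hits expected by chance
    return (h - h_random) / (h + f + m - h_random)

h, f, m, c = 50, 30, 20, 900
print(round(bias(h, f, m), 3),
      round(threat_score(h, f, m), 3),
      round(equitable_threat_score(h, f, m, c), 3))  # → 1.143 0.5 0.47
```

With these numbers the forecast is over-biased (bias > 1), which, per Mason and Hamill, tends to inflate the ETS relative to a unit-bias forecast.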

  11. The HEART score for chest pain patients

    NARCIS (Netherlands)

    Backus, B.E.

    2012-01-01

    The HEART score was developed to improve risk stratification in chest pain patients in the emergency department (ED). This thesis describes series of validation studies of the HEART score and sub studies for individual elements of the score. The predictive value of the HEART score for the occurrence

  12. Scoring and Standard Setting with Standardized Patients.

    Science.gov (United States)

    Norcini, John J.; And Others

    1993-01-01

    The continuous method of scoring a performance test composed of standardized patients was compared with a derivative method that assigned each of the 131 examinees (medical residents) a dichotomous score, and use of Angoff's method with these scoring methods was studied. Both methods produce reasonable means and distributions of scores. (SLD)

  13. Score lists in multipartite hypertournaments

    CERN Document Server

    Pirzada, Shariefuddin; Iványi, Antal

    2010-01-01

    Given non-negative integers $n_{i}$ and $\alpha_{i}$ with $0 \leq \alpha_{i} \leq n_{i}$ $(i=1,2,\ldots,k)$, an $[\alpha_{1},\alpha_{2},\ldots,\alpha_{k}]$-$k$-partite hypertournament on $\sum_{1}^{k}n_{i}$ vertices is a $(k+1)$-tuple $(U_{1},U_{2},\ldots,U_{k},E)$, where the $U_{i}$ are $k$ vertex sets with $|U_{i}|=n_{i}$, and $E$ is a set of $\sum_{1}^{k}\alpha_{i}$-tuples of vertices, called arcs, with exactly $\alpha_{i}$ vertices from $U_{i}$, such that for any $\sum_{1}^{k}\alpha_{i}$-subset $\cup_{1}^{k}U_{i}^{\prime}$ of $\cup_{1}^{k}U_{i}$, $E$ contains exactly one of the $(\sum_{1}^{k} \alpha_{i})!$ $\sum_{1}^{k}\alpha_{i}$-tuples whose entries belong to $\cup_{1}^{k}U_{i}^{\prime}$. We obtain necessary and sufficient conditions for $k$ lists of non-negative integers in non-decreasing order to be the losing score lists and to be the score lists of some $k$-partite hypertournament.

  14. Disclosure Risk from Factor Scores

    Directory of Open Access Journals (Sweden)

    Drechsler Jörg

    2014-03-01

    Full Text Available Remote access can be a powerful tool for providing data access for external researchers. Since the microdata never leave the secure environment of the data-providing agency, alterations of the microdata can be kept to a minimum. Nevertheless, remote access is not free from risk. Many statistical analyses that do not seem to provide disclosive information at first sight can be used by sophisticated intruders to reveal sensitive information. For this reason the list of allowed queries is usually restricted in a remote setting. However, it is not always easy to identify problematic queries. We therefore strongly support the argument that has been made by other authors: that all queries should be monitored carefully and that any microlevel information should always be withheld. As an illustrative example, we use factor score analysis, for which the output of interest - the factor loading of the variables - seems to be unproblematic. However, as we show in the article, the individual factor scores that are usually returned as part of the output can be used to reveal sensitive information. Our empirical evaluations based on a German establishment survey emphasize that this risk is far from a purely theoretical problem.

  15. The Hamilton-Jacobi theory for solving two-point boundary value problems: Theory and numerics with application to spacecraft formation flight, optimal control and the study of phase space structure

    Science.gov (United States)

    Guibout, Vincent M.

    This dissertation has been motivated by the need for new methods to address complex problems that arise in spacecraft formation design. As a direct result of this motivation, a general methodology for solving two-point boundary value problems for Hamiltonian systems has been found. Using the Hamilton-Jacobi theory in conjunction with the canonical transformation induced by the phase flow, it is shown that generating functions solve two-point boundary value problems. Traditional techniques for addressing these problems are iterative and require an initial guess. The method presented in this dissertation solves boundary value problems at the cost of a single function evaluation, although it requires knowledge of at least one generating function. Properties of this method are presented. Specifically, we show that it includes perturbation theory and generalizes it to nonlinear systems. Most importantly, it predicts the existence of multiple solutions and allows one to recover all of these solutions. To demonstrate the efficiency of this approach, an algorithm for computing the generating functions is proposed and its convergence properties are studied. As the method developed in this work is based on the Hamiltonian structure of the problem, particular attention must be paid to the numerics of the algorithm. To address this, a general framework for studying the discretization of certain dynamical systems is developed. This framework generalizes earlier work on discretization of Lagrangian and Hamiltonian systems on tangent and cotangent bundles respectively. In addition, it provides new insights into some symplectic integrators and leads to a new discrete Hamilton-Jacobi theory. Most importantly, it allows one to discretize optimal control problems. In particular, a discrete maximum principle is presented. This dissertation also investigates applications of the proposed method to solve two-point boundary value problems. In particular, new techniques for designing
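
For a linear Hamiltonian system, the claim that a two-point boundary value problem can be solved at the cost of a single function evaluation is easy to see: the phase flow is known in closed form, so the position-to-position BVP needs no iteration or initial guess. A minimal sketch for a harmonic oscillator (the system choice is illustrative, not from the dissertation):

```python
import math

# For a harmonic oscillator x'' = -x, the phase flow is linear:
#   x(T) = x0*cos(T) + v0*sin(T),  v(T) = -x0*sin(T) + v0*cos(T).
# A position-to-position two-point BVP (x0 given at t=0, xT given at t=T)
# is then solved for the unknown initial velocity in one evaluation,
# mirroring the generating-function approach for this linear case.
def solve_bvp_oscillator(x0, xT, T):
    if abs(math.sin(T)) < 1e-12:
        raise ValueError("T is a conjugate time; the two-point BVP is degenerate")
    return (xT - x0 * math.cos(T)) / math.sin(T)

x0, xT, T = 1.0, 0.5, math.pi / 2
v0 = solve_bvp_oscillator(x0, xT, T)
# Verify by propagating the flow forward:
x_end = x0 * math.cos(T) + v0 * math.sin(T)
print(round(v0, 6), round(x_end, 6))  # → 0.5 0.5
```

The degenerate case (sin T = 0) corresponds to a conjugate point, where the BVP has zero or infinitely many solutions, the multiple-solution phenomenon the dissertation's method is designed to capture.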

  16. Cooperative catalysis of metal and O-H···O/sp3-C-H···O two-point hydrogen bonds in alcoholic solvents: Cu-catalyzed enantioselective direct alkynylation of aldehydes with terminal alkynes.

    Science.gov (United States)

    Ishii, Takaoki; Watanabe, Ryo; Moriya, Toshimitsu; Ohmiya, Hirohisa; Mori, Seiji; Sawamura, Masaya

    2013-09-27

    Catalyst-substrate hydrogen bonds in artificial catalysts usually occur in aprotic solvents, but not in protic solvents, in contrast to enzymatic catalysis. We report a case in which ligand-substrate hydrogen-bonding interactions cooperate with a transition-metal center in alcoholic solvents for enantioselective catalysis. Copper(I) complexes with prolinol-based hydroxy amino phosphane chiral ligands catalytically promoted the direct alkynylation of aldehydes with terminal alkynes in alcoholic solvents to afford nonracemic secondary propargylic alcohols with high enantioselectivities. Quantum-mechanical calculations of enantiodiscriminating transition states show the occurrence of a nonclassical sp(3)-C-H···O hydrogen bond as a secondary interaction between the ligand and substrate, which results in highly directional catalyst-substrate two-point hydrogen bonding.

  17. Predicting Retear after Repair of Full-Thickness Rotator Cuff Tear: Two-Point Dixon MR Imaging Quantification of Fatty Muscle Degeneration-Initial Experience with 1-year Follow-up.

    Science.gov (United States)

    Nozaki, Taiki; Tasaki, Atsushi; Horiuchi, Saya; Ochi, Junko; Starkey, Jay; Hara, Takeshi; Saida, Yukihisa; Yoshioka, Hiroshi

    2016-08-01

    Purpose To determine the degree of preoperative fatty degeneration within muscles, postoperative longitudinal changes in fatty degeneration, and differences in fatty degeneration between patients with full-thickness supraspinatus tears who do and those who do not experience a retear after surgery. Materials and Methods This prospective study had institutional review board approval and was conducted in accordance with the Committee for Human Research. Informed consent was obtained. Fifty patients with full-thickness supraspinatus tears (18 men, 32 women; mean age, 67.0 years ± 8.0; age range, 41-91 years) were recruited. The degrees of preoperative and postoperative fatty degeneration were quantified by using a two-point Dixon magnetic resonance (MR) imaging sequence; two radiologists measured the mean signal intensity on in-phase [S(In)] and fat [S(Fat)] images. Estimates of fatty degeneration were calculated as "fat fraction" values by using the formula S(Fat)/S(In) within the supraspinatus, infraspinatus, and subscapularis muscles at baseline preoperative and at postoperative 1-year follow-up MR imaging. Preoperative fat fractions in the failed-repair group and the intact-repair group were compared by using the Mann-Whitney U test. Results The preoperative fat fractions in the supraspinatus muscle were significantly higher in the failed-repair group than in the intact-repair group (37.0% vs 19.5%). Fatty degeneration of the muscle tended to progress at 1 year postoperatively in only the failed-repair group. Conclusion MR imaging quantification of preoperative fat fractions by using a two-point Dixon sequence within the rotator cuff muscles may be a viable method for predicting postoperative retear. (©) RSNA, 2016.
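
The fat-fraction estimate S(Fat)/S(In) is a simple ratio of ROI signal means; a sketch with hypothetical signal values chosen to reproduce the reported group percentages:

```python
# Fat-fraction estimate from two-point Dixon MR signal means, as in the
# article: FF = S(Fat) / S(In). The ROI means below are illustrative only.
def fat_fraction(s_fat, s_in):
    if s_in <= 0:
        raise ValueError("in-phase signal must be positive")
    return s_fat / s_in

# Mean ROI signals for the supraspinatus muscle (hypothetical values)
failed_repair = fat_fraction(s_fat=185.0, s_in=500.0)  # 37.0%
intact_repair = fat_fraction(s_fat=97.5, s_in=500.0)   # 19.5%
print(f"{failed_repair:.1%} vs {intact_repair:.1%}")   # → 37.0% vs 19.5%
```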

  18. Cardiovascular risk scores for coronary atherosclerosis.

    Science.gov (United States)

    Yalcin, Murat; Kardesoglu, Ejder; Aparci, Mustafa; Isilak, Zafer; Uz, Omer; Yiginer, Omer; Ozmen, Namik; Cingozbay, Bekir Yilmaz; Uzun, Mehmet; Cebeci, Bekir Sitki

    2012-10-01

    The objective of this study was to compare frequently used cardiovascular risk scores in predicting the presence of coronary artery disease (CAD) and 3-vessel disease. In 350 consecutive patients (218 men and 132 women) who underwent coronary angiography, the cardiovascular risk level was determined using the Framingham Risk Score (FRS), the Modified Framingham Risk Score (MFRS), the Prospective Cardiovascular Münster (PROCAM) score, and the Systematic Coronary Risk Evaluation (SCORE). Receiver operating characteristic curve analysis showed that the FRS had more predictive value for CAD than the other scores (area under the curve, 0.76). All four scores (FRS, MFRS, PROCAM, and SCORE) may predict the presence and severity of coronary atherosclerosis; the FRS had better predictive value than the other scores.

  19. An ultrasound score for knee osteoarthritis

    DEFF Research Database (Denmark)

    Riecke, B F; Christensen, R.; Torp-Pedersen, S

    2014-01-01

    OBJECTIVE: To develop standardized musculoskeletal ultrasound (MUS) procedures and scoring for detecting knee osteoarthritis (OA) and test the MUS score's ability to discern various degrees of knee OA, in comparison with plain radiography and the 'Knee injury and Osteoarthritis Outcome Score' (KOOS).

  20. Breaking of scored tablets : a review

    NARCIS (Netherlands)

    van Santen, E; Barends, D M; Frijlink, H W

    2002-01-01

    The literature was reviewed regarding advantages, problems and performance indicators of score lines. Scored tablets provide dose flexibility, ease of swallowing and may reduce the costs of medication. However, many patients are confronted with scored tablets that break unequally and with difficulty.

  1. Developing Score Reports for Cognitive Diagnostic Assessments

    Science.gov (United States)

    Roberts, Mary Roduta; Gierl, Mark J.

    2010-01-01

    This paper presents a framework to provide a structured approach for developing score reports for cognitive diagnostic assessments ("CDAs"). Guidelines for reporting and presenting diagnostic scores are based on a review of current educational test score reporting practices and literature from the area of information design. A sample diagnostic…

  2. Credit Scores, Race, and Residential Sorting

    Science.gov (United States)

    Nelson, Ashlyn Aiko

    2010-01-01

    Credit scores have a profound impact on home purchasing power and mortgage pricing, yet little is known about how credit scores influence households' residential location decisions. This study estimates the effects of credit scores on residential sorting behavior using a novel mortgage industry data set combining household demographic, credit, and…

  4. Semiparametric score level fusion: Gaussian copula approach

    NARCIS (Netherlands)

    Susyanyo, N.; Klaassen, C.A.J.; Veldhuis, R.N.J.; Spreeuwers, L.J.

    2015-01-01

    Score level fusion is an appealing method for combining multi-algorithm, multi-representation, and multi-modality biometrics due to its simplicity. Often, scores are assumed to be independent; but even for dependent scores, according to the Neyman-Pearson lemma, the likelihood ratio is the optimal fusion.

  5. An objective fluctuation score for Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Malcolm K Horne

    Full Text Available Establishing the presence and severity of fluctuations is important in managing Parkinson's disease, yet there is no reliable, objective means of doing this. In this study we have evaluated a Fluctuation Score derived from variations in dyskinesia and bradykinesia scores produced by an accelerometry-based system. The Fluctuation Score was produced by summing the interquartile range of bradykinesia scores and dyskinesia scores produced every 2 minutes between 0900-1800 for at least 6 days by the accelerometry-based system and expressing it as an algorithm. This score could distinguish between fluctuating and non-fluctuating patients with high sensitivity and selectivity and was significantly lower following activation of deep brain stimulators. The scores following deep brain stimulation lay in a band just above the score separating fluctuators from non-fluctuators, suggesting a range representing adequate motor control. When compared with control subjects, the scores of newly diagnosed patients show a loss of fluctuation with the onset of PD. The score was calculated in subjects whose duration of disease was known, and this showed that newly diagnosed patients soon develop higher scores, which either fall under or within the range representing adequate motor control or instead go on to develop more severe fluctuations. The Fluctuation Score described here promises to be a useful tool for identifying patients whose fluctuations are progressing and may require therapeutic changes. It also shows promise as a useful research tool. Further studies are required to more accurately identify therapeutic targets and ranges.

  6. Committee Opinion No. 644: The Apgar Score.

    Science.gov (United States)

    2015-10-01

    The Apgar score provides an accepted and convenient method for reporting the status of the newborn infant immediately after birth and the response to resuscitation if needed. The Apgar score alone cannot be considered to be evidence of or a consequence of asphyxia, does not predict individual neonatal mortality or neurologic outcome, and should not be used for that purpose. An Apgar score assigned during a resuscitation is not equivalent to a score assigned to a spontaneously breathing infant. The American Academy of Pediatrics and the American College of Obstetricians and Gynecologists encourage use of an expanded Apgar score reporting form that accounts for concurrent resuscitative interventions.

  7. Conditional Reliability Coefficients for Test Scores.

    Science.gov (United States)

    Nicewander, W Alan

    2017-04-06

    The most widely used general index of measurement precision for psychological and educational test scores is the reliability coefficient: a ratio of the true variance for a test score to the true-plus-error variance of the score. In item response theory (IRT) models for test scores, the information function is the central, conditional index of measurement precision. In this inquiry, conditional reliability coefficients for a variety of score types are derived as simple transformations of information functions. It is shown, for example, that the conditional reliability coefficient for an ordinary number-correct score, X, is equal to ρ(X,X'|θ) = I(X,θ)/[I(X,θ)+1], where θ is a latent variable measured by an observed test score X; ρ(X,X'|θ) is the conditional reliability of X at a fixed value of θ; and I(X,θ) is the score information function. This is a surprisingly simple relationship between the two basic indices of measurement precision from IRT and classical test theory (CTT). This relationship holds for item scores as well as test scores based on sums of item scores, and it holds for dichotomous as well as polytomous items, or a mix of both item types. Also, conditional reliabilities are derived for computerized adaptive test scores, and for θ-estimates used as alternatives to number-correct scores. These conditional reliabilities are all related to information in a manner similar or identical to the one given above for the number-correct (NC) score. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
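
The relation ρ(X,X'|θ) = I(X,θ)/[I(X,θ)+1] is easy to evaluate once an information function is specified; a sketch under an assumed Rasch model with invented item difficulties:

```python
import math

# Conditional reliability of a number-correct score from its information
# function: rho(X, X' | theta) = I(X, theta) / (I(X, theta) + 1).
# For a sum of dichotomous Rasch items, I(X, theta) = sum p_i * (1 - p_i).
def rasch_prob(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def conditional_reliability(theta, difficulties):
    info = sum(p * (1 - p)
               for p in (rasch_prob(theta, b) for b in difficulties))
    return info / (info + 1.0)

difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0] * 8  # a hypothetical 40-item test
for theta in (-2.0, 0.0, 2.0):
    print(theta, round(conditional_reliability(theta, difficulties), 3))
```

As expected, reliability is conditional: it peaks where the items are well targeted (θ near 0 here) and falls off toward the extremes of the trait scale.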

  8. Forecasting the value of credit scoring

    Science.gov (United States)

    Saad, Shakila; Ahmad, Noryati; Jaffar, Maheran Mohd

    2017-08-01

    Nowadays, the credit scoring system plays an important role in the banking sector. The process is important in assessing the creditworthiness of customers requesting credit from banks or other financial institutions, and is usually applied when a customer submits an application for credit facilities. Based on the credit score, the bank is able to segregate "good" clients from "bad" clients. However, in most cases the score is useful only at that specific time and cannot be used to forecast the creditworthiness of the same applicant afterwards; the bank cannot know whether "good" clients will remain good all the time or whether "bad" clients may become "good" clients after a certain time. To fill this gap, this study proposes an equation to forecast the credit score of potential borrowers at a future time by using historical scores together with related assumptions. The Mean Absolute Percentage Error (MAPE) is used to measure the accuracy of the forecast scores. Results show the forecast scores are highly accurate compared with the actual credit scores.
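
MAPE, used in the study to assess forecast accuracy, can be sketched as follows (the score series are hypothetical):

```python
# Mean Absolute Percentage Error between forecast and actual credit scores
# (hypothetical score series; a lower MAPE means a more accurate forecast).
def mape(actual, forecast):
    return 100.0 / len(actual) * sum(
        abs((a - f) / a) for a, f in zip(actual, forecast))

actual   = [620, 655, 700, 680, 710]
forecast = [610, 660, 690, 700, 705]
print(round(mape(actual, forecast), 2))  # → 1.49
```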

  9. The Mystery of the Z-Score.

    Science.gov (United States)

    Curtis, Alexander E; Smith, Tanya A; Ziganshin, Bulat A; Elefteriades, John A

    2016-08-01

    Reliable methods for measuring the thoracic aorta are critical for determining treatment strategies in aneurysmal disease. Z-scores are a pragmatic alternative to raw diameter sizes commonly used in adult medicine. They are particularly valuable in the pediatric population, who undergo rapid changes in physical development. The advantage of the Z-score is its inclusion of body surface area (BSA) in determining whether an aorta is within normal size limits. Therefore, Z-scores allow us to determine whether true pathology exists, which can be challenging in growing children. In addition, Z-scores allow for thoughtful interpretation of aortic size in different genders, ethnicities, and geographical regions. Despite the advantages of using Z-scores, there are limitations. These include intra- and inter-observer bias, measurement error, and variations between alternative Z-score nomograms and BSA equations. Furthermore, it is unclear how Z-scores change in the normal population over time, which is essential when interpreting serial values. Guidelines for measuring aortic parameters have been developed by the American Society of Echocardiography Pediatric and Congenital Heart Disease Council, which may reduce measurement bias when calculating Z-scores for the aortic root. In addition, web-based Z-score calculators have been developed to aid in efficient Z-score calculations. Despite these advances, clinicians must be mindful of the limitations of Z-scores, especially when used to demonstrate beneficial treatment effect. This review looks to unravel the mystery of the Z-score, with a focus on the thoracic aorta. Here, we will discuss how Z-scores are calculated and the limitations of their use.
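
A Z-score of this kind is simply the measured diameter standardized against a BSA-predicted population mean; a sketch with placeholder nomogram coefficients (real calculators use published nomograms and BSA equations, which, as the review notes, vary):

```python
# Aortic Z-score: the measured diameter expressed in standard deviations
# from the BSA-predicted mean. The linear nomogram below is a hypothetical
# placeholder, not a published pediatric nomogram.
def aortic_z_score(diameter_cm, bsa_m2, mean_fn, sd):
    return (diameter_cm - mean_fn(bsa_m2)) / sd

def predicted_mean(bsa):
    # Hypothetical nomogram: predicted root diameter grows with BSA.
    return 1.02 + 0.98 * bsa

z = aortic_z_score(diameter_cm=3.1, bsa_m2=1.6,
                   mean_fn=predicted_mean, sd=0.25)
print(round(z, 2))  # → 2.05
```

Because the result depends on the choice of nomogram, BSA equation, and measurement convention, serial Z-scores should be computed consistently with one method, which is the review's central caution.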

  10. Effect of self-care ability on quality of life among the elderly in old-age care institutions

    Institute of Scientific and Technical Information of China (English)

    王萍

    2014-01-01

    Objectives: To investigate the effect of the daily-living self-care ability of the elderly in old-age care institutions on their quality of life, and to explore methods of improving self-care ability so as to improve quality of life. Methods: The Activities of Daily Living scale (ADL) and the WHO quality of life scale (WHOQOL-BREF) were administered as questionnaires to 74 elderly residents of a senior apartment in Linyi City, and the factors affecting the residents' ADL were examined by correlation analysis. Results: Age, chronic disease, and participation in exercise or recreational activities were all negatively correlated with ADL scores, with significant differences. Scores in all domains of the quality-of-life scale were negatively correlated with the ADL total score, with significant or highly significant differences. The quality of life of the high-ADL-score group differed significantly from that of the low-score group (except for item Q1 and the psychological domain). Conclusions: The residents were of advanced age, had a high prevalence of chronic disease and poor self-care ability, and their quality of life was generally at a low level. Self-care ability and quality of life are closely related.

  11. The relationship between second-year medical students' OSCE scores and USMLE Step 1 scores.

    Science.gov (United States)

    Simon, Steven R; Volkan, Kevin; Hamann, Claus; Duffey, Carol; Fletcher, Suzanne W

    2002-09-01

    The relationship between objective structured clinical examinations (OSCEs) and standardized tests is not well known. We linked second-year medical students' physical diagnosis OSCE scores from 1998, 1999 and 2000 (n = 355) with demographic information, Medical College Admission Test (MCAT) scores, and United States Medical Licensing Examination (USMLE) Step 1 scores. The correlation coefficient for the total OSCE score with the USMLE Step 1 score was 0.41. OSCE station scores accounted for approximately 22% of the variability in USMLE Step 1 scores. A second-year OSCE in physical diagnosis is correlated with scores on the USMLE Step 1 exam, with the skills that foreshadow the clinical clerkships being most predictive of USMLE scores. This correlation suggests predictive validity of this OSCE and supports the use of OSCEs early in medical school.

  12. Random Walk Picture of Basketball Scoring

    CERN Document Server

    Gabel, Alan

    2011-01-01

    We present evidence, based on play-by-play data from all 6087 games from the 2006/07 to 2009/10 seasons of the National Basketball Association (NBA), that basketball scoring is well described by a weakly-biased continuous-time random walk. The time between successive scoring events follows an exponential distribution, with little memory between different scoring intervals. Using this random-walk picture that is augmented by features idiosyncratic to basketball, we account for a wide variety of statistical properties of scoring, such as the distribution of the score difference between opponents and the fraction of game time that one team is in the lead. By further including the heterogeneity of team strengths, we build a computational model that accounts for essentially all statistical features of game scoring data and season win/loss records of each team.
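
The weakly biased continuous-time random walk described above can be sketched with exponential waiting times between scoring events; all parameters below are rough assumptions for illustration, not values fitted to NBA data:

```python
import random

# Minimal continuous-time random-walk sketch of basketball scoring:
# exponential waiting times between scoring events, each event awarding
# points to one of two teams with a slight bias (parameters hypothetical).
def simulate_game(rate_per_min=1.4, game_min=48.0, p_team_a=0.52, seed=7):
    rng = random.Random(seed)
    t, score_a, score_b = 0.0, 0, 0
    while True:
        t += rng.expovariate(rate_per_min)    # exponential inter-event time
        if t > game_min:
            break
        points = rng.choice([1, 2, 2, 2, 3])  # rough mix of scoring plays
        if rng.random() < p_team_a:
            score_a += points
        else:
            score_b += points
    return score_a, score_b

print(simulate_game())
```

Repeating the simulation over many seeds yields distributions of score differences and lead times that can be compared against play-by-play statistics.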

  13. Scoring functions for AutoDock.

    Science.gov (United States)

    Hill, Anthony D; Reilly, Peter J

    2015-01-01

    Automated docking allows rapid screening of protein-ligand interactions. A scoring function composed of a force field and linear weights can be used to compute a binding energy from a docked atom configuration. For different force fields or types of molecules, it may be necessary to train a custom scoring function. This chapter describes the data and methods one must consider in developing a custom scoring function for use with AutoDock.
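
A scoring function of the kind described, a force field combined with linear weights, reduces to a weighted sum of energy terms; the term values and weights below are invented for illustration and are not AutoDock's calibrated parameters:

```python
# A linearly weighted scoring function in the spirit described: estimated
# binding energy = sum over force-field terms of weight * term value.
# Both the weights and the per-pose term values here are hypothetical.
def binding_energy(terms, weights):
    return sum(weights[name] * value for name, value in terms.items())

weights = {"vdw": 0.16, "hbond": 0.12, "elec": 0.14,
           "desolv": 0.13, "tors": 0.30}
terms = {"vdw": -8.5, "hbond": -2.0, "elec": -1.5,
         "desolv": 0.8, "tors": 4.0}
print(round(binding_energy(terms, weights), 3))  # → -0.506
```

Training a custom scoring function, as the chapter describes, then amounts to fitting the weight vector (e.g., by linear regression) against known binding energies for a set of docked complexes.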

  14. Pneumonia severity scores in resource poor settings

    Directory of Open Access Journals (Sweden)

    Jamie Rylance

    2014-06-01

    Full Text Available Clinical prognostic scores are increasingly used to streamline care in well-resourced settings. The potential benefits of identifying patients at risk of clinical deterioration and poor outcome, delivering appropriate higher level clinical care, and increasing efficiency are clear. In this focused review, we examine the use and applicability of severity scores applied to patients with community acquired pneumonia in resource poor settings. We challenge clinical researchers working in such systems to consider the generalisability of existing severity scores in their populations, and where performance of scores is suboptimal, to promote efforts to develop and validate new tools for the benefit of patients and healthcare systems.

  15. Security Risk Scoring Incorporating Computers' Environment

    Directory of Open Access Journals (Sweden)

    Eli Weintraub

    2016-04-01

    Full Text Available A framework of a Continuous Monitoring System (CMS) is presented, having new improved capabilities. The system uses the actual real-time configuration of the system and environment, characterized by a Configuration Management Data Base (CMDB) which includes detailed information on organizational database contents and security and privacy specifications. The Common Vulnerability Scoring System's (CVSS) algorithm produces risk scores incorporating information from the CMDB. By using the real, updated environmental characteristics, the system achieves more accurate scores than existing practices. The framework presentation includes the system's design and an illustration of scoring computations.
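
    The core idea, adjusting a base vulnerability score with environment data pulled from a CMDB, can be illustrated with a toy calculation. This is not the official CVSS environmental formula (which is considerably more involved); the factor names and values are assumptions for illustration.

```python
def contextual_score(base_score, asset_criticality, exposure):
    """Toy environmental adjustment (not the official CVSS equation):
    scale a 0-10 base score by context factors taken from a CMDB,
    then clamp back into the 0-10 range."""
    adjusted = base_score * asset_criticality * exposure
    return max(0.0, min(10.0, round(adjusted, 1)))

# Same vulnerability, two environments (hypothetical CMDB-derived factors).
internal_db = contextual_score(7.5, asset_criticality=1.2, exposure=0.8)
public_web = contextual_score(7.5, asset_criticality=1.2, exposure=1.4)
```

    The same base score yields different risk scores once the asset's criticality and network exposure are taken into account, which is the paper's argument for feeding CMDB data into the scoring algorithm.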

  16. Coronary artery calcium score: current status

    Science.gov (United States)

    Neves, Priscilla Ornellas; Andrade, Joalbo; Monção, Henry

    2017-01-01

    The coronary artery calcium score plays an important role in cardiovascular risk stratification, showing a significant association with the medium- or long-term occurrence of major cardiovascular events. Here, we discuss the following: protocols for the acquisition and quantification of the coronary artery calcium score by multidetector computed tomography; the role of the coronary artery calcium score in coronary risk stratification and its comparison with other clinical scores; its indications, interpretation, and prognosis in asymptomatic patients; and its use in patients who are symptomatic or have diabetes. PMID:28670030

  17. [The cardiovascular surgeon and the Syntax score].

    Science.gov (United States)

    Gómez-Sánchez, Mario; Soulé-Egea, Mauricio; Herrera-Alarcón, Valentín; Barragán-García, Rodolfo

    2015-01-01

    The Syntax score has been established as a tool to determine the complexity of coronary artery disease and as a guide for decision-making among coronary artery bypass surgery and percutaneous coronary intervention. The purpose of this review is to systematically examine what the Syntax score is, and how the surgeon should integrate the information in the selection and treatment of patients. We reviewed the results of the SYNTAX Trial, the clinical practice guidelines, as well as the benefits and limitations of the score. Finally we discuss the future directions of the Syntax score.

  18. Widening clinical applications of the SYNTAX Score.

    Science.gov (United States)

    Farooq, Vasim; Head, Stuart J; Kappetein, Arie Pieter; Serruys, Patrick W

    2014-02-01

    The SYNTAX Score (http://www.syntaxscore.com) has established itself as an anatomical based tool for objectively determining the complexity of coronary artery disease and guiding decision-making between coronary artery bypass graft (CABG) surgery and percutaneous coronary intervention (PCI). Since the landmark SYNTAX (Synergy between PCI with Taxus and Cardiac Surgery) Trial comparing CABG with PCI in patients with complex coronary artery disease (unprotected left main or de novo three vessel disease), numerous validation studies have confirmed the clinical validity of the SYNTAX Score for identifying higher-risk subjects and aiding decision-making between CABG and PCI in a broad range of patient types. The SYNTAX Score is now advocated in both the European and US revascularisation guidelines for decision-making between CABG and PCI as part of a SYNTAX-pioneered heart team approach. Since establishment of the SYNTAX Score, widening clinical applications of this clinical tool have emerged. The purpose of this review is to systematically examine the widening applications of tools based on the SYNTAX Score: (1) by improving the diagnostic accuracy of the SYNTAX Score by adding a functional assessment of lesions; (2) through amalgamation of the anatomical SYNTAX Score with clinical variables to enhance decision-making between CABG and PCI, culminating in the development and validation of the SYNTAX Score II, in which objective and tailored decisions can be made for the individual patient; (3) through assessment of completeness of revascularisation using the residual and post-CABG SYNTAX Scores for PCI and CABG patients, respectively. Finally, the future direction of the SYNTAX Score is covered through discussion of the ongoing development of a non-invasive, functional SYNTAX Score and review of current and planned clinical trials.

  19. On k-hypertournament losing scores

    CERN Document Server

    Pirzada, Shariefuddin

    2010-01-01

    We give a new and short proof of a theorem on k-hypertournament losing scores due to Zhou et al. [G. Zhou, T. Yao, K. Zhang, On score sequences of k-tournaments, European J. Comb., 21, 8 (2000) 993-1000].

  20. ON HOW CULTURAL KNOWLEDGE AFFECTS TOEFL SCORES

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper presents a study of the effect of cultural background on TOEFL scores. It proceeds from the relation between culture and language, then illustrates with actual questions from various sections of TOEFL tests how American cultural background exerts a remarkable influence on TOEFL scores, and concludes with revelations with regard to English teaching in this country.

  1. Causal Moderation Analysis Using Propensity Score Methods

    Science.gov (United States)

    Dong, Nianbo

    2012-01-01

    This paper is based on previous studies in applying propensity score methods to study multiple treatment variables to examine the causal moderator effect. The propensity score methods will be demonstrated in a case study to examine the causal moderator effect, where the moderators are categorical and continuous variables. Moderation analysis is an…

  2. Comparability of IQ scores over time

    NARCIS (Netherlands)

    Must, O.; te Nijenhuis, J.; Must, A.; van Vianen, A.E.M.

    2009-01-01

    This study investigates the comparability of IQ scores. Three cohorts (1933/36, 1997/98, 2006) of Estonian students (N = 2173) are compared using the Estonian National Intelligence Test. After 72 years the secular rise of the IQ test scores is .79 SD. The mean .16 SD increase in the last 8 years

  3. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  4. Diagnosis. Severity scoring system for paediatric FMF.

    Science.gov (United States)

    Livneh, Avi

    2012-04-17

    Severity scoring systems for adult familial Mediterranean fever (FMF) are established and used as important clinical and analytical tools in disease management and research. A recent paper highlights the need for a paediatric FMF severity measure. How should such a score be built and what challenges might be faced?

  5. Clinical scoring scales in thyroidology: A compendium

    Directory of Open Access Journals (Sweden)

    Sanjay Kalra

    2011-01-01

    Full Text Available This compendium brings together traditional as well as contemporary scoring and grading systems used for the screening and diagnosis of various thyroid diseases, dysfunctions, and complications. The article discusses scores used to help diagnose hypo- and hyperthyroidism, to grade and manage goiter and ophthalmopathy, and to assess the risk of thyroid malignancy.

  6. Starreveld scoring method in diagnosing childhood constipation

    NARCIS (Netherlands)

    Kokke, F.T.; Sittig, J.S.; de Bruijn, A.; Wiersma, T.; van Rijn, R.R.; Limpen, J.L.; Houwen, R.H.; Fischer, K.; Benninga, M.A.

    2010-01-01

    Four scoring methods exist to assess severity of fecal loading on plain abdominal radiographs in constipated patients (Barr, Starreveld, Blethyn and Leech). So far, the Starreveld score was used only in adult patients. To determine accuracy and intra- and inter-observer agreement of the Starreveld

  7. What do educational test scores really measure?

    DEFF Research Database (Denmark)

    McIntosh, James; D. Munk, Martin

    measure of pure cognitive ability. We find that variables which are not closely associated with traditional notions of intelligence explain a significant proportion of the variation in test scores. This adds to the complexity of interpreting test scores and suggests that school culture, attitudes...

  8. Analytical Solution of Two-Point Boundary Value Problem for Spacecraft Relative Motion

    Institute of Scientific and Technical Information of China (English)

    苑云霞; 岳晓奎; 娄云峰

    2011-01-01

    The two-point boundary value problem (TPBVP) of leader-follower spacecraft formation flying was studied. For unperturbed elliptical reference orbits, the state transition matrix representing the actual relative position and velocity was derived, and a first-order analytical solution of the TPBVP was obtained. The solution can handle a specified rendezvous time, fuel optimization, and a compromise between fuel and time, and it is applicable to both periodic and non-periodic relative motion. Simulation results show that the normalized accuracy of this solution reaches the 10^-6 level. Furthermore, the fuel cost of the relative transfer increases with increasing eccentricity, decreases with increasing semi-major axis, varies periodically with the initial true anomaly, and tends to decrease as the transfer time increases.
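
    The generic recipe behind such STM-based TPBVP solutions is to propagate the boundary condition through the state transition matrix and invert its position-velocity block for the required initial velocity. The paper derives that matrix for elliptical orbits; the sketch below shows only the principle on a hypothetical 1-D drift-only model, where the inversion is trivial.

```python
def tpbvp_initial_velocity(r0, rf, T):
    """1-D drift-only toy model (r' = v, v' = 0): the state transition
    matrix over time T is [[1, T], [0, 1]], so the boundary condition
    rf = r0 + v0 * T inverts in closed form for the needed velocity."""
    return (rf - r0) / T

# Reach rf = 250 m from r0 = 100 m in T = 50 s.
v0 = tpbvp_initial_velocity(r0=100.0, rf=250.0, T=50.0)
```

    For the orbital case the same step becomes a matrix solve: v0 = Phi_rv^-1 (rf - Phi_rr r0), with the blocks taken from the derived state transition matrix.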

  9. Head to Head Comparison of Two Point-of-care Platelet Function Tests Used for Assessment of On-clopidogrel Platelet Reactivity in Chinese Acute Myocardial Infarction Patients Undergoing Percutaneous Coronary Intervention

    Institute of Scientific and Technical Information of China (English)

    Yi Yao; Jia-Hui Zhang; Xiao-Fang Tang; Chen He; Yuan-Liang Ma; Jing-Jing Xu; Ying Song

    2016-01-01

    Background: Platelet function tests are widely used in clinical practice to guide personalized antiplatelet therapy. In China, the thromboelastography (TEG) test has been well accepted in clinics, whereas VerifyNow, mainly used for scientific research, has not been used in routine clinical practice. The aim of the current study was to compare these two point-of-care platelet function tests and to analyze the consistency between the two tests for evaluating on-clopidogrel platelet reactivity in Chinese acute myocardial infarction patients undergoing percutaneous coronary intervention (PCI). Methods: A total of 184 patients admitted to Fuwai Hospital between August 2014 and May 2015 were enrolled in the study. On-clopidogrel platelet reactivity was assessed 3 days after PCI by TEG and VerifyNow using adenosine diphosphate as an agonist. Based on previous reports, an inhibition of platelet aggregation (IPA) <30% for TEG or a P2Y12 reaction unit (PRU) >230 for VerifyNow was defined as high on-clopidogrel platelet reactivity (HPR). An IPA >70% or a PRU <178 was defined as low on-clopidogrel platelet reactivity (LPR). Correlation and agreement between the two methods were analyzed using the Spearman correlation coefficient (r) and kappa value (κ), respectively. Results: Our results showed that VerifyNow and TEG had a moderate but significant correlation in evaluating platelet reactivity (r = -0.511). A significant although poor agreement (κ = 0.225) in identifying HPR and a significantly moderate agreement in identifying LPR (κ = 0.412) were observed between TEG and VerifyNow. By using TEG as the reference for comparison, the cutoff values of VerifyNow for the Chinese patients in this study were identified as PRU >205 for HPR and PRU <169 for LPR. Conclusions: By comparing VerifyNow to TEG, which has been widely used in clinics, VerifyNow could be an attractive alternative to TEG for monitoring on-clopidogrel platelet reactivity in Chinese patients.
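
    The agreement statistic used above, Cohen's kappa, corrects observed agreement for agreement expected by chance. A minimal sketch on hypothetical dichotomized HPR calls (not the study's data):

```python
def cohen_kappa(a, b):
    """Chance-corrected agreement between two binary classifications."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    labels = set(a) | set(b)
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical HPR calls (1 = high on-clopidogrel platelet reactivity).
teg = [1, 1, 0, 0, 1, 0, 0, 0]
verifynow = [1, 0, 0, 0, 1, 0, 1, 0]
kappa = cohen_kappa(teg, verifynow)
```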

  10. Comparison of T1-weighted 2D TSE, 3D SPGR, and two-point 3D Dixon MRI for automated segmentation of visceral adipose tissue at 3 Tesla.

    Science.gov (United States)

    Fallah, Faezeh; Machann, Jürgen; Martirosian, Petros; Bamberg, Fabian; Schick, Fritz; Yang, Bin

    2017-04-01

    To evaluate and compare conventional T1-weighted 2D turbo spin echo (TSE), T1-weighted 3D volumetric interpolated breath-hold examination (VIBE), and two-point 3D Dixon-VIBE sequences for automatic segmentation of visceral adipose tissue (VAT) volume at 3 Tesla by measuring and compensating for errors arising from intensity nonuniformity (INU) and partial volume effects (PVE). The body trunks of 28 volunteers with body mass index values ranging from 18 to 41.2 kg/m² (30.02 ± 6.63 kg/m²) were scanned at 3 Tesla using three imaging techniques. Automatic methods were applied to reduce INU and PVE and to segment VAT. The automatically segmented VAT volumes obtained from all acquisitions were then statistically and objectively evaluated against the manually segmented (reference) VAT volumes. Comparing the reference volumes with the VAT volumes automatically segmented over the uncorrected images showed that INU led to an average relative volume difference of -59.22 ± 11.59, 2.21 ± 47.04, and -43.05 ± 5.01 % for the TSE, VIBE, and Dixon images, respectively, while PVE led to average differences of -34.85 ± 19.85, -15.13 ± 11.04, and -33.79 ± 20.38 %. After signal correction, differences of -2.72 ± 6.60, 34.02 ± 36.99, and -2.23 ± 7.58 % were obtained between the reference and the automatically segmented volumes. A paired-sample two-tailed t test revealed no significant difference between the reference and automatically segmented VAT volumes of the corrected TSE (p = 0.614) and Dixon (p = 0.969) images, but showed a significant VAT overestimation using the corrected VIBE images. Under similar imaging conditions and spatial resolution, automatically segmented VAT volumes obtained from the corrected TSE and Dixon images agreed with each other and with the reference volumes. These results demonstrate the efficacy of the signal correction methods and the similar accuracy of TSE and Dixon imaging for automatic volumetry of VAT at 3 Tesla.

  11. Stability analysis of attraction basin in two-point-foot walking pattern

    Institute of Scientific and Technical Information of China (English)

    钟林枫; 罗翔; 郭晨光

    2011-01-01

    A new walking pattern, denoted the two-point-foot walking pattern, is proposed based on the structure of the human foot. The equations of a planar straight-leg inverted pendulum based on this pattern are derived to study gait stability and energy consumption in biped walking. A numerical algorithm based on the Poincare map is put forward, and the fixed points and basin of attraction of the model are obtained numerically. The effects of the coefficient λ and the step length on the basin of attraction and on energy consumption are discussed. As λ increases, the area of the basin of attraction decreases, convergence to the fixed point becomes faster, and the proportion of initial conditions that converge quickly to the fixed point increases. As the step length increases, the area of the basin of attraction increases, but the model consumes more energy per unit walking distance. A parameter combination of λ = 0.55 and a step length of 0.50 is advantageous for controlling the robot.
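
    The numerical procedure, iterating a Poincare return map to locate fixed points and sampling initial conditions to estimate the basin of attraction, can be sketched on a hypothetical 1-D map (the walking model itself is higher-dimensional):

```python
def iterate_map(f, x0, tol=1e-9, max_iter=200):
    """Iterate a 1-D Poincare-like return map; return the fixed point
    reached from x0, or None if the orbit diverges or fails to settle."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next) > 1e6:        # diverged: x0 lies outside the basin
            return None
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return None

def basin_fraction(f, grid, x_star, tol=1e-6):
    """Fraction of sampled initial states attracted to x_star."""
    hits = sum(1 for x0 in grid
               if (x := iterate_map(f, x0)) is not None and abs(x - x_star) < tol)
    return hits / len(grid)

# Toy map x -> x^2: attracting fixed point at 0 with basin |x| < 1.
frac = basin_fraction(lambda x: x * x, [-1.5, -0.5, 0.0, 0.5, 1.5], x_star=0.0)
```

    Scanning a grid of initial conditions this way is how the basin's area, and hence the effect of parameters like λ and step length, can be estimated numerically.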

  12. Propensity score weighting with multilevel data.

    Science.gov (United States)

    Li, Fan; Zaslavsky, Alan M; Landrum, Mary Beth

    2013-08-30

    Propensity score methods are being increasingly used as a less parametric alternative to traditional regression to balance observed differences across groups in both descriptive and causal comparisons. Data collected in many disciplines often have analytically relevant multilevel or clustered structure. The propensity score, however, was developed and has been used primarily with unstructured data. We present and compare several propensity-score-weighted estimators for clustered data, including marginal, cluster-weighted, and doubly robust estimators. Using both analytical derivations and Monte Carlo simulations, we illustrate bias arising when the usual assumptions of propensity score analysis do not hold for multilevel data. We show that exploiting the multilevel structure, either parametrically or nonparametrically, in at least one stage of the propensity score analysis can greatly reduce these biases. We applied these methods to a study of racial disparities in breast cancer screening among beneficiaries of Medicare health plans.
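
    The simplest of the estimators compared above, the marginal inverse-propensity-weighted estimator, can be sketched as follows (toy data with known propensities; the clustered and doubly robust variants add structure on top of this):

```python
def ipw_ate(y, t, e):
    """Marginal inverse-propensity-weighted average treatment effect:
    weight treated units by 1/e and controls by 1/(1-e), then
    difference the weighted outcome means."""
    w1 = [ti / ei for ti, ei in zip(t, e)]
    w0 = [(1 - ti) / (1 - ei) for ti, ei in zip(t, e)]
    mu1 = sum(w * yi for w, yi in zip(w1, y)) / sum(w1)
    mu0 = sum(w * yi for w, yi in zip(w0, y)) / sum(w0)
    return mu1 - mu0

# Hypothetical data with a true effect of +2 and constant propensity 0.5.
y = [3.0, 5.0, 1.0, 3.0]
t = [1, 1, 0, 0]
e = [0.5, 0.5, 0.5, 0.5]
ate = ipw_ate(y, t, e)
```

    In multilevel data the propensities e would be estimated from a model that respects the clustering, which is the paper's central point.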

  13. A Bayesian Approach to Learning Scoring Systems.

    Science.gov (United States)

    Ertekin, Şeyda; Rudin, Cynthia

    2015-12-01

    We present a Bayesian method for building scoring systems, which are linear models with coefficients that have very few significant digits. Usually the construction of scoring systems involves manual effort: humans invent the full scoring system without using data, or they choose how logistic regression coefficients should be scaled and rounded to produce a scoring system. These kinds of heuristics lead to suboptimal solutions. Our approach is different in that humans need only specify the prior over what the coefficients should look like, and the scoring system is learned from data. For this approach, we provide a Metropolis-Hastings sampler that tends to pull the coefficient values toward their "natural scale." Empirically, the proposed method achieves a high degree of interpretability of the models while maintaining competitive generalization performance.
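
    A toy version of the idea, logistic likelihood plus a prior that pulls coefficients toward round values, sampled with random-walk Metropolis-Hastings. This is a sketch under assumed prior and proposal settings, not the authors' sampler:

```python
import math
import random

def log_post(w, X, y, scale=0.25):
    """Logistic log-likelihood plus a prior pulling each coefficient
    toward its nearest integer (a stand-in for a 'natural scale' prior),
    with a weak Gaussian term to keep the posterior proper."""
    ll = 0.0
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi))
        ll += -math.log1p(math.exp(-z)) if yi == 1 else -math.log1p(math.exp(z))
    prior = sum(-(wj - round(wj)) ** 2 / (2 * scale**2) - wj**2 / 200 for wj in w)
    return ll + prior

def mh_sample(X, y, dim, steps=2000, seed=0):
    """Random-walk Metropolis-Hastings over the coefficient vector."""
    rng = random.Random(seed)
    w = [0.0] * dim
    lp = log_post(w, X, y)
    for _ in range(steps):
        proposal = [wj + rng.gauss(0.0, 0.3) for wj in w]
        lp_new = log_post(proposal, X, y)
        if math.log(rng.random()) < lp_new - lp:  # accept/reject step
            w, lp = proposal, lp_new
    return w

# Separable toy data: the single feature's sign determines the label.
X = [[1.0], [1.0], [1.0], [-1.0], [-1.0], [-1.0]]
y = [1, 1, 1, 0, 0, 0]
w = mh_sample(X, y, dim=1)
```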

  14. A comparison between modified Alvarado score and RIPASA score in the diagnosis of acute appendicitis.

    Science.gov (United States)

    Singla, Anand; Singla, Satpaul; Singh, Mohinder; Singla, Deeksha

    2016-12-01

    Acute appendicitis is a common but elusive surgical condition and remains a diagnostic dilemma. It has many clinical mimickers, and the diagnosis is made primarily on clinical grounds, which has led to the evolution of clinical scoring systems for pinpointing the right diagnosis. The modified Alvarado and RIPASA scoring systems are two important scoring systems for the diagnosis of acute appendicitis. We prospectively compared the two scoring systems for diagnosing acute appendicitis in 50 patients presenting with right iliac fossa pain. The RIPASA score correctly classified 88% of patients with histologically confirmed acute appendicitis, compared with 48.0% for the modified Alvarado score, indicating that the RIPASA score is superior to the modified Alvarado score in our clinical setting.

  15. THE EFFICIENCY OF TENNIS DOUBLES SCORING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Geoff Pollard

    2010-09-01

    Full Text Available In this paper a family of scoring systems for tennis doubles, for testing the hypothesis that pair A is better than pair B against the alternative that pair B is better than A, is established. This family can serve as a benchmark against which the efficiency of any doubles scoring system can be assessed, and the formula for the efficiency of any doubles scoring system is derived. As in tennis singles, one scoring system based on the play-the-loser structure is shown to be more efficient than the benchmark systems. An expression for the relative efficiency of two doubles scoring systems is derived, so that the relative efficiency of the various scoring systems presently used in doubles can be assessed. The methods of this paper can be extended to a match between two teams of 2, 4, 8, … doubles pairs, making it possible to establish a measure of the relative efficiency of the various systems used for tennis contests between teams of players.
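
    The hypothesis-testing view can be made concrete with a simplified benchmark family: a "first to a majority of n points" contest, where a longer system identifies the better pair more reliably at the cost of more points played. The structure and parameters here are illustrative, not the paper's exact benchmark:

```python
import math

def match_win_prob(p, n):
    """P(the better pair wins a first-to-majority-of-n-points contest)
    when it wins each point independently with probability p (n odd)."""
    need = n // 2 + 1
    return sum(math.comb(n, j) * p**j * (1 - p) ** (n - j)
               for j in range(need, n + 1))

# A pair that wins 55% of points: longer systems separate the pairs more
# reliably, the trade-off any efficiency measure has to price in.
short, longer = match_win_prob(0.55, 11), match_win_prob(0.55, 101)
```

    An efficiency measure then relates this discrimination power to the expected number of points played, which is where adaptive structures like play-the-loser gain their advantage.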

  16. Do MCAT scores predict USMLE scores? An analysis on 5 years of medical student data

    Directory of Open Access Journals (Sweden)

    Jacqueline L. Gauer

    2016-09-01

    Full Text Available Introduction: The purpose of this study was to determine the associations and predictive values of Medical College Admission Test (MCAT) component and composite scores prior to 2015 with U.S. Medical Licensure Exam (USMLE) Step 1 and Step 2 Clinical Knowledge (CK) scores, with a focus on whether students scoring low on the MCAT were particularly likely to continue to score low on the USMLE exams. Method: Multiple linear regression, correlation, and chi-square analyses were performed to determine the relationship between MCAT component and composite scores and USMLE Step 1 and Step 2 CK scores from five graduating classes (2011–2015) at the University of Minnesota Medical School (N=1,065). Results: The multiple linear regression analyses were both significant (p<0.001). The three MCAT component scores together explained 17.7% of the variance in Step 1 scores (p<0.001) and 12.0% of the variance in Step 2 CK scores (p<0.001). In the chi-square analyses, significant, albeit weak, associations were observed between almost all MCAT component scores and USMLE scores (Cramer's V ranged from 0.05 to 0.24). Discussion: Each of the MCAT component scores was significantly associated with USMLE Step 1 and Step 2 CK scores, although the effect size was small. Being in the top or bottom scoring range of the MCAT exam was predictive of being in the top or bottom scoring range of the USMLE exams, although the strengths of the associations were weak to moderate. These results indicate that MCAT scores are predictive of student performance on the USMLE exams, but, given the small effect sizes, should be considered as part of the holistic view of the student.
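
    "Variance explained" by a single predictor is the squared correlation. A minimal sketch on made-up (MCAT, Step 1) pairs, purely to show the computation (the study's multiple regression uses three predictors jointly):

```python
def r_squared(x, y):
    """Squared Pearson correlation: the share of variance in y that a
    single linear predictor x accounts for."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Hypothetical (composite MCAT, Step 1) pairs, for illustration only.
mcat = [24, 27, 30, 33, 36]
step1 = [205, 214, 222, 231, 228]
share = r_squared(mcat, step1)
```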

  17. Do MCAT scores predict USMLE scores? An analysis on 5 years of medical student data

    OpenAIRE

    Gauer, Jacqueline L.; Wolff, Josephine M.; Jackson, J. Brooks

    2016-01-01

    Introduction: The purpose of this study was to determine the associations and predictive values of Medical College Admission Test (MCAT) component and composite scores prior to 2015 with U.S. Medical Licensure Exam (USMLE) Step 1 and Step 2 Clinical Knowledge (CK) scores, with a focus on whether students scoring low on the MCAT were particularly likely to continue to score low on the USMLE exams.Method: Multiple linear regression, correlation, and chi-square analyses were performed to determi...

  18. Kernel score statistic for dependent data.

    Science.gov (United States)

    Malzahn, Dörthe; Friedrichs, Stefanie; Rosenberger, Albert; Bickeböller, Heike

    2014-01-01

    The kernel score statistic is a global covariance component test over a set of genetic markers. It provides a flexible modeling framework and does not collapse marker information. We generalize the kernel score statistic to allow for familial dependencies and to adjust for random confounder effects. With this extension, we adjust our analysis of real and simulated baseline systolic blood pressure for polygenic familial background. We find that the kernel score test gains appreciably in power through the use of sequencing compared to tag single-nucleotide polymorphisms for very rare single-nucleotide polymorphisms with <1% minor allele frequency.
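
    At its core the statistic is a quadratic form Q = r'Kr of null-model residuals against a marker kernel. A minimal sketch with a linear kernel and hypothetical data, ignoring the weighting and the familial-dependence adjustment that the paper develops:

```python
def kernel_score_stat(resid, G):
    """Q = r' K r with a linear kernel K = G G' over the n x m marker
    matrix G, computed as ||G' r||^2 without forming K explicitly."""
    m = len(G[0])
    gtr = [sum(G[i][j] * resid[i] for i in range(len(resid))) for j in range(m)]
    return sum(v * v for v in gtr)

# Three individuals, two markers (hypothetical genotype codes).
G = [[1, 0],
     [0, 1],
     [1, 1]]
residuals = [1.0, 2.0, 3.0]  # phenotype minus null-model fit
q = kernel_score_stat(residuals, G)
```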

  19. Facilitating the Interpretation of English Language Proficiency Scores: Combining Scale Anchoring and Test Score Mapping Methodologies

    Science.gov (United States)

    Powers, Donald; Schedl, Mary; Papageorgiou, Spiros

    2017-01-01

    The aim of this study was to develop, for the benefit of both test takers and test score users, enhanced "TOEFL ITP"® test score reports that go beyond the simple numerical scores that are currently reported. To do so, we applied traditional scale anchoring (proficiency scaling) to item difficulty data in order to develop performance…

  20. Virtual Expression and LOD Model Design for Theme Parks Based on Semantic Features: A Case of the Water and Soil Conservation Technology Park

    Institute of Scientific and Technical Information of China (English)

    李仁杰; 路紫

    2011-01-01

    Virtual expression of a theme park using virtual reality (VR) technology aims not only at a high degree of realism in the appearance and texture of every landscape element, but also at showing landscape construction, landscape evolution, and the man-land relationship, and at describing the geo-spatial pattern; the latter has received less attention from researchers. Designing a landscape model based on semantic features in Virtual Geographic Environments (VGE) is a good way to achieve this two-level modeling, and a water and soil conservation technology park in Yanqing County, Beijing, China, is selected as a demonstration case. The water and soil conservation technology park is a special form of theme park, designed for water and soil conservation experiments, popular-science education on protection of the ecological environment, and leisure and recreation activities. The park's functions, however, are greatly restricted by its area, location, and ecological capacity, so it cannot satisfy the combined needs of ecological-environment education, technology demonstration, and ecotourism development. The authors design a classification system of themes and virtual objects for the park, and build a level of detail (LOD) model for describing the theme park in the computer virtual environment based on the semantic context of the ecological landscape. The LOD model can show the features and landscapes of the theme park at different view scales in the virtual environment, such as the whole view, a middle-scale view, special partial views, and even a single-feature view. It can also construct virtual environments based on different themes and functions, or support the design of a special sight-seeing route, by combining LOD descriptions at different scales with other landscape features.
    This case study is done in ArcGIS 9.2 and the Skyline

  1. Comparison of two point-of-care testing (POCT) methods for glycated haemoglobin (HbA1c) testing

    Institute of Scientific and Technical Information of China (English)

    王薇; 杨雪; 胡丽涛; 王治国

    2012-01-01

    Objective: To evaluate the performance of two point-of-care testing (POCT) methods for HbA1c testing. Methods: Using published data, two POCT HbA1c devices (POC A, POC B) were compared with two central laboratory instruments (Central X, Central Y) for HbA1c measurement. The imprecision (CV%) and bias (Bias%) of the two POCT methods relative to the central laboratory instruments were determined from the published verification data and compared with the quality requirement. Results: At HbA1c levels of 6.0% and 10.4%, the total CV of POC A was 3.8% and 3.7%, respectively; compared with Central X, the bias of POC A was 10.33% and 13.58%, and compared with Central Y, 7.83% and 10.44%. At HbA1c levels of 5.5% and 11.9%, the total CV of POC B was 3.4% and 7.3%, respectively; compared with Central X, the bias was 0.45% and 11.08%, and compared with Central Y, 3.73% and 9.57%. Conclusions: The performance of the two POCT methods for HbA1c testing cannot meet the current quality requirements for HbA1c testing.

  2. GMAT Scores of Undergraduate Economics Majors

    Science.gov (United States)

    Nelson, Paul A.; Monson, Terry D.

    2008-01-01

    The average score of economics majors on the Graduate Management Admission Test (GMAT) exceeds those of nearly all humanities and arts, social sciences, and business undergraduate majors but not those of most science, engineering, and mathematics majors. (Contains 1 table.)

  4. Surgical Apgar Score Predicts Postoperative Complications in ...

    African Journals Online (AJOL)

    neurotrauma patients by using an effective scoring system can reduce ... complications was 7.04 while for patients with complications was ... their SAS for purposes of risk stratification; high risk (0-4), medium .... Deep Venous Thrombosis.

  5. Multifactor Screener in OPEN: Scoring Procedures & Results

    Science.gov (United States)

    Scoring procedures were developed to convert a respondent's screener responses to estimates of individual dietary intake for percentage energy from fat, grams of fiber, and servings of fruits and vegetables.

  6. Film scoring today - Theory, practice and analysis

    OpenAIRE

    Flach, Paula Sophie

    2012-01-01

    This thesis considers film scoring by taking a closer look at the theoretical discourse throughout the last decades, examining current production practice of film music and showcasing a musical analysis of the film Inception (2010).

  7. Knee Injury and Osteoarthritis Outcome Score (KOOS)

    DEFF Research Database (Denmark)

    Collins, N J; Prinsen, C A C; Christensen, R

    2016-01-01

    OBJECTIVE: To conduct a systematic review and meta-analysis to synthesize evidence regarding measurement properties of the Knee injury and Osteoarthritis Outcome Score (KOOS). DESIGN: A comprehensive literature search identified 37 eligible papers evaluating KOOS measurement properties in partici...

  8. Cardiovascular risk score in Rheumatoid Arthritis

    Science.gov (United States)

    Wagan, Abrar Ahmed; Mahmud, Tafazzul E Haque; Rasheed, Aflak; Zafar, Zafar Ali; Rehman, Ata ur; Ali, Amjad

    2016-01-01

Objective: To determine the 10-year cardiovascular risk score with the QRISK-2 and Framingham risk calculators in Rheumatoid Arthritis (RA) and non-RA subjects and to assess the usefulness of the QRISK-2 and Framingham calculators in both groups. Methods: During the study, 106 RA and 106 non-RA age- and sex-matched participants were enrolled from the outpatient department. Demographic data and questions regarding other study parameters were noted. After 14 hours of fasting, 5 ml of venous blood was drawn for cholesterol and HDL levels; laboratory tests were performed on a COBAS c III (ROCHE). The QRISK-2 and Framingham risk calculators were used to obtain individual 10-year CVD risk scores. Results: The mean age of the RA group was 45.1±9.5 years and of the non-RA group 43.7±8.2 years, with females predominating in both. The mean predicted 10-year score with the QRISK-2 calculator was 14.2±17.1% in the RA group and 13.2±19.0% in the non-RA group (p-value 0.122). The 10-year Framingham risk score was 12.9±10.4% in the RA group and 8.9±8.7% in the non-RA group (p-value 0.001). In the RA group, 24.5% of cases by QRISK-2 and 31.1% by FRS had predicted scores in the higher-risk category. Agreement between the two calculators was substantial in both groups (kappa = 0.618, RA group; kappa = 0.671, non-RA group). Conclusion: The QRISK-2 calculator is more appropriate, as it includes RA, ethnicity, CKD, and atrial fibrillation as factors in its risk assessment score. PMID:27375684

  9. Use score card to boost quality.

    Science.gov (United States)

    2002-10-01

    Keeping a score card can identify problem areas and track improvements. When specific goals are reached, staff are given rewards such as thank-you letters, tokens, or pizza parties. Staff are kept informed about the results of the score card through bulletin board postings, staff meetings, and the hospital Intranet. Data are collected with manual entry by nursing staff, chart review by performance improvement, and a computerized program.

  10. Pharmacophore-based similarity scoring for DOCK.

    Science.gov (United States)

    Jiang, Lingling; Rizzo, Robert C

    2015-01-22

Pharmacophore modeling incorporates geometric and chemical features of known inhibitors and/or targeted binding sites to rationally identify and design new drug leads. In this study, we have encoded a three-dimensional pharmacophore matching similarity (FMS) scoring function into the structure-based design program DOCK. Validation and characterization of the method are presented through pose reproduction, crossdocking, and enrichment studies. When used alone, FMS scoring dramatically improves pose reproduction success to 93.5% (∼20% increase) and reduces sampling failures to 3.7% (∼6% drop) compared to the standard energy score (SGE) across 1043 protein-ligand complexes. The combined FMS+SGE function further improves success to 98.3%. Crossdocking experiments using FMS and FMS+SGE scoring, for six diverse protein families, similarly showed improvements in success, provided proper pharmacophore references are employed. For enrichment, incorporating pharmacophores during sampling and scoring also yields improved outcomes in most cases when docking and rank-ordering libraries of known actives and decoys against 15 systems. Retrospective analyses of virtual screenings to three clinical drug targets (EGFR, IGF-1R, and HIVgp41) using X-ray structures of known inhibitors as pharmacophore references are also reported, including a customized FMS scoring protocol to bias on selected regions in the reference. Overall, the results and fundamental insights gained from this study should benefit the docking community in general, particularly researchers using the new FMS method to guide computational drug discovery with DOCK.

  11. Introducing the SKIN score: a validated scoring system to assess severity of mastectomy skin flap necrosis.

    Science.gov (United States)

    Lemaine, Valerie; Hoskin, Tanya L; Farley, David R; Grant, Clive S; Boughey, Judy C; Torstenson, Tiffany A; Jacobson, Steven R; Jakub, James W; Degnim, Amy C

    2015-09-01

    With increasing use of immediate breast reconstruction (IBR), mastectomy skin flap necrosis (MSFN) is a clinical problem that deserves further study. We propose a validated scoring system to discriminate MSFN severity and standardize its assessment. Women who underwent skin-sparing (SSM) or nipple-sparing mastectomy (NSM) and IBR from November 2009 to October 2010 were studied retrospectively. A workgroup of breast and plastic surgeons scored postoperative photographs using the skin ischemia necrosis (SKIN) score to assess depth and surface area of MSFN. We evaluated correlation of the SKIN score with reoperation for MSFN and its reproducibility in an external sample of surgeons. We identified 106 subjects (175 operated breasts: 103 SSM, 72 NSM) who had ≥1 postoperative photograph within 60 days. SKIN scores correlated strongly with need for reoperation for MSFN, with an AUC of 0.96 for SSM and 0.89 for NSM. External scores agreed well with the gold standard scores for the breast mound photographs with weighted kappa values of 0.82 (depth), 0.56 (surface area), and 0.79 (composite score). The agreement was similar for the nipple-areolar complex photographs: 0.75 (depth), 0.63 (surface area), and 0.79 (composite score). A simple scoring system to assess the severity of MSFN is proposed, incorporating both depth and surface area of MSFN. The SKIN score correlates strongly with the need for reoperation to manage MSFN and is reproducible among breast and plastic surgeons.

  12. GalaxyDock BP2 score: a hybrid scoring function for accurate protein-ligand docking

    Science.gov (United States)

    Baek, Minkyung; Shin, Woong-Hee; Chung, Hwan Won; Seok, Chaok

    2017-07-01

    Protein-ligand docking is a useful tool for providing atomic-level understanding of protein functions in nature and design principles for artificial ligands or proteins with desired properties. The ability to identify the true binding pose of a ligand to a target protein among numerous possible candidate poses is an essential requirement for successful protein-ligand docking. Many previously developed docking scoring functions were trained to reproduce experimental binding affinities and were also used for scoring binding poses. However, in this study, we developed a new docking scoring function, called GalaxyDock BP2 Score, by directly training the scoring power of binding poses. This function is a hybrid of physics-based, empirical, and knowledge-based score terms that are balanced to strengthen the advantages of each component. The performance of the new scoring function exhibits significant improvement over existing scoring functions in decoy pose discrimination tests. In addition, when the score is used with the GalaxyDock2 protein-ligand docking program, it outperformed other state-of-the-art docking programs in docking tests on the Astex diverse set, the Cross2009 benchmark set, and the Astex non-native set. GalaxyDock BP2 Score and GalaxyDock2 with this score are freely available at http://galaxy.seoklab.org/softwares/galaxydock.html.

  13. Comparison of Single-point and Two-point Difference Track Initiation Algorithms Using Position Measurements%利用位置测量的单点和两点差分跟踪起始算法的比较研究

    Institute of Scientific and Technical Information of China (English)

    MALLICK Mahendra; LA SCALA Barbara

    2008-01-01

We consider the problem of initializing the tracking filter of a target moving with nearly constant velocity when position-only (1D, 2D, or 3D) measurements are available. It is known that the Kalman filter is optimal for such a problem, provided it is correctly initialized. We compare a single-point and the well-known two-point difference track initialization algorithms. We analytically show that if the process noise approaches zero and the maximum speed of a target used to initialize the velocity variance approaches infinity, then the single-point algorithm reduces to the two-point difference algorithm. We present numerical results showing that the single-point algorithm performs consistently better than the two-point difference algorithm in the mean square error sense. We also present analytical results that support the conjecture that this is true in general.
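The two initializations compared in this record can be sketched in one dimension. This is a minimal illustration under standard textbook assumptions: the two-point difference state and covariance below are the usual forms from the tracking literature, and the single-point velocity variance uses one common convention based on an assumed maximum speed; function names are illustrative, not from the paper.

```python
import numpy as np

def two_point_init(z0, z1, T, r):
    """Two-point difference initialization from two consecutive 1D
    position measurements z0, z1 taken T seconds apart, with measurement
    noise variance r. State is [position, velocity]."""
    x = np.array([z1, (z1 - z0) / T])
    P = np.array([[r,     r / T],
                  [r / T, 2 * r / T**2]])
    return x, P

def single_point_init(z0, vmax, r):
    """Single-point initialization from one measurement: velocity starts
    at zero with a large variance derived from an assumed maximum target
    speed vmax (one common convention, not the paper's exact choice)."""
    x = np.array([z0, 0.0])
    P = np.diag([r, (vmax / 2.0) ** 2])
    return x, P

# Example: measurements 10.0 then 12.0 at a 1 s interval, variance 0.25.
x, P = two_point_init(z0=10.0, z1=12.0, T=1.0, r=0.25)
```

The paper's limiting result is visible here: as the process noise vanishes and vmax grows, the first Kalman update of the single-point filter converges to the two-point difference estimate.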

  14. Heart valve surgery: EuroSCORE vs. EuroSCORE II vs. Society of Thoracic Surgeons score

    Directory of Open Access Journals (Sweden)

    Muhammad Sharoz Rabbani

    2014-12-01

Full Text Available Background This is a validation study comparing the European System for Cardiac Operative Risk Evaluation (EuroSCORE) II with the previous additive (AES) and logistic EuroSCORE (LES) and the Society of Thoracic Surgeons' (STS) risk prediction algorithm, for patients undergoing valve replacement with or without bypass in Pakistan. Patients and Methods Clinical data of 576 patients undergoing valve replacement surgery between 2006 and 2013 were retrospectively collected and individual expected risks of death were calculated by all four risk prediction algorithms. Performance of these risk algorithms was evaluated in terms of discrimination and calibration. Results There were 28 deaths (4.8%) among 576 patients, which was lower than the predicted mortality of 5.16%, 6.96% and 4.94% by AES, LES and EuroSCORE II but higher than the 2.13% predicted by the STS scoring system. For single and double valve replacement procedures, EuroSCORE II was the best predictor of mortality, with the highest Hosmer-Lemeshow test (H-L) p-values (0.346 to 0.689) and areas under the receiver operating characteristic (ROC) curve (0.637 to 0.898). For valve plus concomitant coronary artery bypass grafting (CABG) patients, actual mortality was 1.88%. The STS calculator was the best predictor of mortality for this subgroup, with H-L p-values of 0.480 to 0.884 and ROC areas of 0.657 to 0.775. Conclusions For the Pakistani population, EuroSCORE II is an accurate predictor of individual operative risk in patients undergoing isolated valve surgery, whereas STS performs better in the valve plus CABG group.

  15. RISK FACTOR DIAGNOSTIC SCORE IN DIABETIC FOOT

    Directory of Open Access Journals (Sweden)

    Mohamed Shameem P. M

    2016-09-01

Full Text Available INTRODUCTION Diabetic foot ulcers vary in their clinical presentation and severity and therefore pose a challenging problem to the treating surgeon in predicting the clinical course and the end result of treatment. Clinical studies have shown that there are certain risk factors for the progression of foot ulcers in diabetics, and it may therefore be possible to predict the course of an ulcerated foot at presentation itself, instituting proper therapy without delay. In other words, if clinical scoring indicates that a particular ulcer has the highest chance of amputation, an early decision for the same can be taken, avoiding septic complications, inconvenience to the patient, long hospital stay and treatment costs. AIM OF THE STUDY To evaluate the above-mentioned scoring system in predicting the course of diabetic foot ulcers. MATERIALS AND METHODS 50 patients with diabetic foot attending the OPD of the Department of Surgery of the Government Hospital attached to Calicut Medical College were included in the present study. After thorough history taking and clinical examination, six risk factors (age, pedal vessels, renal function, neuropathy, radiological findings and ulcers) were assessed by assigning scoring points to each. The total number of points scored by a patient at the time of admission or OPD treatment was correlated with the final outcome, whether amputation or conservative management. All data were analysed using standard statistical methods. OBSERVATIONS AND RESULTS There were 12 females and 38 males (female-to-male ratio 1:3.1). All were aged above 30 years: twenty-four (48%) were between 30 and 60 years and twenty-six (52%) were above 60 years. Ten patients were treated conservatively, with risk scores ranging from 10 to 35. Six had single toe loss, with risk scores of 25 to 35. Six had multiple toe loss

  16. A scoring framework for predicting protein structures

    Science.gov (United States)

    Zou, Xiaoqin

    2013-03-01

We have developed a statistical mechanics-based iterative method to extract statistical atomic interaction potentials from known, non-redundant protein structures. Our method circumvents the long-standing reference state problem in deriving traditional knowledge-based scoring functions by using rapid iterations through a physical, global convergence function. The rapid convergence of this physics-based method, unlike other parameter optimization methods, makes it feasible to derive distance-dependent, all-atom statistical potentials while retaining scoring accuracy. The derived potentials, referred to as ITScore/Pro, have been validated using three diverse benchmarks: the high-resolution decoy set, the AMBER benchmark decoy set, and the CASP8 decoy set. Significant improvement in performance has been achieved. Finally, comparisons between the potentials of our model and the potentials of a knowledge-based scoring function with a randomized reference state have revealed the reason for the better performance of our scoring function, which could provide useful insight into the development of other physical scoring functions. The potentials developed in the present study are generally applicable for structural selection in protein structure prediction.

  17. SCORE SETS IN ORIENTED 3-PARTITE GRAPHS

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

Let D(U, V, W) be an oriented 3-partite graph with |U| = p, |V| = q and |W| = r. For any vertex x in D(U, V, W), let d⁺(x) and d⁻(x) be the outdegree and indegree of x, respectively. Define a_ui (or simply a_i) = q + r + d⁺(u_i) - d⁻(u_i), b_vj (or simply b_j) = p + r + d⁺(v_j) - d⁻(v_j) and c_wk (or simply c_k) = p + q + d⁺(w_k) - d⁻(w_k) as the scores of u_i in U, v_j in V and w_k in W, respectively. The set A of distinct scores of the vertices of D(U, V, W) is called its score set. In this paper, we prove that if a_1 is a non-negative integer, a_i (2 ≤ i ≤ n-1) are even positive integers and a_n is any positive integer, then for n ≥ 3 there exists an oriented 3-partite graph with the score set A = {a_1, a_1 + a_2, …, a_1 + a_2 + ⋯ + a_n}, except when A = {0, 2, 3}. Some more results for score sets in oriented 3-partite graphs are obtained.
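The score definitions in this abstract translate directly into code. The sketch below is a minimal illustration assuming vertices are keyed by part label and index; the data layout and function name are illustrative, not from the paper.

```python
def tripartite_scores(p, q, r, outdeg, indeg):
    """Score set of an oriented 3-partite graph D(U, V, W) with
    |U| = p, |V| = q, |W| = r, per the abstract's definitions:
    score = (sum of sizes of the other two parts) + outdegree - indegree.
    outdeg/indeg map keys like ('U', i) to that vertex's degree."""
    base = {'U': q + r, 'V': p + r, 'W': p + q}
    scores = set()
    for (part, i), d_out in outdeg.items():
        scores.add(base[part] + d_out - indeg[(part, i)])
    return sorted(scores)  # the score set A, as a sorted list

# Example: one vertex per part with arcs u -> v, v -> w, w -> u,
# so every vertex has outdegree 1 and indegree 1.
outdeg = {('U', 0): 1, ('V', 0): 1, ('W', 0): 1}
indeg = {('U', 0): 1, ('V', 0): 1, ('W', 0): 1}
A = tripartite_scores(1, 1, 1, outdeg, indeg)
```

In this 3-cycle example every score is 1 + 1 + 1 - 1 = 2, so the score set collapses to a single value.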

  18. Disease severity scoring systems in dermatology

    Directory of Open Access Journals (Sweden)

    Cemal Bilaç

    2016-06-01

Full Text Available Scoring systems have been developed to interpret disease severity objectively by evaluating the parameters of the disease. Body surface area, the visual analogue scale, and physician global assessment are the most frequently used scoring systems for evaluating the clinical severity of dermatological diseases. Apart from these, many specific scoring systems have also been developed for individual dermatological diseases, including acne (acne vulgaris, acne scars), alopecia (androgenetic alopecia, tractional alopecia), bullous diseases (autoimmune bullous diseases, toxic epidermal necrolysis), dermatitis (atopic dermatitis, contact dermatitis, dyshidrotic eczema), hidradenitis suppurativa, hirsutism, connective tissue diseases (dermatomyositis, skin involvement of systemic lupus erythematosus (LE), discoid LE, scleroderma), lichen planopilaris, mastocytosis, melanocytic lesions, melasma, onychomycosis, oral lichen planus, pityriasis rosea, psoriasis (psoriasis vulgaris, psoriatic arthritis, nail psoriasis), sarcoidosis, urticaria, and vitiligo. Disease severity scoring methods are ever more extensively used in clinical dermatology practice to form an opinion about prognosis by determining disease severity; to decide on the most suitable treatment modality for the patient; to evaluate the efficacy of the applied medication; and to compare the efficacy of different treatment methods in clinical studies.

  19. Gambling scores for earthquake predictions and forecasts

    Science.gov (United States)

    Zhuang, Jiancang

    2010-04-01

This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular scheme of forecasts and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines how many reputation points the forecaster gains if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. The method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
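One natural reading of the fair rule described above: if the reference model assigns probability p to the predicted event, a fair payoff is the one that makes the forecaster's expected reputation change zero under that model. The sketch below encodes that reading; it is an illustration of the idea, not the paper's exact formulation.

```python
def gambling_gain(bet, p_ref, success):
    """Reputation change for one bet under a fair rule against a
    reference model that assigns probability p_ref to the event.
    On success the forecaster gains bet * (1 - p_ref) / p_ref
    (a long shot pays more); on failure the bet is lost."""
    if not 0.0 < p_ref < 1.0:
        raise ValueError("reference probability must be in (0, 1)")
    return bet * (1.0 - p_ref) / p_ref if success else -bet

# Betting 10 points on an event the reference model rates at 25%:
win_gain = gambling_gain(10.0, 0.25, True)    # payoff if it occurs
loss = gambling_gain(10.0, 0.25, False)       # forfeit if it does not
```

Fairness means the house breaks even against a forecaster no better than the reference model: p_ref * win_gain + (1 - p_ref) * loss = 0.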

  20. [Overview of regulatory aspects guiding tablet scoring].

    Science.gov (United States)

    Teixeira, Maíra Teles; Sá-Barreto, Lívia Cristina Lira; Silva, Dayde Lane Mendonça; Cunha-Filho, Marcílio Sergio Soares

    2016-06-01

    Tablet scoring is a controversial but common practice used to adjust doses, facilitate drug intake, or lower the cost of drug treatment, especially in children and the elderly. The risks of tablet scoring are mainly related to inaccuracies in the resulting dose and stability problems. The aim of this article is to provide an overview of worldwide guidelines regarding tablet scoring. We found that regulatory health agencies in Mercosur countries as well as other South American countries do not have published standards addressing tablet splitting. Among the surveyed health agencies, the Food and Drug Administration (FDA) in the United States is the only one to present standards, ranging from splitting instructions to regulation of the manufacturing process. The concept of functional scoring implemented by the FDA has introduced some level of guarantee as to the ability of tablets to be split. In conclusion, technical and scientific bases are still insufficient to guide health rules on this subject, making the decision on scoring, in certain situations, random and highly risky to public health. The need for more detailed regulation is vital to ensure the safety of tablet medications.

  1. Development of a Mediterranean diet score adapted to Japan and its relation to obesity risk

    Directory of Open Access Journals (Sweden)

    Masao Kanauchi

    2016-11-01

Full Text Available Background: The Mediterranean diet (MD) is well known as a healthy diet that protects against several chronic diseases. However, there is no appropriate and easy index to assess adherence to the MD pattern in Japan. Objective: The aim of this study was to develop a novel instrument to measure MD adherence adapted to a Japanese diet and to examine its association with overweight/obesity risk. Methods: A cross-sectional nutritional survey provided the data for construction of a novel MD score. In total, 1,048 subjects who were employees and university students, aged 18–68 years (645 men and 403 women), completed a 58-item brief-type self-administered dietary history questionnaire. We constructed a Japanese-adapted MD score (jMD score) focusing on 13 components. Adherence to the jMD was categorized as low (score 0–4), moderate (5–7), or high (8–13). Results: Men had higher jMD scores than women, and adherence to the jMD score increased with age. Only 11.6% of subjects showed high adherence to the jMD, whereas 29.6% showed low adherence. A higher jMD adherence was associated with a higher intake of favorable nutrients with the exception of salt. The jMD adherence was significantly associated with a reduced likelihood of having overweight/obesity for the highest category compared with the lowest category (odds ratio [OR] 0.50, 95% confidence interval [CI] 0.30–0.85, p-trend=0.017) after adjusting for age, sex, smoking, physical activity, alcohol intake, and hypertension. A two-point increment in jMD score was related to a reduced likelihood of having overweight/obesity, with an odds ratio of 0.76 (95% CI 0.65–0.90, p=0.002). Conclusions: Our novel jMD score confirmed reasonable associations with nutrient intakes, and higher MD adherence was associated with a lower prevalence of overweight/obesity.

  2. Development of a Mediterranean diet score adapted to Japan and its relation to obesity risk.

    Science.gov (United States)

    Kanauchi, Masao; Kanauchi, Kimiko

    2016-01-01

    The Mediterranean diet (MD) is well known as a healthy diet that protects against several chronic diseases. However, there is no appropriate and easy index to assess adherence to the MD pattern in Japan. The aim of this study was to develop a novel instrument to measure MD adherence adapted to a Japanese diet and to examine its association with overweight/obesity risk. A cross-sectional nutritional survey provided the data for construction of a novel MD score. In total, 1,048 subjects who were employees and university students, aged 18-68 years (645 men and 403 women), completed a 58-item brief-type self-administered dietary history questionnaire. We constructed a Japanese-adapted MD score (jMD score) focusing on 13 components. Adherence to the jMD was categorized as low (score 0-4), moderate (5-7), or high (8-13). Men had higher jMD scores than women, and adherence to the jMD score increased with age. Only 11.6% of subjects showed high adherence to the jMD, whereas 29.6% showed low adherence. A higher jMD adherence was associated with a higher intake of favorable nutrients with the exception of salt. The jMD adherence was significantly associated with a reduced likelihood of having overweight/obesity for the highest category compared with lowest category (odds ratio [OR] 0.50, 95% confidence interval [CI] 0.30-0.85, p-trend=0.017) after adjusting for age, sex, smoking, physical activity, alcohol intake, and hypertension. A two-point increment in jMD score was related to a reduced likelihood of having overweight/obesity with an odds ratio of 0.76 (95% CI 0.65-0.90, p=0.002). Our novel jMD score confirmed reasonable associations with nutrient intakes, and higher MD adherence was associated with a lower prevalence of overweight/obesity.
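The adherence cut-offs reported in this abstract can be encoded directly. A minimal sketch (the function name is illustrative; the cut-offs are the ones stated above):

```python
def jmd_category(score):
    """Adherence category for the 13-component jMD score, using the
    cut-offs reported in the abstract: low 0-4, moderate 5-7, high 8-13."""
    if not 0 <= score <= 13:
        raise ValueError("jMD score ranges from 0 to 13")
    if score <= 4:
        return "low"
    if score <= 7:
        return "moderate"
    return "high"
```

For example, a subject scoring one point on six of the thirteen components would fall in the "moderate" band.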

  3. Prognostic Value of TIMI Score versus GRACE Score in ST-segment Elevation Myocardial Infarction

    Directory of Open Access Journals (Sweden)

    Luis C. L. Correia

    2014-08-01

Full Text Available Background: The TIMI score for ST-segment elevation myocardial infarction (STEMI) was created and validated specifically for this clinical scenario, while the GRACE score is generic to any type of acute coronary syndrome. Objective: Between the TIMI and GRACE scores, identify the one with better prognostic performance in patients with STEMI. Methods: We included 152 individuals consecutively admitted for STEMI. The TIMI and GRACE scores were tested for their discriminatory ability (C-statistic) and calibration (Hosmer-Lemeshow) in relation to hospital death. Results: The TIMI score showed an equal distribution of patients across the ranges of low, intermediate and high risk (39%, 27% and 34%, respectively), as opposed to the GRACE score, which showed a predominant distribution at low risk (80%, 13% and 7%, respectively). Case-fatality was 11%. The C-statistic of the TIMI score was 0.87 (95% CI 0.76 to 0.98), similar to GRACE (0.87, 95% CI 0.75 to 0.99; p = 0.71). The TIMI score showed satisfactory calibration, represented by χ2 = 1.4 (p = 0.92), well above the calibration of the GRACE score, which showed χ2 = 14 (p = 0.08). This calibration is reflected in the expected incidence ranges for low, intermediate and high risk according to the TIMI score (0%, 4.9% and 25%, respectively), in contrast to GRACE (2.4%, 25% and 73%), whose middle-range incidence was inappropriate. Conclusion: Although the scores show similar discriminatory capacity for hospital death, the TIMI score had better calibration than GRACE. These findings need to be validated in populations with different risk profiles.

  4. Evaluation of the "medication fall risk score".

    Science.gov (United States)

    Yazdani, Cyrus; Hall, Scott

    2017-01-01

Results of a study evaluating the predictive validity of a fall screening tool in hospitalized patients are reported. Administrative claims data from two hospitals were analyzed to determine the discriminatory ability of the "medication fall risk score" (RxFS), a medication review fall-risk screening tool that is designed for use in conjunction with nurse-administered tools such as the Morse Fall Scale (MFS). Through analysis of data on administered medications and documented falls in a population of adults who underwent fall-risk screening at hospital admission over a 15-month period (n = 33,058), the predictive value of admission MFS scores, alone or in combination with retrospectively calculated RxFS-based risk scores, was assessed. Receiver operating characteristic (ROC) curve analysis and net reclassification improvement (NRI) analysis were used to evaluate improvements in risk prediction with the addition of RxFS data to the prediction model. The area under the ROC curve for the predictive model for falls comprising both MFS and RxFS scores was computed as 0.8014, which was greater than the area under the ROC curve associated with use of the MFS alone (0.7823, p = 0.0030). Screening based on MFS scores alone had 81.25% sensitivity and 61.37% specificity. Combined use of RxFS and MFS scores resulted in 82.42% sensitivity and 66.65% specificity (NRI = 0.0587, p = 0.0003). Reclassification of fall risk based on coadministration of the MFS and the RxFS tools resulted in a modest improvement in specificity without compromising sensitivity. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
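The sensitivity and specificity figures reported above are standard confusion-matrix quantities. A minimal sketch on synthetic data (names and data are illustrative, not from the study):

```python
def sensitivity_specificity(fell, flagged):
    """Sensitivity and specificity of a binary fall-risk screen.
    `fell` and `flagged` are parallel boolean sequences: observed fall
    during the stay, and screen-positive at admission."""
    tp = sum(f and g for f, g in zip(fell, flagged))      # true positives
    fn = sum(f and not g for f, g in zip(fell, flagged))  # missed falls
    tn = sum(not f and not g for f, g in zip(fell, flagged))
    fp = sum(not f and g for f, g in zip(fell, flagged))  # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Four toy patients: one fall caught, one fall missed,
# one true negative, one false alarm.
sens, spec = sensitivity_specificity([True, True, False, False],
                                     [True, False, False, True])
```

On real screening data, raising specificity without lowering sensitivity, as the combined MFS+RxFS screen did here, means fewer false alarms at the same miss rate.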

  5. NCACO-score: An effective main-chain dependent scoring function for structure modeling

    Directory of Open Access Journals (Sweden)

    Dong Xiaoxi

    2011-05-01

    Full Text Available Abstract Background Development of effective scoring functions is a critical component to the success of protein structure modeling. Previously, many efforts have been dedicated to the development of scoring functions. Despite these efforts, development of an effective scoring function that can achieve both good accuracy and fast speed still presents a grand challenge. Results Based on a coarse-grained representation of a protein structure by using only four main-chain atoms: N, Cα, C and O, we develop a knowledge-based scoring function, called NCACO-score, that integrates different structural information to rapidly model protein structure from sequence. In testing on the Decoys'R'Us sets, we found that NCACO-score can effectively recognize native conformers from their decoys. Furthermore, we demonstrate that NCACO-score can effectively guide fragment assembly for protein structure prediction, which has achieved a good performance in building the structure models for hard targets from CASP8 in terms of both accuracy and speed. Conclusions Although NCACO-score is developed based on a coarse-grained model, it is able to discriminate native conformers from decoy conformers with high accuracy. NCACO is a very effective scoring function for structure modeling.

  6. What Do Test Scores Really Mean? A Latent Class Analysis of Danish Test Score Performance

    DEFF Research Database (Denmark)

    Munk, Martin D.; McIntosh, James

    2014-01-01

Latent class Poisson count models are used to analyze a sample of Danish test score results from a cohort of individuals born in 1954-55, tested in 1968, and followed until 2011. The procedure takes account of unobservable effects as well as excessive zeros in the data. We show that the test scores ... of intelligence explain a significant proportion of the variation in test scores. This adds to the complexity of interpreting test scores and suggests that school culture and possible incentive problems make it more difficult to understand what the tests measure.

  7. Vinardo: A Scoring Function Based on Autodock Vina Improves Scoring, Docking, and Virtual Screening.

    Directory of Open Access Journals (Sweden)

    Rodrigo Quiroga

Full Text Available Autodock Vina is a very popular, and highly cited, open source docking program. Here we present a scoring function which we call Vinardo (Vina RaDii Optimized). Vinardo is based on Vina, and was trained through a novel approach, on state of the art datasets. We show that the traditional approach to train empirical scoring functions, using linear regression to optimize the correlation of predicted and experimental binding affinities, does not result in a function with optimal docking capabilities. On the other hand, a combination of scoring, minimization, and re-docking on carefully curated training datasets allowed us to develop a simplified scoring function with optimum docking performance. This article provides an overview of the development of the Vinardo scoring function, highlights its differences with Vina, and compares the performance of the two scoring functions in scoring, docking and virtual screening applications. Vinardo outperforms Vina in all tests performed, for all datasets analyzed. The Vinardo scoring function is available as an option within Smina, a fork of Vina, which is freely available under the GNU Public License v2.0 from http://smina.sf.net. Precompiled binaries, source code, documentation and a tutorial for using Smina to run the Vinardo scoring function are available at the same address.

  8. Vinardo: A Scoring Function Based on Autodock Vina Improves Scoring, Docking, and Virtual Screening.

    Science.gov (United States)

    Quiroga, Rodrigo; Villarreal, Marcos A

    2016-01-01

    Autodock Vina is a very popular, and highly cited, open source docking program. Here we present a scoring function which we call Vinardo (Vina RaDii Optimized). Vinardo is based on Vina, and was trained through a novel approach, on state of the art datasets. We show that the traditional approach to train empirical scoring functions, using linear regression to optimize the correlation of predicted and experimental binding affinities, does not result in a function with optimal docking capabilities. On the other hand, a combination of scoring, minimization, and re-docking on carefully curated training datasets allowed us to develop a simplified scoring function with optimum docking performance. This article provides an overview of the development of the Vinardo scoring function, highlights its differences with Vina, and compares the performance of the two scoring functions in scoring, docking and virtual screening applications. Vinardo outperforms Vina in all tests performed, for all datasets analyzed. The Vinardo scoring function is available as an option within Smina, a fork of Vina, which is freely available under the GNU Public License v2.0 from http://smina.sf.net. Precompiled binaries, source code, documentation and a tutorial for using Smina to run the Vinardo scoring function are available at the same address.

  9. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  10. Track score processing of multiple dissimilar sensors

    OpenAIRE

    Patsikas, Dimitrios

    2007-01-01

    In this thesis, a data fusion problem is studied in which a number of different types of sensors are deployed in the vicinity of a ballistic missile launch. An objective of this thesis is to calculate a scoring function for each sensor track; the track file with the best (optimum) track score can then be used for guiding an interceptor to the threat within the boost phase. Seven active ground-based radars, two space-based passive infrared sensors and two active light detection and rangin...

  11. Assigning Numerical Scores to Linguistic Expressions

    Directory of Open Access Journals (Sweden)

    María Jesús Campión

    2017-07-01

    Full Text Available In this paper, we study different methods of scoring linguistic expressions defined on a finite set, in the search for a linear order that ranks all those possible expressions. Among them, particular attention is paid to the canonical extension, and its representability through distances in a graph plus some suitable penalization of imprecision. The relationship between this setting and the classical problems of numerical representability of orderings, as well as extension of orderings from a set to a superset is also explored. Finally, aggregation procedures of qualitative rankings and scorings are also analyzed.

  12. A lumbar disc surgery predictive score card.

    Science.gov (United States)

    Finneson, B E

    1978-06-01

    A lumbar disc surgery predictive score card or questionnaire has been developed to assess potential candidates for excision of a herniated lumbar disc who have not previously undergone lumbar spine surgery. It is not designed to encompass patients who are being considered for other types of lumbar spine surgery, such as decompressive laminectomy or fusion. In an effort to make the "score card" usable by almost all physicians who are involved in lumbar disc surgery, only studies which have broad acceptance and are generally employed are included. Studies which have less widespread use such as electromyogram, discogram, venogram, special psychologic studies (MMPI, pain drawings) have been purposely excluded.

  13. [Intraoperative crisis and surgical Apgar score].

    Science.gov (United States)

    Oshiro, Masakatsu; Sugahara, Kazuhiro

    2014-03-01

    Intraoperative crisis is an inevitable event for anesthesiologists. A crisis requires effective and coordinated management once it happens, but it is difficult to manage crises properly under extremely stressful conditions. Recently, it was reported that the use of surgical crisis checklists is associated with significant improvement in the management of operating-room crises in a high-fidelity simulation study. Careful preoperative evaluation, proper intraoperative management, and the use of intraoperative crisis checklists will be needed for safer perioperative care in the future. Postoperative complication is a serious public health problem. It reduces the quality of life of patients and raises medical costs. Careful management of surgical patients is required according to their postoperative condition to prevent postoperative complications. The 10-point surgical Apgar score, calculated from intraoperative estimated blood loss, lowest mean arterial pressure, and lowest heart rate, is a simple and readily available scoring system for predicting postoperative complications. It predicts higher-than-average risk of postoperative complications and death within 30 days of surgery. The surgical Apgar score is a bridge between proper intraoperative and postoperative care. Anesthesiologists should make an effort to reduce postoperative complications, and this score is a tool for doing so.
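
A minimal sketch of the 10-point surgical Apgar calculation described above. The point cut-offs follow the commonly published table, which is an assumption here; this abstract does not spell them out.

```python
def surgical_apgar(ebl_ml, lowest_map_mmhg, lowest_hr_bpm):
    """10-point surgical Apgar score from estimated blood loss (mL),
    lowest mean arterial pressure (mmHg), and lowest heart rate (bpm).
    Cut-offs follow the commonly published point table (assumed)."""
    if ebl_ml > 1000:
        ebl_pts = 0
    elif ebl_ml > 600:
        ebl_pts = 1
    elif ebl_ml > 100:
        ebl_pts = 2
    else:
        ebl_pts = 3
    if lowest_map_mmhg < 40:
        map_pts = 0
    elif lowest_map_mmhg < 55:
        map_pts = 1
    elif lowest_map_mmhg < 70:
        map_pts = 2
    else:
        map_pts = 3
    if lowest_hr_bpm > 85:
        hr_pts = 0
    elif lowest_hr_bpm > 75:
        hr_pts = 1
    elif lowest_hr_bpm > 65:
        hr_pts = 2
    elif lowest_hr_bpm > 55:
        hr_pts = 3
    else:
        hr_pts = 4
    return ebl_pts + map_pts + hr_pts
```

A quiet case (50 mL blood loss, MAP never below 80, HR never above 50) scores the full 10 points; heavy bleeding with hypotension and tachycardia scores near 0.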

  14. Local Observed-Score Kernel Equating

    Science.gov (United States)

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  15. Progress scored in forest pest studies

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Teaming up with co-workers from the State Forestry Administration (SFA), researchers at the CAS Institute of Zoology (IOZ) have scored encouraging progress in their studies of pheromone-based technology against the red turpentine beetle (Dendroctonus valens LeConte).

  16. Stability of WISC-IV process scores.

    Science.gov (United States)

    Ryan, Joseph J; Umfleet, Laura Glass; Kane, Alexa

    2013-01-01

    Forty-three students were administered the complete Wechsler Intelligence Scale for Children-Fourth Edition on two occasions approximately 11 months apart, including the seven process components of Block Design No Time Bonus, Digit Span Forward (DSF), Digit Span Backward (DSB), Cancellation Random (CAR), Cancellation Structured (CAS), Longest Digit Span Forward (LDSF), and Longest Digit Span Backward (LDSB). Mean ages at first and second testing were 7.77 years (SD = 1.91) and 8.74 years (SD = 1.93), respectively. Mean Full-Scale IQ at initial testing was 111.63 (SD = 10.71). Process score stability coefficients ranged from .75 on DSF to .32 on CAS. Discrepancy score stabilities ranged from .45 on DSF minus DSB to .05 on CAS minus CAR. Approximately 21% of participants increased their LDSF on retest, and 16.3% showed a gain on LDSB. Caution must be exercised when interpreting process scores, and interpretation of discrepancy scores should probably be avoided.
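
The stability coefficients reported above are test-retest correlations. A minimal sketch (plain Pearson correlation; the function name is ours):

```python
def stability_coefficient(first, second):
    """Test-retest stability: Pearson correlation between the scores
    obtained at the two administrations."""
    n = len(first)
    m1, m2 = sum(first) / n, sum(second) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(first, second))
    var1 = sum((a - m1) ** 2 for a in first)
    var2 = sum((b - m2) ** 2 for b in second)
    return cov / (var1 * var2) ** 0.5
```

A coefficient near .75 (as for DSF) indicates reasonably consistent rank ordering across the two testings; values near .32 (CAS) or .05 (CAS minus CAR) do not.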

  17. What do educational test scores really measure?

    DEFF Research Database (Denmark)

    McIntosh, James; D. Munk, Martin

    Latent class Poisson count models are used to analyze a sample of Danish test score results from a cohort of individuals born in 1954-55 and tested in 1968. The procedure takes account of unobservable effects as well as excessive zeros in the data. The bulk of unobservable effects are uncorrelate...

  18. The FAt Spondyloarthritis Spine Score (FASSS)

    DEFF Research Database (Denmark)

    Pedersen, Susanne Juhl; Zhao, Zheng; Lambert, Robert Gw

    2013-01-01

    Studies have shown that fat lesions follow resolution of inflammation in the spine of patients with axial spondyloarthritis (SpA). Fat lesions at vertebral corners have also been shown to predict development of new syndesmophytes. Therefore, scoring of fat lesions in the spine may constitute both...

  19. Critical Thinking: More than Test Scores

    Science.gov (United States)

    Smith, Vernon G.; Szymanski, Antonia

    2013-01-01

    This article is for practicing or aspiring school administrators. The demand for excellence in public education has led to an emphasis on standardized test scores. This article explores the development of a professional enhancement program designed to prepare teachers to teach higher order thinking skills. Higher order thinking is the primary…

  20. Writing Plan Quality: Relevance to Writing Scores

    Science.gov (United States)

    Chai, Constance

    2006-01-01

    If writing matters, how can we improve it? This study investigated the nature of writing plan quality and its relationship to the ensuing writing scores. Data were drawn from the 1998 Provincial Learning Assessment Programme (PLAP) in Writing, which was administered to pupils in Grades 4, 7, and 10 across British Columbia, Canada. Common features…

  1. Multidimensional CAT Item Selection Methods for Domain Scores and Composite Scores: Theory and Applications

    Science.gov (United States)

    Yao, Lihua

    2012-01-01

    Multidimensional computer adaptive testing (MCAT) can provide higher precision and reliability or reduce test length when compared with unidimensional CAT or with the paper-and-pencil test. This study compared five item selection procedures in the MCAT framework for both domain scores and overall scores through simulation by varying the structure…

  2. Relationship between Students' Scores on Research Methods and Statistics, and Undergraduate Project Scores

    Science.gov (United States)

    Ossai, Peter Agbadobi Uloku

    2016-01-01

    This study examined the relationship between students' scores on Research Methods and statistics, and undergraduate project at the final year. The purpose was to find out whether students matched knowledge of research with project-writing skill. The study adopted an expost facto correlational design. Scores on Research Methods and Statistics for…

  3. Analysis of WAIS-IV Index Score Scatter Using Significant Deviation from the Mean Index Score

    Science.gov (United States)

    Gregoire, Jacques; Coalson, Diane L.; Zhu, Jianjun

    2011-01-01

    The Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) does not include verbal IQ and performance IQ scores, as provided in previous editions of the scale; rather, this edition provides comparisons among four index scores, allowing analysis of an individual's WAIS-IV performance in more discrete domains of cognitive ability. To supplement…

  5. Multidimensional Linking for Domain Scores and Overall Scores for Nonequivalent Groups

    Science.gov (United States)

    Yao, Lihua

    2011-01-01

    The No Child Left Behind Act requires state assessments to report not only overall scores but also domain scores. To see the information on students' overall achievement, progress, and detailed strengths and weaknesses, and thereby identify areas for improvement in educational quality, students' performances across years or across forms need to be…

  7. Lower bounds to the reliabilities of factor score estimators

    NARCIS (Netherlands)

    Hessen, D.J.

    2017-01-01

    Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone’s factor score estimators, Bartlett’s factor score

  8. Optimal cutting scores using a linear loss function

    NARCIS (Netherlands)

    Linden, van der Wim J.; Mellenbergh, Gideon J.

    1977-01-01

    The situation is considered in which a total score on a test is used for classifying examinees into two categories: "accepted" (with scores above a cutting score on the test) and "not accepted" (with scores below the cutting score). A value on the latent variable is fixed in advance; examinees above

  9. Effects of using a scoring guide on essay scores: generalizability theory.

    Science.gov (United States)

    Kan, Adnan

    2007-12-01

    This study was conducted to test the effect of task level and item consistency when two conditions, with and without the assistance of a scoring guide, were used to score essays. The use of generalizability theory was proposed as a framework for examining the effect of task variability and use of the scoring guide on achievement measures. Participants were 21 students in Grade 9 enrolled in regular Turkish language and literature classes; 11 were male and 10 were female. Ten teachers from the city were raters. In the past, raters of essays have given varied judgements of writing quality. Utilizing decision and generalizability theories, variation in scores was evaluated using a three-way (person × rater × task) analysis of variance design. The scoring guide was beneficial in reducing variability when evaluating grammar and reading comprehension but not as helpful when assessing knowledge of concepts.

  10. Distribution of Errors Reported by LOD2 LODStats Project

    NARCIS (Netherlands)

    Hoekstra, R.J.; Groth, P.T.

    Description: Results of discussion groups at the Linked Science Workshop 2013, held at the International Semantic Web Conference (http://linkedscience.org/events/lisc2013/). Participants were asked to develop matrices describing how semantic web/linked data solutions can help address reproducibility/re* pro

  11. The accuracy rate of Alvarado score, ultrasonography, and ...

    African Journals Online (AJOL)

    2013-09-30

    Sep 30, 2013 ... the patients have atypical clinical and laboratory findings. In ... recorded on the study form for data collection. The Alvarado score was calculated as described in the literature.[5] The Alvarado score is a 10-point scoring system.
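
For reference, the 10-point Alvarado (MANTRELS) system mentioned above assigns 2 points each to right-lower-quadrant tenderness and leukocytosis and 1 point to each remaining sign. A sketch under that commonly published weighting (the weighting is not spelled out in this snippet, so treat it as an assumption):

```python
def alvarado_score(migration, anorexia, nausea, rlq_tenderness,
                   rebound, fever, leukocytosis, left_shift):
    """10-point Alvarado (MANTRELS) score; each argument is True/False.
    RLQ tenderness and leukocytosis carry 2 points each, the rest 1
    (commonly published weighting, assumed here)."""
    return (migration + anorexia + nausea + 2 * rlq_tenderness
            + rebound + fever + 2 * leukocytosis + left_shift)
```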

  12. Application of decision trees in credit scoring

    Directory of Open Access Journals (Sweden)

    Ljiljanka Kvesić

    2013-12-01

    Full Text Available Banks are particularly exposed to credit risk due to the nature of their operations. Inadequate assessment of the borrower directly causes losses. The financial crisis the global economy is still going through has clearly shown what kind of problems can arise from an inadequate credit policy. Thus, the primary task of bank managers is to minimise credit risk. Credit scoring models were developed to support managers in assessing the creditworthiness of borrowers. This paper presents a decision tree based on the exhaustive CHAID algorithm as one such model. Since the application of credit scoring models has not been adequately explored in Croatian banking theory and practice, this paper aims not only to determine the characteristics that are crucial for predicting default, but also to highlight the importance of a quantitative approach in assessing the creditworthiness of borrowers.
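
The heart of a CHAID-style split is choosing the predictor whose categories best separate defaulters, judged by the chi-square statistic. A simplified sketch (real exhaustive CHAID also merges categories and applies adjusted p-values; the attribute names below are invented):

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    total = sum(row_tot)
    return sum((obs - row_tot[i] * col_tot[j] / total) ** 2
               / (row_tot[i] * col_tot[j] / total)
               for i, row in enumerate(table)
               for j, obs in enumerate(row))

def best_split(records, defaulted):
    """Pick the attribute whose partition of the default flag yields the
    largest chi-square: the core idea of a CHAID split."""
    best_attr, best_stat = None, -1.0
    for attr in records[0]:
        cats = sorted({r[attr] for r in records})
        table = [[sum(1 for r, y in zip(records, defaulted)
                      if r[attr] == c and y == v) for v in (0, 1)]
                 for c in cats]
        stat = chi_square(table)
        if stat > best_stat:
            best_attr, best_stat = attr, stat
    return best_attr

# Hypothetical applicants: employment status perfectly predicts default.
applicants = [{'employed': 'yes', 'region': 'north'},
              {'employed': 'yes', 'region': 'south'},
              {'employed': 'no', 'region': 'north'},
              {'employed': 'no', 'region': 'south'}]
root_attribute = best_split(applicants, [0, 0, 1, 1])
```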

  13. Sleep scoring using artificial neural networks.

    Science.gov (United States)

    Ronzhina, Marina; Janoušek, Oto; Kolářová, Jana; Nováková, Marie; Honzík, Petr; Provazník, Ivo

    2012-06-01

    Rapid development of computer technologies leads to the intensive automation of many processes traditionally performed by human experts. One sphere characterized by the introduction of new high-intelligence technologies substituting for analysis performed by humans is sleep scoring. This is a classification task and can be solved, among other classification methods, by use of artificial neural networks (ANN). ANNs are parallel adaptive systems suitable for solving non-linear problems. Using ANNs for automatic sleep scoring is especially promising because new ANN learning algorithms allow faster classification without decreasing performance. Both appropriate preparation of training data and selection of the ANN model make it possible to perform effective and correct recognition of the relevant sleep stages. Such an approach is highly topical, considering that no automatic scorer utilizing ANN technology is available at present.

  14. Shower reconstruction in TUNKA-HiSCORE

    Energy Technology Data Exchange (ETDEWEB)

    Porelli, Andrea; Wischnewski, Ralf [DESY-Zeuthen, Platanenallee 6, 15738 Zeuthen (Germany)

    2015-07-01

    The Tunka-HiSCORE detector is a non-imaging wide-angle EAS Cherenkov array designed as an alternative technology for gamma-ray physics above 10 TeV and to study the spectrum and composition of cosmic rays above 100 TeV. An engineering array with nine stations (HiS-9) was deployed in October 2013 on the site of the Tunka experiment in Russia. In November 2014, 20 more HiSCORE stations were installed, covering a total array area of 0.24 km². We describe the detector setup, the role of precision time measurement, and give results from the innovative White Rabbit time-synchronization technology. Results of air-shower reconstruction are presented and compared with MC simulations for both the HiS-9 and the HiS-29 detector arrays.

  15. Right tail increasing dependence between scores

    Science.gov (United States)

    Fernández, M.; García, Jesús E.; González-López, V. A.; Romano, N.

    2017-07-01

    In this paper we investigate the behavior of the conditional probability Prob(U > u|V > v) for two records coming from students of an undergraduate course, where U is the score in calculus I, scaled to [0, 1], and V is the score in physics, scaled to [0, 1]; the physics subject is part of the university's admission test. For purposes of comparison, we consider two different undergraduate courses, electrical engineering and mechanical engineering, over nine years, from 2003 to 2011. From a Bayesian perspective we estimate Prob(U > u|V > v) year by year and course by course. We conclude that U is right tail increasing in V, in both courses and for all the years. Moreover, over these nine years, we observe different ranges of variability for the estimated probabilities of electrical engineering when compared to those of mechanical engineering.
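
The empirical counterpart of Prob(U > u | V > v) studied above is straightforward to estimate from score pairs. The paper works from a Bayesian perspective, which would add a prior; this sketch is the plain frequentist estimate with invented scores:

```python
def cond_exceedance(pairs, u, v):
    """Empirical estimate of Prob(U > u | V > v) from (U, V) score pairs,
    both scaled to [0, 1]. Returns None if no pair has V > v."""
    tail = [pu for pu, pv in pairs if pv > v]
    return sum(1 for pu in tail if pu > u) / len(tail) if tail else None

# Invented (calculus, physics) score pairs.
scores = [(0.9, 0.8), (0.7, 0.9), (0.2, 0.5), (0.4, 0.95)]
p = cond_exceedance(scores, 0.5, 0.7)
```

"Right tail increasing" means this quantity is non-decreasing in v for each fixed u: the better a student did in physics, the likelier a high calculus score.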

  16. Soetomo score: score model in early identification of acute haemorrhagic stroke

    Directory of Open Access Journals (Sweden)

    Moh Hasan Machfoed

    2016-06-01

    Full Text Available Aim of the study: When financial or facility constraints preclude brain imaging, a score model can be used to predict the occurrence of acute haemorrhagic stroke. Accordingly, this study attempts to develop a new score model, called the Soetomo score. Material and methods: The researchers performed a cross-sectional study of 176 acute stroke patients with onset of ≤24 hours who visited the emergency unit of Dr. Soetomo Hospital from July 14th to December 14th, 2014. The diagnosis of haemorrhagic stroke was confirmed by head computed tomography scan. Seven predictors of haemorrhagic stroke were analysed by bivariate and multivariate analyses. A multiple discriminant analysis then yielded the equation of the Soetomo score model. The receiver operating characteristic procedure provided the area under the curve and the intersection point identifying haemorrhagic stroke. Afterward, the diagnostic test values were determined. Results: The equation of the Soetomo score model was (3 × loss of consciousness) + (3.5 × headache) + (4 × vomiting) − 4.5. The area under the curve of this score was 88.5% (95% confidence interval = 83.3–93.7%). At the cut-off value of ≥−0.75, the score reached a sensitivity of 82.9%, specificity of 83%, positive predictive value of 78.8%, negative predictive value of 86.5%, positive likelihood ratio of 4.88, negative likelihood ratio of 0.21, false negative rate of 17.1%, false positive rate of 17%, and accuracy of 83%. Conclusions: The Soetomo score model with a cut-off value of ≥−0.75 can properly identify acute haemorrhagic stroke under financial or facility constraints on brain imaging.
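
The abstract's discriminant equation and cut-off translate directly into code. Coding each symptom as 1 if present and 0 if absent is our assumption; the abstract does not state the coding explicitly.

```python
def soetomo_score(loss_of_consciousness, headache, vomiting):
    """Soetomo score from the discriminant equation in the abstract;
    each predictor is coded 1 if present, 0 if absent (assumed coding)."""
    return 3 * loss_of_consciousness + 3.5 * headache + 4 * vomiting - 4.5

def suggests_haemorrhagic(score):
    """The abstract's decision rule: score >= -0.75 identifies acute
    haemorrhagic stroke."""
    return score >= -0.75
```

With this coding, vomiting alone (score −0.5) crosses the cut-off, while headache alone (score −1.0) does not.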

  17. Malnutrition-Inflammation Score in Hemodialysis Patients

    OpenAIRE

    Behrooz Ebrahimzadehkor; Atamohammad Dorri; Abdolhamed Yapan-Gharavi

    2014-01-01

    Background: Malnutrition is a prevalent complication in patients on maintenance hemodialysis. The malnutrition-inflammation score (MIS), a comprehensive nutritional assessment tool, was used as the reference standard to examine protein-energy wasting (PEW) and inflammation in hemodialysis patients. Materials and Methods: In this descriptive-analytical study, 48 hemodialysis patients were selected by random sampling. All the patients were interviewed and the MIS of each patient was recorded. T...

  18. North Korean refugee doctors' preliminary examination scores

    Directory of Open Access Journals (Sweden)

    Sung Uk Chae

    2016-12-01

    Full Text Available Purpose: Although there have been studies emphasizing the re-education of North Korean (NK) doctors for the post-unification Korean Peninsula, a study on the content and scope of such re-education has yet to be conducted. The researchers aimed to define the content and scope of re-education through a comparative analysis of scores on the preliminary examination, which is comparable to the Korean Medical Licensing Examination (KMLE). Methods: The scores of the first and second preliminary exams were analyzed by subject using the Wilcoxon signed rank test. The passing status of the group of NK doctors on the KMLE in the recent 3 years was investigated. The multiple-choice-question (MCQ) items for which the difficulty indexes of NK doctors were lower than those of South Korean (SK) medical students by more than two times the standard deviation of the SK medical students' scores were selected to investigate the relevant reasons. Results: The average scores of nearly all subjects improved in the second exam compared with the first. The passing rate of the group of NK doctors was 75%. The number of MCQ items for which the difficulty indexes of NK doctors were lower than those of SK medical students was 51 (6.38%). NK doctors' lack of understanding of Diagnostic Techniques and Procedures, Therapeutics, Prenatal Care, and Managed Care Programs was suggested as the possible reason. Conclusion: Education in integrated courses focusing on Diagnostic Techniques and Procedures and Therapeutics, and apprenticeship-style training for clinical practice of core subjects, are needed. Special lectures on Preventive Medicine are also likely to be required.
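
The item-flagging criterion can be sketched as follows. The abstract does not state exactly which standard deviation it uses, so this reading (two SDs of the SK examinees' 0/1 item scores) is an assumption:

```python
def difficulty_index(item_responses):
    """Proportion of examinees answering the item correctly (1 = correct)."""
    return sum(item_responses) / len(item_responses)

def flag_items(nk_responses, sk_responses):
    """Flag item indexes where the NK difficulty index falls more than two
    standard deviations of the SK examinees' item scores below the SK
    index (one reading of the abstract's criterion; assumption)."""
    flagged = []
    for i, (nk, sk) in enumerate(zip(nk_responses, sk_responses)):
        p_sk = difficulty_index(sk)
        sd_sk = (sum((x - p_sk) ** 2 for x in sk) / len(sk)) ** 0.5
        if difficulty_index(nk) < p_sk - 2 * sd_sk:
            flagged.append(i)
    return flagged
```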

  19. MODELING CREDIT RISK THROUGH CREDIT SCORING

    OpenAIRE

    Adrian Cantemir CALIN; Oana Cristina POPOVICI

    2014-01-01

    Credit risk governs all financial transactions and it is defined as the risk of suffering a loss due to certain shifts in the credit quality of a counterpart. Credit risk literature gravitates around two main modeling approaches: the structural approach and the reduced form approach. In addition to these perspectives, credit risk assessment has been conducted through a series of techniques such as credit scoring models, which form the traditional approach. This paper examines the evolution of...

  20. Credit Scoring Problem Based on Regression Analysis

    OpenAIRE

    Khassawneh, Bashar Suhil Jad Allah

    2014-01-01

    ABSTRACT: This thesis provides an explanatory introduction to the regression models of data mining and contains basic definitions of key terms in the linear, multiple and logistic regression models. The aim of this study is to illustrate fitting models for the credit scoring problem using simple linear, multiple linear and logistic regression models and also to analyze the fitted model functions with statistical tools. Keywords: Data mining, linear regression, logistic regression....
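
A self-contained sketch of the logistic-regression credit-scoring model discussed above, fit by stochastic gradient descent on the log-loss. The single toy feature and data are illustrative only, not from the thesis:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic-regression scorer by stochastic gradient descent on
    the log-loss; returns a function mapping a feature vector to the
    predicted default probability."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi  # gradient of the log-loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err

    def predict(x):
        z = sum(wj * xj for wj, xj in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z))
    return predict

# Toy data: one feature (say, a debt ratio); label 1 = default.
score = train_logistic([[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1])
```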

  1. High throughput sample processing and automated scoring

    Directory of Open Access Journals (Sweden)

    Gunnar Brunborg

    2014-10-01

    Full Text Available The comet assay is a sensitive and versatile method for assessing DNA damage in cells. In the traditional version of the assay, there are many manual steps involved and few samples can be treated in one experiment. High throughput modifications have been developed during recent years, and they are reviewed and discussed here. These modifications include accelerated scoring of comets; other important elements that have been studied and adapted to high throughput are cultivation and manipulation of cells or tissues before and after exposure, and freezing of treated samples until comet analysis and scoring. High throughput methods save time and money, but they are also useful for other reasons: large-scale experiments may be performed which are otherwise not practicable (e.g., analysis of many organs from exposed animals, and human biomonitoring studies), and automation gives more uniform sample treatment and less dependence on operator performance. The high throughput modifications now available vary greatly in their versatility, capacity, complexity and costs. The bottleneck for further increases of throughput appears to be the scoring.

  2. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-02-01

    Full Text Available Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) and micro- as well as macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics during the examination of the robustness of the predictive power of these factors.
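
Predictive performance of a credit-scoring model, as examined above via receiver operating characteristics, is commonly summarized by the area under the ROC curve. A sketch using the rank (Mann-Whitney) identity:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity: the
    probability that a randomly chosen defaulter (label 1) receives a
    higher risk score than a randomly chosen non-defaulter (label 0),
    with ties counted as half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means the model separates defaulters perfectly; 0.5 is no better than chance.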

  3. Scoring ordinal variables for constructing composite indicators

    Directory of Open Access Journals (Sweden)

    Marica Manisera

    2013-05-01

    Full Text Available In order to provide composite indicators of latent variables, for example of customer satisfaction, it is opportune to identify the structure of the latent variable, in terms of the assignment of items to the subscales defining the latent variable. Adopting the reflective model, the impact of four different methods of scoring ordinal variables on the identification of the true structure of latent variables is investigated. A simulation study composed of 5 steps is conducted: (1) simulation of population data with continuous variables measuring a two-dimensional latent variable with known structure; (2) draw of a number of random samples; (3) discretization of the continuous variables according to different distributional forms; (4) quantification of the ordinal variables obtained in step (3) according to different methods; (5) construction of composite indicators and verification of the correct assignment of variables to subscales by the multiple group method and the factor analysis. Results show that the considered scoring methods have similar performances in assigning items to subscales, and that, when the latent variable is multinormal, the distributional form of the observed ordinal variables is not determinant in suggesting the best scoring method to use.

  4. Quality scores for 32,000 genomes

    DEFF Research Database (Denmark)

    Land, Miriam L.; Hyatt, Doug; Jun, Se-Ran;

    2014-01-01

    Background: More than 80% of the microbial genomes in GenBank are of ‘draft’ quality (12,553 draft vs. 2,679 finished, as of October, 2013). We have examined all the microbial DNA sequences available for complete, draft, and Sequence Read Archive genomes in GenBank as well as three other major public databases, and assigned quality scores for more than 30,000 prokaryotic genome sequences. Results: Scores were assigned using four categories: the completeness of the assembly, the presence of full-length rRNA genes, tRNA composition and the presence of a set of 102 conserved genes in prokaryotes… or not applicable. The scores highlighted organisms for which commonly used tools do not perform well. This information can be used to improve tools and to serve a broad group of users as more diverse organisms are sequenced. Unexpectedly, the comparison of predicted tRNAs across 15,000 high quality genomes showed…

  5. Validation of a new scoring system: Rapid assessment faecal incontinence score

    Institute of Scientific and Technical Information of China (English)

    Fernando de la Portilla; Arantxa Calero-Lillo; Rosa M Jiménez-Rodríguez; Maria L Reyes; Manuela Segovia-González; María Victoria Maestre; Ana M García-Cabrera

    2015-01-01

    AIM: To implement a quick and simple test, the rapid assessment faecal incontinence score (RAFIS), and show its reliability and validity. METHODS: From March 2008 through March 2010, we evaluated a total of 261 consecutive patients, including 53 patients with faecal incontinence. Demographic and comorbidity information was collected. In a single visit, patients were administered the RAFIS. The results obtained with the new score were compared with those of both the Wexner score and the faecal incontinence quality of life scale (FIQL) questionnaire. The patient completed the test without influence from the surgeon, whose role was limited to explaining the meaning of each section and how to fill it in. Reliability of the RAFIS score was measured using intra-observer agreement and Cronbach's alpha (internal consistency) coefficient. Multivariate analysis of the main components within the different scores was performed in order to determine whether all the scores measured the same factor and whether the information could be encompassed in a single factor. A sample size of 50 patients with faecal incontinence was estimated to be enough to detect a correlation of 0.55 or better at the 5% level of significance with 80% power. RESULTS: We analysed the results obtained by 53 consecutive patients with faecal incontinence (median age 61.55 ± 12.49 years) with the three scoring systems. A total of 208 healthy volunteers (median age 58.41 ± 18.41 years) without faecal incontinence were included in the study as negative controls. Pearson's correlation coefficient between "state" and "leaks" was excellent (r = 0.92, P < 0.005). Internal consistency in the comparison of "state" and "leaks" also yielded excellent correlation (Cronbach's α = 0.93). Results for each score were compared using regression analysis, and a correlation value of r = 0.98 was obtained with the Wexner score. As regards the FIQL questionnaire, the values of "r" for the different subscales of the questionnaire were: "lifestyle" r

  6. Empirical Bayes Estimates of Domain Scores under Binomial and Hypergeometric Distributions for Test Scores.

    Science.gov (United States)

    Lin, Miao-Hsiang; Hsiung, Chao A.

    1994-01-01

    Two simple empirical approximate Bayes estimators are introduced for estimating domain scores under binomial and hypergeometric distributions respectively. Criteria are established regarding use of these functions over maximum likelihood estimation counterparts. (SLD)
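    The shrinkage idea behind such empirical Bayes domain-score estimators can be illustrated with a minimal sketch for the binomial case. This is not the authors' estimator: the function name, the method-of-moments prior strength, and the fallback when observed variance does not exceed binomial sampling noise are all illustrative assumptions.

    ```python
    def eb_domain_scores(correct, n_items):
        """Approximate empirical Bayes estimates of binomial domain scores:
        shrink each examinee's observed proportion toward the group mean,
        with a beta-prior strength chosen by method of moments."""
        p = [c / n_items for c in correct]
        m = sum(p) / len(p)
        var = sum((x - m) ** 2 for x in p) / (len(p) - 1)
        # Excess variance beyond binomial sampling noise estimates the
        # between-examinee (true-score) variance; guard against zero.
        sampling = m * (1 - m) / n_items
        tau2 = max(var - sampling, 1e-12)
        k = max(m * (1 - m) / tau2 - 1, 0.0)  # implied prior "sample size"
        # Posterior mean under a Beta(k*m, k*(1-m)) prior.
        return [(c + k * m) / (n_items + k) for c in correct]
    ```

    Raw proportions far from the group mean are pulled toward it, with stronger pulls when the group looks homogeneous.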

  7. A Comparison of Sleep Scored from Electroencephalography to Sleep Scored by Wrist Actigraphy

    Science.gov (United States)

    1993-09-01

    Author: J. L. Caldwell. References include Kripke, D. F., Mullaney, D. J., Messin, S., and Wyborney, V. G. (1978), wrist actigraphic measures of sleep. From the abstract: to determine how much rest soldiers receive, various methods of monitoring activity have been used. One unobtrusive method is to use wrist activity monitors.

  8. MELD-XI Scores Correlate with Post-Fontan Hepatic Biopsy Fibrosis Scores.

    Science.gov (United States)

    Evans, William N; Acherman, Ruben J; Ciccolo, Michael L; Carrillo, Sergio A; Galindo, Alvaro; Rothman, Abraham; Winn, Brody J; Yumiaco, Noel S; Restrepo, Humberto

    2016-10-01

    We tested the hypothesis that MELD-XI values correlate with hepatic total fibrosis scores in 70 predominantly stable post-Fontan patients who underwent elective cardiac catheterization. We found a statistically significant correlation between MELD-XI values and total fibrosis scores (p = 0.003). Thus, serial MELD-XI values may be an additional useful clinical parameter for follow-up care in post-Fontan patients.

  9. Scoring function to predict solubility mutagenesis

    Directory of Open Access Journals (Sweden)

    Deutsch Christopher

    2010-10-01

    Full Text Available Abstract Background Mutagenesis is commonly used to engineer proteins with desirable properties not present in the wild type (WT protein, such as increased or decreased stability, reactivity, or solubility. Experimentalists often have to choose a small subset of mutations from a large number of candidates to obtain the desired change, and computational techniques are invaluable to make the choices. While several such methods have been proposed to predict stability and reactivity mutagenesis, solubility has not received much attention. Results We use concepts from computational geometry to define a three body scoring function that predicts the change in protein solubility due to mutations. The scoring function captures both sequence and structure information. By exploring the literature, we have assembled a substantial database of 137 single- and multiple-point solubility mutations. Our database is the largest such collection with structural information known so far. We optimize the scoring function using linear programming (LP methods to derive its weights based on training. Starting with default values of 1, we find weights in the range [0,2] so that predictions of increase or decrease in solubility are optimized. We compare the LP method to the standard machine learning techniques of support vector machines (SVM and the Lasso. Using statistics for leave-one-out (LOO, 10-fold, and 3-fold cross validations (CV for training and prediction, we demonstrate that the LP method performs the best overall. For the LOOCV, the LP method has an overall accuracy of 81%. Availability Executables of programs, tables of weights, and datasets of mutants are available from the following web page: http://www.wsu.edu/~kbala/OptSolMut.html.
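    The weight-fitting step described above (weights constrained to [0, 2], optimized so that predicted increases or decreases in solubility match the data) can be illustrated very roughly without an LP solver. The sketch below is not the OptSolMut code: it does an exhaustive search over a coarse weight grid and scores weight vectors by sign-prediction accuracy; the feature encoding and function names are hypothetical.

    ```python
    import itertools

    def sign_accuracy(weights, X, y):
        """Fraction of mutations whose predicted solubility-change sign
        (positive score = increase) matches the observed label (+1/-1).
        A score of exactly zero counts as predicting a decrease."""
        correct = 0
        for features, label in zip(X, y):
            s = sum(w * f for w, f in zip(weights, features))
            if (s > 0) == (label > 0):
                correct += 1
        return correct / len(y)

    def grid_search_weights(X, y, grid=(0.0, 0.5, 1.0, 1.5, 2.0)):
        """Exhaustive search over weight vectors in [0, 2], mimicking the
        LP's box constraints on a toy scale."""
        best_w, best_acc = None, -1.0
        for w in itertools.product(grid, repeat=len(X[0])):
            acc = sign_accuracy(w, X, y)
            if acc > best_acc:
                best_w, best_acc = w, acc
        return best_w, best_acc
    ```

    An LP (or SVM/Lasso, as compared in the paper) replaces this brute-force search with a solver that scales to the real 137-mutant dataset.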

  10. Best waveform score for diagnosing keratoconus

    Directory of Open Access Journals (Sweden)

    Allan Luz

    2013-12-01

    Full Text Available PURPOSE: To test whether corneal hysteresis (CH) and corneal resistance factor (CRF) can discriminate between keratoconus and normal eyes, and to evaluate whether the averages of two consecutive measurements perform differently from the measurement with the best waveform score (WS) for diagnosing keratoconus. METHODS: ORA measurements for one eye per individual were selected randomly from 53 normal patients and from 27 patients with keratoconus. Two groups of parameters were considered: the average (CH-Avg, CRF-Avg) and the best waveform score (CH-WS, CRF-WS) groups. The Mann-Whitney U-test was used to evaluate whether the variables had similar distributions in the normal and keratoconus groups. Receiver operating characteristic (ROC) curves were calculated for each parameter to assess the efficacy for diagnosing keratoconus, and the curves obtained for each variable were compared pairwise using the Hanley-McNeil test. RESULTS: CH-Avg, CRF-Avg, CH-WS and CRF-WS differed significantly between the normal and keratoconus groups (p<0.001). The areas under the ROC curve (AUROC) for CH-Avg, CRF-Avg, CH-WS, and CRF-WS were 0.824, 0.873, 0.891, and 0.931, respectively. CH-WS and CRF-WS had significantly better AUROCs than CH-Avg and CRF-Avg, respectively (p=0.001 and 0.002). CONCLUSION: The analysis of the biomechanical properties of the cornea with the ORA has proved to be an important aid in the diagnosis of keratoconus, regardless of the method used. The best waveform score (WS) measurements were superior to the average of consecutive ORA measurements for diagnosing keratoconus.
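    AUROC values like those above can be computed without tracing the ROC curve at all: the area equals the Mann-Whitney probability that a randomly chosen diseased eye receives a more extreme score than a randomly chosen normal one. A generic sketch of that rank-based computation (illustrative only, not tied to the ORA data):

    ```python
    def auroc(pos_scores, neg_scores):
        """Area under the ROC curve via the Mann-Whitney U statistic:
        the probability that a random positive case outranks a random
        negative case, with ties counting one half."""
        wins = 0.0
        for p in pos_scores:
            for n in neg_scores:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(pos_scores) * len(neg_scores))
    ```

    An AUROC of 0.5 means the score carries no discriminating information; 1.0 means perfect separation of the two groups.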

  11. Consider Propensity Scores to Compare Treatments

    Directory of Open Access Journals (Sweden)

    Lawrence M. Rudner

    2006-11-01

    Full Text Available The underlying question when comparing treatments is usually whether an individual would do better with treatment X than with treatment Y. But there are often practical and theoretical problems in giving people both treatments and comparing the data. This paper presents propensity score matching as a methodology that can be used to compare the effectiveness of different treatments. The method is applied to answer two questions: (1) Should examinees take a college admissions test near graduation or a few years after? and (2) Do accommodated students receive an unfair advantage? Data from a large admission testing program are used.
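    The core matching step pairs each treated observation with the untreated observation whose propensity score is closest. A minimal sketch of greedy 1:1 nearest-neighbour matching, assuming propensity scores have already been estimated (the caliper value and function name are illustrative, not from the paper):

    ```python
    def nearest_neighbor_match(treated_ps, control_ps, caliper=0.05):
        """Greedy 1:1 nearest-neighbour matching on the propensity score,
        without replacement; candidate pairs farther apart than the
        caliper are dropped. Returns (treated_index, control_index) pairs."""
        available = dict(enumerate(control_ps))
        pairs = []
        for i, ps in sorted(enumerate(treated_ps), key=lambda t: t[1]):
            if not available:
                break
            j, cps = min(available.items(), key=lambda kv: abs(kv[1] - ps))
            if abs(cps - ps) <= caliper:
                pairs.append((i, j))
                del available[j]
        return pairs
    ```

    After matching, outcomes are compared within the matched sample, approximating the unavailable "both treatments per person" comparison.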

  12. Evaluation of Stress Scores Throughout Radiological Biopsies

    Directory of Open Access Journals (Sweden)

    Turkoglu

    2016-06-01

    Full Text Available Background: Ultrasound-guided biopsy procedures are among the most prominent methods that increase the trauma, stress and anxiety experienced by patients. Objectives: Our goal was to examine the level of stress in patients waiting for radiologic biopsy procedures and to determine the stress and anxiety arising from waiting for a biopsy. Patients and Methods: This prospective study included 35 female and 65 male patients who were admitted to the interventional radiology department of Kartal Dr. Lütfi Kirdar training and research hospital, Istanbul, between 2014 and 2015. They filled out the adult resilience scale consisting of 33 items. Patients undergoing invasive radiologic interventions were grouped according to their phenotypic characteristics, education level (low, intermediate, and high), and biopsy features (biopsy localization: neck, thorax, abdomen, and bone; and the number of procedures performed, 1 or more than 1). Before the biopsy, they were also asked to complete the depression-anxiety-stress scale (DASS 42), the state anxiety scale (STAI-I), and the trait anxiety scale (STAI-II). A total of 80 patients were biopsied (20 thyroid and parathyroid, 20 thorax, 20 liver and kidney, and 20 bone biopsies). The associations of education level (primary-secondary, high school, and postgraduate) and the number of biopsies (1 and more than 1) with the level of anxiety and stress were evaluated using the above-mentioned scales. Results: Evaluation of the sociodemographic and statistical characteristics of the patients showed that patients with a biopsy in the neck region were moderately to severely depressed and stressed. In addition, the ratio of severe and extremely severe anxiety scores was significantly high. While the STAI-I and II scores ranked as neck > bone > thorax > abdomen, STAI-I was higher for neck biopsies compared with thorax and abdomen biopsies. Regarding the STAI-I and II scales, patients

  13. Fingerprint Recognition Using Minutia Score Matching

    CERN Document Server

    J, Ravi; R, Venugopal K

    2010-01-01

    The most popular biometric used to authenticate a person is the fingerprint, which is unique and permanent throughout a person's life. Minutia matching is widely used for fingerprint recognition; minutiae can be classified as ridge endings and ridge bifurcations. In this paper we propose Fingerprint Recognition using Minutia Score Matching (FRMSM). For fingerprint thinning, the Block Filter is used, which scans the image at the boundary to preserve image quality, and the minutiae are extracted from the thinned image. The false matching ratio is better compared with the existing algorithm.

  14. Validating the Interpretations and Uses of Test Scores

    Science.gov (United States)

    Kane, Michael T.

    2013-01-01

    To validate an interpretation or use of test scores is to evaluate the plausibility of the claims based on the scores. An argument-based approach to validation suggests that the claims based on the test scores be outlined as an argument that specifies the inferences and supporting assumptions needed to get from test responses to score-based…

  15. Conditional Standard Errors of Measurement for Composite Scores Using IRT

    Science.gov (United States)

    Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan

    2012-01-01

    Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…

  16. 24 CFR 902.45 - Management operations scoring and thresholds.

    Science.gov (United States)

    2010-04-01

    24 CFR 902.45, Housing and Urban Development: Public Housing Assessment System, PHAS Indicator #3: Management Operations. § 902.45 Management operations scoring and thresholds. (a) Scoring. The Management Operations Indicator score...

  17. Physics First: Impact on SAT Math Scores

    Science.gov (United States)

    Bouma, Craig E.

    Improving science, technology, engineering, and mathematics (STEM) education has become a national priority and the call to modernize secondary science has been heard. A Physics First (PF) program with the curriculum sequence of physics, chemistry, and biology (PCB) driven by inquiry- and project-based learning offers a viable alternative to the traditional curricular sequence (BCP) and methods of teaching, but requires more empirical evidence. This study determined impact of a PF program (PF-PCB) on math achievement (SAT math scores) after the first two cohorts of students completed the PF-PCB program at Matteo Ricci High School (MRHS) and provided more quantitative data to inform the PF debate and advance secondary science education. Statistical analysis (ANCOVA) determined the influence of covariates and revealed that PF-PCB program had a significant (p < .05) impact on SAT math scores in the second cohort at MRHS. Statistically adjusted, the SAT math means for PF students were 21.4 points higher than their non-PF counterparts when controlling for prior math achievement (HSTP math), socioeconomic status (SES), and ethnicity/race.

  18. Literature in focus: How to Score

    CERN Multimedia

    2006-01-01

    What is the perfect way to take a free kick? Which players are under more stress: attackers, midfielders or defenders? How do we know when a ball has crossed the goal-line? And how can teams win a penalty shoot out? From international team formations to the psychology of the pitch and the changing room... The World Cup might be a time to forget about physics for a while, but not for Ken Bray, a theoretical physicist and visiting Fellow of the Sport and Exercise Science Group at the University of Bath who specializes in the science of football. Dr Bray will visit CERN to talk exclusively about his book: How to Score. As a well-seasoned speaker and advisor to professional football teams, this presentation promises to be a fascinating and timely insight into the secret science that lies behind 'the beautiful game'. If you play or just watch football, don't miss this event! Ken Bray - How to Score Thursday 22 June at 3 p.m. (earlier than usual to avoid clashes with World Cup matches!) Central Library reading ...

  19. [Validation of a diagnostic scoring system (Ohmann score) in acute appendicitis].

    Science.gov (United States)

    Zielke, A; Sitter, H; Rampp, T A; Schäfer, E; Hasse, C; Lorenz, W; Rothmund, M

    1999-07-01

    A diagnostic scoring system, recently published by Ohmann et al. in this journal, was validated by analyzing the clinicopathological data of a consecutive series of 2,359 patients admitted for suspected acute appendicitis. The results of the scoring system were compared with the results of clinical evaluation by junior surgeons (provisional diagnosis) and senior surgeons (final clinical diagnosis). To assess the diagnostic ability of the score, accuracy and positive predictive value were defined as the major diagnostic performance parameters; the rate of theoretical negative laparotomies and the rate of diagnostic errors served as the major procedural performance parameters. Of 2,359 patients admitted for suspected acute appendicitis, 662 were proven to have acute appendicitis by histology, for a prevalence of 28%. The overall sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 0.50, 0.94, 0.77, 0.83, and 0.82 for the provisional clinical diagnosis; 0.63, 0.93, 0.77, 0.86, and 0.84 for the score; and 0.90, 0.94, 0.85, 0.96, and 0.93 for the final clinical diagnosis, respectively. Of the main diagnostic performance parameters, the accuracy of the score was significantly better than that of the provisional clinical diagnosis. For perforated appendicitis, the score demonstrated superior performance, with only 6 cases missed (0.9%). However, the number of patients with acute appendicitis, including those with perforated disease, who were not identified by the score was almost four times that of the final clinical diagnosis (245 vs 63). With regard to the main procedural performance parameter, the score resulted in a significantly smaller number of diagnostic errors than the provisional clinical investigator (P < 0.05, chi-square test). The results of this study indicate that the diagnostic scoring system might be helpful when experienced investigators or additional diagnostic modalities such as ultrasonography are not available. 
It may therefore be of value
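    The five performance figures quoted for each diagnostic pathway all follow from a single 2x2 table of diagnosis against histology. A generic sketch (the counts in the usage test are illustrative, not the study's data):

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV, NPV and accuracy from the counts
        of true positives, false positives, false negatives, true negatives."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "accuracy": (tp + tn) / (tp + fp + fn + tn),
        }
    ```

    Accuracy and PPV were the study's primary diagnostic performance parameters; the others round out the standard set.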

  20. Comparison of New Ballards score and Parkins score for gestational age estimation.

    Science.gov (United States)

    Sreekumar, Kavita; d'Lima, Annely; Nesargi, Saudamini; Rao, Suman; Bhat, Swarnarekha

    2013-08-01

    This prospective analytical study was done to compare the accuracy of the New Ballards score (NBS) and Parkins score (PS) in assessing gestational age (GA) in newborns. The GA of 284 babies was assessed by the NBS and PS within 24 hours of birth. The two methods of assessment were compared using the Bland-Altman plot. The mean difference between the two measurements was 1.530576. 95% of the values lay within the limits of agreement, which were -1.82982 and 4.890974. The two methods are found to be in acceptable agreement. The Parkins score enables us to easily assess the gestational age of babies within ±12 days, especially in sick and preterm babies.
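    The Bland-Altman quantities reported here (mean difference and limits of agreement) are straightforward to compute from paired measurements. A minimal sketch of the standard calculation (mean difference ± 1.96 SD of the differences), not the study's own code:

    ```python
    def bland_altman(a, b):
        """Bland-Altman bias and 95% limits of agreement for two paired
        measurement methods: mean difference and mean ± 1.96 * SD."""
        diffs = [x - y for x, y in zip(a, b)]
        n = len(diffs)
        mean = sum(diffs) / n
        var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
        sd = var ** 0.5
        return mean, mean - 1.96 * sd, mean + 1.96 * sd
    ```

    Agreement is judged by whether the limits of agreement are clinically acceptable, not by a correlation coefficient.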

  1. Field trials of the Baby Check score card: mothers scoring their babies at home.

    Science.gov (United States)

    Thornton, A J; Morley, C J; Green, S J; Cole, T J; Walker, K A; Bonnett, J M

    1991-01-01

    The Baby Check score card has been developed to help parents and health professionals grade the severity of acute illness in babies. This paper reports the results of two field trials in which mothers used Baby Check at home, 104 mothers scoring their babies daily for a week and 56 using it for six months. They all found Baby Check easy to use, between 68% and 81% found it useful, and 96% would recommend it to others. Over 70% of those using it daily used it very competently. Those using it infrequently did less well, suggesting that familiarity with the assessment is important. The scores obtained show that Baby Check's use would not increase the number of mothers seeking medical advice. With introduction and practice most mothers should be able to use Baby Check effectively. It should help them assess their babies' illnesses and make appropriate decisions about seeking medical advice.

  2. Analysing H(z) data using two-point diagnostics

    Science.gov (United States)

    Leaf, Kyle; Melia, Fulvio

    2017-09-01

    Measurements of the Hubble constant H(z) are increasingly being used to test the expansion rate predicted by various cosmological models. But the recent application of 2-point diagnostics, such as Om(z_i,z_j) and Omh^2(z_i,z_j), has produced considerable tension between LCDM's predictions and several observations, with other models faring even worse. Part of this problem is attributable to the continued mixing of truly model-independent measurements using the cosmic-chronometer approach, and model-dependent data extracted from BAOs. In this paper, we advance the use of 2-point diagnostics beyond their current status, and introduce new variations, which we call Delta h(z_i,z_j), that are more useful for model comparisons. But we restrict our analysis exclusively to cosmic-chronometer data, which are truly model independent. Even for these measurements, however, we confirm the conclusions drawn by earlier workers that the data have strongly non-Gaussian uncertainties, requiring the use of both "median" and "mean" statistical approaches. Our results reveal that previous analyses using 2-point diagnostics greatly underestimated the errors, thereby misinterpreting the level of tension between theoretical predictions and H(z) data. Instead, we demonstrate that as of today, only Einstein-de Sitter is ruled out by the 2-point diagnostics at a level of significance exceeding ~ 3 sigma. The R_h=ct universe is slightly favoured over the remaining models, including LCDM and Chevallier-Polarski-Linder, though all of them (other than Einstein-de Sitter) are consistent to within 1 sigma with the measured mean of the Delta h(z_i,z_j) diagnostics.
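    The Om(z_i,z_j) diagnostic mentioned above has a simple closed form in the standard literature definition (this sketch uses that common convention, which may differ in normalization from the paper's exact expressions): with h = H(z)/H0, a flat LCDM model satisfies h^2 = Om0*(1+z)^3 + 1 - Om0, so the two-point ratio below is constant and equal to Om0 for every redshift pair, and any pair-to-pair scatter beyond the errors signals a departure from LCDM.

    ```python
    def om_two_point(z_i, h_i, z_j, h_j):
        """Two-point Om diagnostic for a pair of H(z) measurements,
        with h = H(z)/H0. Constant (= Om0) for flat LCDM."""
        return (h_i ** 2 - h_j ** 2) / ((1 + z_i) ** 3 - (1 + z_j) ** 3)
    ```

    In practice each of the N(N-1)/2 pairs of cosmic-chronometer points yields one such value, and the distribution of values is compared against the model prediction.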

  3. THE CHERNOBYL ACCIDENT AND HEALTH (TWO POINTS OF VIEW

    Directory of Open Access Journals (Sweden)

    V. M. Shubik

    2011-01-01

    Full Text Available The article presents two alternative points of view on the relationship between health problems observed after the Chernobyl accident and either radiation exposure or factors of a non-radiation nature (social factors, stress, nutritional peculiarities, etc.). An analysis was made of literature data and the results of the author's own research on radiosensitive indicators of immune status, which have essential value for the immediate and long-term consequences of radiation exposure. A possible correlation is shown between health problems in the population living in regions contaminated by radionuclides and the combined effect of radiation and non-radiation factors.

  4. Prediction of true test scores from observed item scores and ancillary data.

    Science.gov (United States)

    Haberman, Shelby J; Yao, Lili; Sinharay, Sandip

    2015-05-01

    In many educational tests which involve constructed responses, a traditional test score is obtained by adding together item scores obtained through holistic scoring by trained human raters. For example, this practice was used until 2008 in the case of GRE(®) General Analytical Writing and until 2009 in the case of TOEFL(®) iBT Writing. With use of natural language processing, it is possible to obtain additional information concerning item responses from computer programs such as e-rater(®). In addition, available information relevant to examinee performance may include scores on related tests. We suggest application of standard results from classical test theory to the available data to obtain best linear predictors of true traditional test scores. In performing such analysis, we require estimation of variances and covariances of measurement errors, a task which can be quite difficult in the case of tests with limited numbers of items and with multiple measurements per item. As a consequence, a new estimation method is suggested based on samples of examinees who have taken an assessment more than once. Such samples are typically not random samples of the general population of examinees, so that we apply statistical adjustment methods to obtain the needed estimated variances and covariances of measurement errors. To examine practical implications of the suggested methods of analysis, applications are made to GRE General Analytical Writing and TOEFL iBT Writing. Results obtained indicate that substantial improvements are possible both in terms of reliability of scoring and in terms of assessment reliability.
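    The simplest instance of a best linear predictor of true score under classical test theory is Kelley's formula: shrink the observed score toward the group mean in proportion to the test's reliability. The paper's method generalizes this to multiple predictors with estimated error covariances; the one-variable sketch below is only the textbook special case, not the authors' full procedure.

    ```python
    def kelley_true_score(x, reliability, mean):
        """Kelley's best linear predictor of the true score: a weighted
        average of the observed score and the group mean, with the
        reliability coefficient as the weight on the observed score."""
        return reliability * x + (1 - reliability) * mean
    ```

    With reliability 1 the observed score is returned unchanged; with reliability 0 every examinee is predicted to sit at the group mean.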

  5. Development of the Crohn's disease digestive damage score, the Lémann score

    DEFF Research Database (Denmark)

    Pariente, Benjamin; Cosnes, Jacques; Danese, Silvio

    2011-01-01

    Crohn's disease (CD) is a chronic progressive destructive disease. Currently available instruments measure disease activity at a specific point in time. An instrument to measure cumulative structural damage to the bowel, which may predict long-term disability, is needed. The aim of this article is to outline the methods to develop an instrument that can measure cumulative bowel damage. The project is being conducted by the International Program to develop New Indexes in Crohn's disease (IPNIC) group. This instrument, called the Crohn's Disease Digestive Damage Score (the Lémann score), should take...

  6. Gait Deviation Index, Gait Profile Score and Gait Variable Score in children with spastic cerebral palsy

    DEFF Research Database (Denmark)

    Rasmussen, Helle Mätzke; Nielsen, Dennis Brandborg; Pedersen, Niels Wisbech;

    2015-01-01

    Abstract The Gait Deviation Index (GDI) and Gait Profile Score (GPS) are the most used summary measures of gait in children with cerebral palsy (CP). However, the reliability and agreement of these indices have not been investigated, limiting their clinimetric quality for research and clinical practice. The aim of this study was to investigate the intra-rater reliability and agreement of summary measures of gait (the GDI, the GPS, and the Gait Variable Score (GVS) derived from the GPS). The intra-rater reliability and agreement were investigated across two repeated sessions in 18 children aged 5...
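    In the common formulation of these measures, each GVS is the root-mean-square difference between a patient's gait-variable curve over the gait cycle and the reference (normative) mean curve, and the GPS is the RMS average of the component GVS values. The sketch below encodes only that definition; details such as the standard set of kinematic variables are omitted, and the function names are illustrative.

    ```python
    def gvs(patient_curve, reference_curve):
        """Gait Variable Score: RMS difference between a patient's curve
        (sampled over the gait cycle) and the reference mean curve."""
        n = len(patient_curve)
        return (sum((p - r) ** 2
                    for p, r in zip(patient_curve, reference_curve)) / n) ** 0.5

    def gps(gvs_values):
        """Gait Profile Score: RMS average of the component GVS values."""
        return (sum(v ** 2 for v in gvs_values) / len(gvs_values)) ** 0.5
    ```

    A higher GPS indicates a gait pattern further from the normative reference; reliability studies like this one ask how stable these numbers are across repeat sessions.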

  7. Do efficiency scores depend on input mix?

    DEFF Research Database (Denmark)

    Asmild, Mette; Hougaard, Jens Leth; Kronborg, Dorte

    2013-01-01

    In this paper we examine the possibility of using the standard Kruskal-Wallis (KW) rank test to evaluate whether the distribution of efficiency scores resulting from Data Envelopment Analysis (DEA) is independent of the input (or output) mix of the observations. Since the DEA frontier is estimated, many standard assumptions for evaluating the KW test statistic are violated. Therefore, we propose to explore its statistical properties by the use of simulation studies. The simulations are performed conditional on the observed input mixes. The method, unlike existing approaches ... When the assumption of mix independence is rejected, the implication is that it is, for example, impossible to determine whether machine-intensive projects are more or less efficient than labor-intensive projects.

  8. ABOUT PSYCHOLOGICAL VARIABLES IN APPLICATION SCORING MODELS

    Directory of Open Access Journals (Sweden)

    Pablo Rogers

    2015-01-01

    Full Text Available The purpose of this study is to investigate the contribution of psychological variables and scales suggested by Economic Psychology to predicting individuals' default. A sample of 555 individuals completed a self-completion questionnaire composed of psychological variables and scales. Using logistic regression, the following psychological and behavioral characteristics were found to be associated with the group of individuals in default: (a) negative dimensions related to money (suffering, inequality and conflict); (b) high scores on the self-efficacy scale, probably indicating a greater degree of optimism and over-confidence; (c) buyers classified as compulsive; (d) individuals who consider it necessary to give gifts to children and friends on special dates, even though many people consider this a luxury; and (e) problems of self-control identified in individuals who drink an average of more than four glasses of alcoholic beverage a day.

  9. Tools & techniques--statistics: propensity score techniques.

    Science.gov (United States)

    da Costa, Bruno R; Gahl, Brigitta; Jüni, Peter

    2014-10-01

    Propensity score (PS) techniques are useful if the number of potential confounding pretreatment variables is large and the number of analysed outcome events is rather small so that conventional multivariable adjustment is hardly feasible. Only pretreatment characteristics should be chosen to derive PS, and only when they are probably associated with outcome. A careful visual inspection of PS will help to identify areas of no or minimal overlap, which suggests residual confounding, and trimming of the data according to the distribution of PS will help to minimise residual confounding. Standardised differences in pretreatment characteristics provide a useful check of the success of the PS technique employed. As with conventional multivariable adjustment, PS techniques cannot account for confounding variables that are not or are only imperfectly measured, and no PS technique is a substitute for an adequately designed randomised trial.
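    The balance check described above (standardized differences in pretreatment characteristics) has a standard formula: the difference in group means divided by the pooled standard deviation. A minimal sketch of that check for one covariate; the 0.1 balance threshold mentioned in the comment is a common rule of thumb, not a claim from this article:

    ```python
    def standardized_difference(treated, control):
        """Standardized mean difference of a pretreatment covariate:
        (mean_treated - mean_control) / pooled SD. Absolute values below
        roughly 0.1 are often taken to indicate adequate balance."""
        def mean(xs):
            return sum(xs) / len(xs)

        def var(xs):
            m = mean(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

        pooled_sd = ((var(treated) + var(control)) / 2) ** 0.5
        return (mean(treated) - mean(control)) / pooled_sd
    ```

    Computing this before and after PS adjustment shows whether the technique actually reduced imbalance in each measured covariate.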

  10. Reproducibility of scoring emphysema by HRCT

    Energy Technology Data Exchange (ETDEWEB)

    Malinen, A.; Partanen, K.; Rytkoenen, H.; Vanninen, R. [Kuopio Univ. Hospital (Finland). Dept. of Clinical Radiology; Erkinjuntti-Pekkanen, R. [Kuopio Univ. Hospital (Finland). Dept. of Pulmonary Diseases

    2002-04-01

    Purpose: We evaluated the reproducibility of three visual scoring methods of emphysema and compared these methods with pulmonary function tests (VC, DLCO, FEV1 and FEV%) among farmer's lung patients and farmers. Material and Methods: Three radiologists examined high-resolution CT images of farmer's lung patients and their matched controls (n=70) for chronic interstitial lung diseases. Intraobserver reproducibility and interobserver variability were assessed for three methods: severity, Sanders' (extent) and Sakai. Pulmonary function tests such as spirometry and diffusing capacity were measured. Results: Intraobserver kappa values for all three methods were good (0.51-0.74). Interobserver kappa varied from 0.35 to 0.72. The Sanders' and severity methods correlated strongly with pulmonary function tests, especially DLCO and FEV1. Conclusion: The Sanders' method proved to be reliable in evaluating emphysema, in terms of good consistency of interpretation and good correlation with pulmonary function tests.
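    Observer agreement values like those reported here are typically Cohen's kappa: observed agreement corrected for the agreement expected by chance. A generic sketch of the two-rater calculation (this is the standard statistic, not the study's own software):

    ```python
    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters over the same cases:
        (observed agreement - chance agreement) / (1 - chance agreement)."""
        n = len(rater_a)
        po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        categories = set(rater_a) | set(rater_b)
        # Chance agreement from each rater's marginal category frequencies.
        pe = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                 for c in categories)
        return (po - pe) / (1 - pe)
    ```

    Kappa of 1 is perfect agreement and 0 is chance-level agreement; values in the 0.5-0.7 range, as in this study, are conventionally read as moderate to good.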

  11. Vertebral heart scores in eight dog breeds.

    Science.gov (United States)

    Jepsen-Grant, K; Pollard, R E; Johnson, L R

    2013-01-01

    The vertebral heart score (VHS) measurement is commonly used to provide a more objective measure of cardiomegaly in dogs. However, several studies have shown significant breed variations from the value previously established by Buchanan and Bücheler (9.7 ± 0.5). This study describes VHS measurements in Pug, Pomeranian, Yorkshire Terrier, Dachshund, Bulldog, Shih Tzu, Lhasa Apso, and Boston Terrier dog breeds. Dogs with two or three view thoracic radiographs, no subjective radiographic evidence of cardiomegaly, and no physical examination findings of heart murmurs or gallop rhythms were included in the study. The Pug, Pomeranian, Bulldog, and Boston Terrier groups were found to have a VHS significantly greater than 9.7 ± 0.5, in particular the Bulldog (P = 0.028) and Boston Terrier (P = 0.0004) groups. Thoracic depth to width ratio did not have a significant effect on VHS.

  12. Proportional Distribution of Patient Satisfaction Scores by Clinical Service

    Directory of Open Access Journals (Sweden)

    Michael S Leonard MD, MS

    2015-11-01

    Full Text Available The Proportional Responsibility for Integrated Metrics by Encounter (PRIME) model is a novel means of allocating patient experience scores based on the proportion of each physician's involvement in care. Secondary analysis was performed on Hospital Consumer Assessment of Healthcare Providers and Systems surveys from a tertiary care academic institution. The PRIME model was used to calculate specialty-level scores based on encounters during a hospitalization. Standard and PRIME scores for the services with the most inpatient encounters were calculated. Hospital medicine had the most discharges and encounters. The standard model generated a score of 74.6, while the PRIME model yielded a score of 74.9. The standard model could not generate a score for anesthesiology due to the lack of returned surveys, but the PRIME model yielded a score of 84.2. The PRIME model provides a more equitable method for distributing satisfaction scores and can generate scores for specialties that the standard model cannot.
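    The allocation idea can be sketched in a few lines: split one hospitalization's score across services in proportion to each service's share of encounters. This is my reading of "proportional responsibility", not the published PRIME algorithm, and the service names and numbers in the usage test are invented.

    ```python
    def prime_allocate(score, encounters):
        """Allocate a single hospitalization's satisfaction score across
        services in proportion to each service's encounter count.
        `encounters` maps service name -> number of encounters."""
        total = sum(encounters.values())
        return {svc: score * n / total for svc, n in encounters.items()}
    ```

    Because every service with at least one encounter receives a share, specialties with few or no returned surveys of their own (such as anesthesiology above) still accumulate scores.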

  13. Propensity score matching: A conceptual review for radiology researchers

    Energy Technology Data Exchange (ETDEWEB)

    Baek, Seung Hee; Park, Seong Ho; Park, Yu Rang; Kim, Hwa Jung [Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Won, Eugene [Dept. of Radiology, NYU Langone Medical Center, New York (United States)

    2015-04-15

    The propensity score is defined as the probability of each individual study subject being assigned to a group of interest for comparison purposes. Propensity score adjustment is a method of ensuring an even distribution of confounders between groups, thereby increasing between group comparability. Propensity score analysis is therefore an increasingly applied statistical method in observational studies. The purpose of this article was to provide a step-by-step nonmathematical conceptual guide to propensity score analysis with particular emphasis on propensity score matching. A software program code used for propensity score matching was also presented.
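    The first step of the guide, estimating each subject's probability of group assignment, is usually done with logistic regression. As a self-contained stand-in for the usual statsmodels/scikit-learn call (the learning rate, epoch count, and function name here are illustrative), a tiny logistic regression fitted by batch gradient descent:

    ```python
    import math

    def fit_propensity(X, treated, lr=0.1, epochs=2000):
        """Estimate propensity scores with a minimal logistic regression
        fitted by batch gradient descent. Returns P(treated | covariates)
        for every row of X."""
        n, n_feat = len(X), len(X[0])
        w, b = [0.0] * n_feat, 0.0
        for _ in range(epochs):
            gw, gb = [0.0] * n_feat, 0.0
            for x, t in zip(X, treated):
                z = sum(wi * xi for wi, xi in zip(w, x)) + b
                p = 1.0 / (1.0 + math.exp(-z))
                err = p - t  # log-loss gradient w.r.t. the linear predictor
                for k in range(n_feat):
                    gw[k] += err * x[k]
                gb += err
            w = [wi - lr * g / n for wi, g in zip(w, gw)]
            b -= lr * gb / n
        return [1.0 / (1.0 + math.exp(-(sum(wi * xi
                                            for wi, xi in zip(w, x)) + b)))
                for x in X]
    ```

    The fitted scores then feed the matching (or stratification, weighting, or adjustment) step the article walks through.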

  14. Rates of computational errors for scoring the SIRS primary scales.

    Science.gov (United States)

    Tyner, Elizabeth A; Frederick, Richard I

    2013-12-01

    We entered item scores for the Structured Interview of Reported Symptoms (SIRS; Rogers, Bagby, & Dickens, 1991) into a spreadsheet and compared computed scores with those hand-tallied by examiners. We found that about 35% of the tests had at least 1 scoring error. Of SIRS scale scores tallied by examiners, about 8% were incorrectly summed. When the errors were corrected, only 1 SIRS classification was reclassified in the fourfold scheme used by the SIRS. We note that mistallied scores on psychological tests are common, and we review some strategies for reducing scale score errors on the SIRS. (c) 2013 APA, all rights reserved.

  15. ITC Guidelines on Quality Control in Scoring, Test Analysis, and Reporting of Test Scores

    Science.gov (United States)

    Allalouf, Avi

    2014-01-01

    The Quality Control (QC) Guidelines are intended to increase the efficiency, precision, and accuracy of the scoring, analysis, and reporting process of testing. The QC Guidelines focus on large-scale testing operations where multiple forms of tests are created for use on set dates. However, they may also be used for a wide variety of other testing…

  17. Symptom scoring systems to diagnose distal polyneuropathy in diabetes : the Diabetic Neuropathy Symptom score

    NARCIS (Netherlands)

    Meijer, J.W.G.; Smit, A.J.; van Sonderen, E.; Groothoff, J.W.; Eisma, W.H.; Links, T.P.

    2002-01-01

    AIMS: To provide one of the diagnostic categories for distal diabetic polyneuropathy, several symptom scoring systems are available, which are often extensive and lacking in validation. We validated a new four-item Diabetic Neuropathy Symptom (DNS) score for diagnosing distal diabetic polyneuropathy. MET

  18. Braden Scale cumulative score versus subscale scores: are we missing opportunities for pressure ulcer prevention?

    Science.gov (United States)

    Gadd, Molly M

    2014-01-01

    Hospital-acquired pressure ulcer incidence rates continue to rise in the United States in the acute care setting despite efforts to extinguish them, and pressure ulcers are a nursing-sensitive quality indicator. The Braden Scale for Predicting Pressure Sore Risk instrument has been shown to be a valid and reliable instrument for assessing pressure ulcer risk. This case study represented 1 patient out of a chart audit that reviewed 20 patients with confirmed hospital-acquired pressure ulcers. The goal of the audit was to determine whether these ulcers might be avoided if preventive interventions based on Braden subscale scores versus the cumulative score were implemented. This case study describes a patient who, deemed at low risk for pressure ulcer development based on cumulative Braden Scale, may have benefited from interventions based on the subscale scores of sensory perception, activity, and mobility. Further research is needed to determine whether interventions based on subscales may be effective for preventing pressure ulcers when compared to a protocol based exclusively on the cumulative score.

  19. Discrepancy between coronary artery calcium score and HeartScore in middle-aged Danes

    DEFF Research Database (Denmark)

    Diederichsen, Axel Cosmus Pyndt; Sand, Niels Peter; Nørgaard, Bjarne;

    2012-01-01

    Background: Coronary artery calcification (CAC) is an independent and incremental risk marker. This marker has previously not been compared to the HeartScore risk model. Design: A random sample of 1825 citizens (men and women, 50 or 60 years of age) was invited for screening. Methods: Using...

  20. Proportion and factors associated with low fifth minute Apgar score ...

    African Journals Online (AJOL)

    Proportion and factors associated with low fifth minute Apgar score among ... with low Apgar scores are at an increased risk of perinatal morbidity and mortality. ... of meconium stained liquor, induced/ augmented labor and low birth weight.

  1. Field trials of the Baby Check score card in hospital.

    Science.gov (United States)

    Thornton, A J; Morley, C J; Cole, T J; Green, S J; Walker, K A; Rennie, J M

    1991-01-01

    The Baby Check score card was used by junior paediatric doctors to assess 262 babies under 6 months old presenting to hospital. The duty registrar and two consultants independently graded the severity of each baby's illness without knowledge of the Baby Check score. The registrars assessed the babies at presentation while the consultants reviewed the notes. The consultants and registrars agreed about the need for hospital admission only about 75% of the time. The score's sensitivity and predictive values were similar to those of the registrars' grading. The score's specificity was 87%. Babies with serious diagnosis scored high, while minor illnesses scored low. The predictive value for requiring hospital admission increased with the score, rising to 100% for scores of 20 or more. The appropriate use of Baby Check should improve the detection of serious illness. It could also reduce the number of babies admitted with minor illness, without putting them at increased risk.

  2. Mangled extremity severity score in children.

    Science.gov (United States)

    Fagelman, Mitchell F; Epps, Howard R; Rang, Mercer

    2002-01-01

    Treatment of the severely traumatized or mangled lower extremity poses significant challenges. The Mangled Extremity Severity Score (MESS) is a scale that uses objective criteria to assist with acute management decisions. Most research on the MESS has been in adults or combined series with few children. The study was performed to investigate the MESS in children exclusively. The MESS was applied retrospectively to 36 patients with grades IIIB and IIIC open lower extremity fractures collected from two level 1 pediatric trauma centers. Patients were divided into limb salvage and primary amputation groups based on the decision of the treating surgeon. In the salvage group there were 18 grade IIIB fractures and 10 grade IIIC fractures. The MESS prediction was accurate in 93% of the injured limbs. In the amputation group eight limbs met the inclusion criteria; the MESS agreed with the treating surgeon in 63% of cases. These findings suggest the MESS should be considered when managing a child with severe lower extremity trauma.

  3. Essays on probability elicitation scoring rules

    Science.gov (United States)

    Firmino, Paulo Renato A.; dos Santos Neto, Ademir B.

    2012-10-01

    In probability elicitation exercises it has been usual to consider scoring rules (SRs) to measure the performance of experts when inferring about a given unknown, Θ, for which the true value, θ*, is (or will shortly be) known to the experimenter. Mathematically, SRs quantify the discrepancy between f(θ) (the distribution reflecting the expert's uncertainty about Θ) and d(θ), a zero-one indicator function of the observation θ*. Thus, a remarkable characteristic of SRs is to contrast the expert's beliefs with the observation θ*. The present work aims at extending SR concepts and formulas to the cases where Θ is aleatory, highlighting the advantages of goodness-of-fit and entropy-like measures. Conceptually, it is argued that besides evaluating the personal performance of the expert, SRs may also play a role when comparing the elicitation processes adopted to obtain f(θ). Mathematically, it is proposed to replace d(θ) by g(θ), the distribution that models the randomness of Θ, and to also consider goodness-of-fit and entropy-like metrics, leading to SRs that measure the adherence of f(θ) to g(θ). The implications of this alternative perspective are discussed and illustrated by means of case studies based on the simulation of controlled experiments. The usefulness of the proposed approach for evaluating the performance of experts and elicitation processes is investigated.
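The contrast the abstract draws, scoring f(θ) against the indicator d(θ) versus scoring it against a distribution g(θ), can be made concrete with the quadratic (Brier) rule. A sketch under our own naming, for a discrete Θ:

```python
def quadratic_score(f, observed):
    """Classical quadratic (Brier) rule: squared distance between the forecast
    f and the zero-one indicator d of the observed outcome theta*."""
    return sum((p - (1.0 if outcome == observed else 0.0)) ** 2
               for outcome, p in f.items())

def quadratic_adherence(f, g):
    """Variant in the spirit of the essay: replace the indicator d(theta) with
    the distribution g(theta) that models the randomness of an aleatory Theta."""
    support = set(f) | set(g)
    return sum((f.get(x, 0.0) - g.get(x, 0.0)) ** 2 for x in support)

expert = {"low": 0.7, "high": 0.3}   # expert's elicited distribution f
truth  = {"low": 0.6, "high": 0.4}   # distribution g governing Theta

score_vs_observation = quadratic_score(expert, "low")       # 0.3^2 + 0.3^2 = 0.18
score_vs_distribution = quadratic_adherence(expert, truth)  # 0.1^2 + 0.1^2 = 0.02
```

A single observed θ* penalises the expert even when f is close to the generating distribution; the adherence variant rewards that closeness directly, which is the essay's point.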

  4. Fast network community detection by SCORE

    CERN Document Server

    Jin, Jiashun

    2012-01-01

    Consider a network where the nodes split into K different communities. The community labels for the nodes are unknown and it is of major interest to estimate them (i.e., community detection). Degree Corrected Block Model (DCBM) is a popular network model. How to detect communities with the DCBM is an interesting problem, where the main challenge lies in the degree heterogeneity. We propose a new approach to community detection which we call the Spectral Clustering On Ratios-of-Eigenvectors (SCORE). Compared to classical spectral methods, the main innovation is to use the entry-wise ratios between the first leading eigenvector and each of the other leading eigenvectors for clustering. The central surprise is, the effect of degree heterogeneity is largely ancillary, and can be effectively removed by taking entry-wise ratios between the leading eigenvectors. The method is successfully applied to the web blogs data and the karate club data, with error rates of 58/1222 and 1/34, respectively. These results are muc...
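The core recipe, take the leading eigenvectors of the adjacency matrix, divide each by the first entry-wise, and cluster the resulting ratios, can be sketched without a linear-algebra library. An illustrative toy with K = 2 planted communities, using power iteration with deflation for the two leading eigenvectors and a simple sign split in place of the k-means step the paper uses:

```python
import math, random

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(mult, n, iters=1000, seed=1):
    """Leading eigenpair of the symmetric linear operator `mult`."""
    rng = random.Random(seed)
    v = [rng.random() for _ in range(n)]
    for _ in range(iters):
        w = mult(v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(vi * wi for vi, wi in zip(v, mult(v)))
    return lam, v

# Two 4-cliques joined by a single edge (0, 4): a planted 2-community graph.
n = 8
A = [[0.0] * n for _ in range(n)]
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i][j] = 1.0
A[0][4] = A[4][0] = 1.0  # one cross-community edge

lam1, v1 = power_iteration(lambda v: matvec(A, v), n)

def deflated(v):
    """Apply A - lam1 * v1 v1^T, whose leading eigenvector is A's second."""
    proj = sum(a * b for a, b in zip(v1, v))
    return [x - lam1 * proj * u for x, u in zip(matvec(A, v), v1)]

lam2, v2 = power_iteration(deflated, n)

# SCORE: entry-wise ratios cancel the degree heterogeneity; split by sign.
ratios = [b / a for a, b in zip(v1, v2)]
labels = [1 if r > 0 else 0 for r in ratios]
```

For real networks the paper clusters the ratio vectors with k-means over K - 1 ratio coordinates; the sign split above is the K = 2 special case.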

  5. Gambling scores in earthquake prediction analysis

    CERN Document Server

    Molchan, G

    2010-01-01

    The number of successes 'n' and the normalized measure of space-time alarm 'tau' are commonly used to characterize the strength of an earthquake prediction method and the significance of prediction results. To better evaluate the forecaster's skill, it has recently been suggested to use a new characteristic, the gambling score R, which incorporates the difficulty of guessing each target event by using different weights for different alarms. We expand the class of R-characteristics and apply these to the analysis of results of the M8 prediction algorithm. We show that the level of significance 'alpha' strongly depends (1) on the choice of weighting alarm parameters, (2) on the partitioning of the entire alarm volume into component parts, and (3) on the accuracy of the spatial rate of target events, m(dg). These tools are at the disposal of the researcher and can affect the significance estimate in either direction. All the R-statistics discussed here corroborate that the prediction of 8.0 ≤ M < 8.5 events by...

  6. Genome-wide linkage analysis of severe, early-onset chronic obstructive pulmonary disease: airflow obstruction and chronic bronchitis phenotypes.

    Science.gov (United States)

    Silverman, Edwin K; Mosley, Jonathan D; Palmer, Lyle J; Barth, Matthew; Senter, Jody M; Brown, Alison; Drazen, Jeffrey M; Kwiatkowski, David J; Chapman, Harold A; Campbell, Edward J; Province, Michael A; Rao, D C; Reilly, John J; Ginns, Leo C; Speizer, Frank E; Weiss, Scott T

    2002-03-15

    Familial aggregation of chronic obstructive pulmonary disease (COPD) has been demonstrated, but linkage analysis of COPD-related phenotypes has not been reported previously. An autosomal 10 cM genome-wide scan of short tandem repeat (STR) polymorphic markers was analyzed for linkage to COPD-related phenotypes in 585 members of 72 pedigrees ascertained through severe, early-onset COPD probands without severe alpha1-antitrypsin deficiency. Multipoint non-parametric linkage analysis (using the ALLEGRO program) was performed for qualitative phenotypes including moderate airflow obstruction [forced expiratory volume at one second (FEV(1)) < 60% predicted, FEV(1)/FVC < 90% predicted], mild airflow obstruction (FEV(1) < 80% predicted, FEV(1)/FVC < 90% predicted) and chronic bronchitis. The strongest evidence for linkage in all subjects was observed at chromosomes 12 (LOD = 1.70) and 19 (LOD = 1.54) for moderate airflow obstruction, chromosomes 8 (LOD = 1.36) and 19 (LOD = 1.09) for mild airflow obstruction and chromosomes 19 (LOD = 1.21) and 22 (LOD = 1.37) for chronic bronchitis. Restricting analysis to cigarette smokers only provided increased evidence for linkage of mild airflow obstruction and chronic bronchitis to several genomic regions; for mild airflow obstruction in smokers only, the maximum LOD was 1.64 at chromosome 19, whereas for chronic bronchitis in smokers only, the maximum LOD was 2.08 at chromosome 22. On chromosome 12p, 12 additional STR markers were genotyped, which provided additional support for an airflow obstruction locus in that region with a non-parametric multipoint approach for moderate airflow obstruction (LOD = 2.13) and mild airflow obstruction (LOD = 1.43). Using a dominant model with the STR markers on 12p, two point parametric linkage analysis of all subjects demonstrated a maximum LOD score of 2.09 for moderate airflow obstruction and 2.61 for mild airflow obstruction. In smokers only, the maximum two point LOD score for mild airflow

  7. Apgar Scores: Examining the Long-term Significance

    OpenAIRE

    Montgomery, Kristen S.

    2000-01-01

    The Apgar scoring system was intended as an evaluative measure of a newborn's condition at birth and of the need for immediate attention. In the most recent past, individuals have unsuccessfully attempted to link Apgar scores with long-term developmental outcomes. This practice is not appropriate, as the Apgar score is currently defined. Expectant parents need to be aware of the limitations of the Apgar score and its appropriate uses.

  8. Risk scores-the modern Oracle of Delphi?

    Science.gov (United States)

    Kronenberg, Florian; Schwaiger, Johannes P

    2017-03-01

    Recently, 4 new risk scores for the prediction of mortality and cardiovascular events were especially tailored for hemodialysis patients; these scores performed much better than previous scores. Tripepi et al. found that these risk scores were even more predictive for all-cause and cardiovascular death than the measurement of the left ventricular mass index was. Nevertheless, the investigation of left ventricular mass and function has its own place for other reasons.

  9. Understanding and Using Factor Scores: Considerations for the Applied Researcher

    OpenAIRE

    Christine DiStefano; Min Zhu; Diana Mindrila

    2009-01-01

    Following an exploratory factor analysis, factor scores may be computed and used in subsequent analyses. Factor scores are composite variables which provide information about an individual's placement on the factor(s). This article discusses popular methods to create factor scores under two different classes: refined and non-refined. Strengths and considerations of the various methods, and for using factor scores in general, are discussed.

  10. Understanding and Using Factor Scores: Considerations for the Applied Researcher

    Directory of Open Access Journals (Sweden)

    Christine DiStefano

    2009-10-01

    Following an exploratory factor analysis, factor scores may be computed and used in subsequent analyses. Factor scores are composite variables which provide information about an individual's placement on the factor(s). This article discusses popular methods to create factor scores under two different classes: refined and non-refined. Strengths and considerations of the various methods, and for using factor scores in general, are discussed.

  11. Impact of clinical, psychological, and social factors on decreased Tinetti test score in community-living elderly subjects: a prospective study with two-year follow-up.

    Science.gov (United States)

    Manckoundia, Patrick; Thomas, Frédérique; Buatois, Séverine; Guize, Louis; Jégo, Bertrand; Aquino, Jean-Pierre; Benetos, Athanase

    2008-06-01

    Balance and gait are essential to maintain physical autonomy, particularly in elderly people. Thus the detection of risk factors of balance and gait impairment appears necessary in order to prevent falls and dependency. The objective of this study was to analyze the impact of demographic, social, clinical, psychological, and biological parameters on the decline in balance and gait assessed by the Tinetti test (TT) after a two-year follow-up. This prospective study was conducted among community-living, young elderly volunteers in the centre "Investigations Preventives et Cliniques" and "Observatoire De l'Age" (Paris, France). Three hundred and forty-four participants aged 63.5 on average were enrolled and performed the TT twice, once at inclusion and again two years later. After the two-year follow-up, two groups were constituted according to whether or not there was a decrease in the TT score: the "TT no-deterioration" group comprised subjects with a decrease of less than two points and the "TT deterioration" group comprised those with a decrease of two points or more. Selected demographic, social, clinical, psychological, and biological parameters for the two groups were then compared. Statistical analysis showed that female sex, advanced age, high body mass index, osteoarticular pain, and a high level of anxiety all have a negative impact on TT score. Knowledge of predictive factors of the onset or worsening of balance and gait disorders could allow clinicians to detect young elderly people who should benefit from a specific prevention program.

  12. Association of dietary diversity score with anxiety in women.

    Science.gov (United States)

    Poorrezaeian, Mina; Siassi, Fereydoun; Qorbani, Mostafa; Karimi, Javad; Koohdani, Fariba; Asayesh, Hamid; Sotoudeh, Gity

    2015-12-15

    Evidence suggests that diet plays an important role in the development of mental disorders, especially anxiety. The dietary diversity score is an indicator for assessing diet quality. However, its association with anxiety has not been investigated. The aim of this study was to examine the association of dietary diversity score with anxiety. A cross-sectional study was conducted among 360 women attending health centers in the south of Tehran in 2014. General information, among other data, was collected. Weight, height and waist circumference were measured and body mass index (BMI) was calculated. Dietary intake and anxiety score were assessed using a 24-h dietary recall and the Depression, Anxiety, Stress Scales (DASS) questionnaire, respectively. The dietary diversity score was computed according to the guidelines of the FAO. About 35% of the participants were found to exhibit anxiety. The dietary diversity score in 12.5% of the subjects was between 1 and 3 (low dietary diversity score) but 87.5% scored between 4 and 7 (high dietary diversity score). The adjusted mean anxiety score in subjects with a high dietary diversity score was significantly lower than in those with a low dietary diversity score. The dietary diversity score was found to be inversely associated with anxiety. However, the causality between anxiety and dietary diversity could not be determined.

  13. Demystifying the GMAT: Where Do Scale Scores Come from?

    Science.gov (United States)

    Rudner, Lawrence M.

    2012-01-01

    GMAT (Graduate Management Admission Test) scaled scores convey the same level of ability over time, and GMAT percentiles convey the competitiveness of scores relative to today's GMAT test takers. In an earlier column, the author discussed the role of the GMAT scaled scores and percentiles. Here, he gets more technical and discusses how GMAT scaled…

  14. Understanding and Using Factor Scores: Considerations for the Applied Researcher

    Science.gov (United States)

    DiStefano, Christine; Zhu, Min; Mindrila, Diana

    2009-01-01

    Following an exploratory factor analysis, factor scores may be computed and used in subsequent analyses. Factor scores are composite variables which provide information about an individual's placement on the factor(s). This article discusses popular methods to create factor scores under two different classes: refined and non-refined. Strengths and…
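Of the two classes the article discusses, the non-refined methods are simple enough to illustrate directly: a raw sum score over the items that load on a factor, and a loading-weighted sum score. The item set and loadings below are invented for illustration only; refined methods (e.g. regression scores) need the full factor solution and are not sketched here:

```python
def sum_score(responses, items):
    """Non-refined factor score: sum of raw responses on the factor's salient items."""
    return sum(responses[i] for i in items)

def weighted_sum_score(responses, loadings):
    """Non-refined variant: responses weighted by their factor loadings."""
    return sum(responses[i] * w for i, w in loadings.items())

# Invented example: a 5-item survey where items 0-2 load on the factor of interest.
responses = [4, 5, 3, 2, 1]           # one respondent's raw item scores
items = [0, 1, 2]
loadings = {0: 0.8, 1: 0.7, 2: 0.4}   # hypothetical standardized loadings

raw = sum_score(responses, items)                   # 4 + 5 + 3 = 12
weighted = weighted_sum_score(responses, loadings)  # 0.8*4 + 0.7*5 + 0.4*3 ≈ 7.9
```

Weighting preserves the relative importance of the items, one of the considerations the article weighs against the simplicity of the raw sum.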

  15. Comparison of WPPSI and VMI Scores of Intellectually Bright Children.

    Science.gov (United States)

    Hawthorne, Linda White; And Others

    1983-01-01

    Standard scores of 233 gifted four to six year olds on the Geometric Design subtest of the Wechsler Preschool and Primary Scale of Intelligence correlated significantly with standard scores on the Development Test of Visual Motor Integration (VMI), but the VMI yielded significantly lower scores than Geometric Design. (Author/CL)

  16. Regression Discontinuity Designs with Multiple Rating-Score Variables

    Science.gov (United States)

    Reardon, Sean F.; Robinson, Joseph P.

    2012-01-01

    In the absence of a randomized control trial, regression discontinuity (RD) designs can produce plausible estimates of the treatment effect on an outcome for individuals near a cutoff score. In the standard RD design, individuals with rating scores higher than some exogenously determined cutoff score are assigned to one treatment condition; those…

  17. Discrepancy Score Reliabilities in the WISC-IV Standardization Sample

    Science.gov (United States)

    Glass, Laura A.; Ryan, Joseph J.; Charter, Richard A.; Bartels, Jared M.

    2009-01-01

    This investigation provides internal consistency reliabilities for Wechsler Intelligence Scale for Children--Fourth Edition (WISC-IV) subtest and index discrepancy scores using the standardization sample as the data source. Reliabilities range from 0.50 to 0.82 for subtest discrepancy scores and from 0.78 to 0.88 for index discrepancy scores.…

  18. Statistical Assessment of Estimated Transformations in Observed-Score Equating

    Science.gov (United States)

    Wiberg, Marie; González, Jorge

    2016-01-01

    Equating methods make use of an appropriate transformation function to map the scores of one test form into the scale of another so that scores are comparable and can be used interchangeably. The equating literature shows that the ways of judging the success of an equating (i.e., the score transformation) might differ depending on the adopted…

  19. LCA single score analysis of man-made cellulose fibres

    NARCIS (Netherlands)

    Shen, L.; Patel, M.K.

    2010-01-01

    In this study, the LCA report “Life Cycle assessment of man-made cellulose fibres” [3] is extended to the single score analysis in order to provide an additional basis for decision making. The single score analysis covers 9 to 11 environmental impact categories. Three single score methods (Single Sc

  20. Dichotomous decisions based on dichotomously scored items: a case study

    NARCIS (Netherlands)

    Mellenbergh, G.J.; Koppelaar, H.; Linden, van der W.J.

    1977-01-01

    In a course in elementary statistics for psychology students using criterion-referenced achievement tests, the total test score, based on dichotomously scored items, was used for classifying students into those who passed and those who failed. The score on a test is considered as depending on a late

  1. Personality and Examination Score Correlates of Abnormal Psychology Course Ratings.

    Science.gov (United States)

    Pauker, Jerome D.

    The relationship between the ratings students assigned to an evening undergraduate abnormal psychology class and their scores on objective personality tests and course examinations was investigated. Students (N=70) completed the MMPI and made global ratings of the course; these scores were correlated separately by sex with the T scores of 13 MMPI…

  2. Confidence Scoring of Speaking Performance: How Does Fuzziness become Exact?

    Science.gov (United States)

    Jin, Tan; Mak, Barley; Zhou, Pei

    2012-01-01

    The fuzziness of assessing second language speaking performance raises two difficulties in scoring speaking performance: "indistinction between adjacent levels" and "overlap between scales". To address these two problems, this article proposes a new approach, "confidence scoring", to deal with such fuzziness, leading to "confidence" scores between…

  4. A Procedure for Linear Polychotomous Scoring of Test Items

    Science.gov (United States)

    1993-10-01

    associated with the response categories of test items. When tests are scored using these scoring weights, test reliability increases. The new procedure is...program POLY. The example demonstrates how polyweighting can be used to calibrate and score test items drawn from an item bank that is too large to

  5. Polygenic risk scores for schizophrenia and bipolar disorder predict creativity

    NARCIS (Netherlands)

    Power, R.A.; Steinberg, S.; Bjornsdottir, G.; Rietveld, C.A.; Abdellaoui, A.; Nivard, M.M.; Johannesson, M.; Galesloot, T.E.; Hottenga, J.J.; Willemsen, G.; Cesarini, D.; Benjamin, D.J.; Magnusson, P.K.; Ullen, F.; Tiemeier, H.; Hofman, A.; Rooij, F.J. van; Walters, G.B.; Sigurdsson, E.; Thorgeirsson, T.E.; Ingason, A.; Helgason, A.; Kong, A.; Kiemeney, B.; Koellinger, P.; Boomsma, D.I.; Gudbjartsson, D.; Stefansson, H.; Stefansson, K.

    2015-01-01

    We tested whether polygenic risk scores for schizophrenia and bipolar disorder would predict creativity. Higher scores were associated with artistic society membership or creative profession in both Icelandic (P = 5.2 x 10(-6) and 3.8 x 10(-6) for schizophrenia and bipolar disorder scores, respectiv

  6. The Test Score Decline: A Review and Annotated Bibliography

    Science.gov (United States)

    1981-08-01

    J.R., The Test Score Decline: Are the Public Schools the Scapegoat? Part Two. Kapfer, P., Kapfer, M., & Woodruff, A., Declining Test Scores... Michigan State University, August 1976. 129. Kapfer, P.F., Kapfer, M.B., & Woodruff, A.D., Declining test scores: Interpretations, issues, and relationship

  7. Automatic Dialogue Scoring for a Second Language Learning System

    Science.gov (United States)

    Huang, Jin-Xia; Lee, Kyung-Soon; Kwon, Oh-Woog; Kim, Young-Kil

    2016-01-01

    This paper presents an automatic dialogue scoring approach for a Dialogue-Based Computer-Assisted Language Learning (DB-CALL) system, which helps users learn language via interactive conversations. The system produces overall feedback according to dialogue scoring to help the learner know which parts should be more focused on. The scoring measures…

  8. Development of an automated scoring system for plant comet assay

    Directory of Open Access Journals (Sweden)

    Bertrand Pourrut

    2015-05-01

    - nucleus density: increasing the density of nuclei is important to increase scoring reliability (Sharma et al., 2012). In conclusion, increasing plant nucleus extraction yield and automating the scoring of nuclei represent big challenges. However, our promising preliminary results open up the prospect of automated high-throughput scoring of plant nuclei.

  9. The Simplified Predictive Intubation Difficulty Score: a new weighted score for difficult airway assessment.

    Science.gov (United States)

    L'Hermite, Joël; Nouvellon, Emmanuel; Cuvillon, Philippe; Fabbro-Peray, Pascale; Langeron, Olivier; Ripart, Jacques

    2009-12-01

    Using an Intubation Difficulty Scale (IDS) score of more than 5 as a standardized definition of difficult intubation, we propose a new score to predict difficult intubation: the Simplified Predictive Intubation Difficulty Score (SPIDS). We prospectively studied 1024 patients scheduled for elective surgery under general anaesthesia. Using bivariate and multivariable analysis, we established risk factors for difficult intubation. Then, we assigned point values to each of the adjusted risk factors, their sum composing the SPIDS. We assessed its predictive accuracy using sensitivity, specificity, positive (PPV) and negative predictive values (NPV), and the area under the receiver operating characteristic (ROC) curve (AUC), and compared it with the corresponding non-weighted score. The optimal predictive level of the SPIDS was determined using ROC curve analysis. We found five adjusted risk factors for an IDS of more than 5: pathological conditions associated with difficult intubation (malformation of the face, acromegaly, cervical rheumatism, tumours of the airway, and diabetes mellitus), mouth opening less than 3.5 cm, a ratio of the patient's height to thyromental distance of at least 25, head and neck movement of less than 80 degrees, and a Mallampati class of at least 2. Sensitivity, specificity, PPV and NPV of the SPIDS were 65, 76, 14 and 97%, respectively. The AUCs of the SPIDS and the non-weighted score (obtained previously using stepwise logistic regression) were respectively 0.78 [95% confidence interval (CI) 0.72-0.84] and 0.69 (95% CI 0.64-0.73). The threshold for an optimal predictive level of the SPIDS was above 10 of 55. The SPIDS seems easy to perform and, by weighting risk factors for difficult intubation, could help anaesthesiologists plan a difficult airway management strategy. A SPIDS value strictly above 10 could encourage anaesthesiologists to begin anaesthetic induction with 'alternative' airway devices ready in the operating theatre.

  10. Trainee Occupational Therapists Scoring the Barthel ADL.

    Science.gov (United States)

    Martin, Elizabeth; Nugent, Chris; Bond, Raymond; Martin, Suzanne

    2015-09-01

    Within medical applications there are two main types of information design: paper-based and digital information [1]. As technology is constantly changing, information within healthcare management and delivery is continually being transitioned from traditional paper documents to digital and online resources. Activity of Daily Living (ADL) charts are still predominantly paper based and are therefore prone to "human error" [2]. In light of this, an investigation took place into design for reducing the amount of human error, between a paper-based ADL, specifically the Barthel Index, and the same ADL created digitally. The digital ADL was developed as an online platform as this offers the best method of data capture for a large group of participants all together [3]. The aim of the study was to evaluate the usability of the Barthel Index ADL in paper format and then reproduce the same ADL digitally. This paper presents the findings of a study involving 26 participants who were familiar with ADL charts and used three scenarios requiring them to complete both a paper ADL and a digital ADL. An evaluation was undertaken to ascertain whether there were any 'human errors' in completing the paper ADL and also to find similarities/differences through using the digital ADL. The results indicated that 22/26 participants agreed that the digital ADL was better than, if not the same as, a paper-based ADL. Further results indicated that participants rated highly the added benefit of the digital ADL being easy to use and that assessment scores were calculated automatically. Statistically, the digital BI offered a 100% correction rate in the total calculation, in comparison to the paper-based BI, where it is more common for users to make mathematical calculation errors. Therefore, in order to minimise handwriting and calculation errors, the digital BI proved superior to the traditional paper-based method.

  11. Gambling score in earthquake prediction analysis

    Science.gov (United States)

    Molchan, G.; Romashkova, L.

    2011-03-01

    The number of successes and the space-time alarm rate are commonly used to characterize the strength of an earthquake prediction method and the significance of prediction results. It has been recently suggested to use a new characteristic to evaluate the forecaster's skill, the gambling score (GS), which incorporates the difficulty of guessing each target event by using different weights for different alarms. We expand parametrization of the GS and use the M8 prediction algorithm to illustrate difficulties of the new approach in the analysis of the prediction significance. We show that the level of significance strongly depends (1) on the choice of alarm weights, (2) on the partitioning of the entire alarm volume into component parts and (3) on the accuracy of the spatial rate measure of target events. These tools are at the disposal of the researcher and can affect the significance estimate. Formally, all reasonable GSs discussed here corroborate that the M8 method is non-trivial in the prediction of 8.0 ≤M < 8.5 events because the point estimates of the significance are in the range 0.5-5 per cent. However, the conservative estimate 3.7 per cent based on the number of successes seems preferable owing to two circumstances: (1) it is based on relative values of the spatial rate and hence is more stable and (2) the statistic of successes enables us to construct analytically an upper estimate of the significance taking into account the uncertainty of the spatial rate measure.

  12. Proposal of a Mediterranean Diet Serving Score.

    Directory of Open Access Journals (Sweden)

    Celia Monteagudo

    Full Text Available Numerous studies have demonstrated a relationship between Mediterranean Diet (MD) adherence and the prevention of cardiovascular diseases, cancer, diabetes, and other conditions. The study aim was to validate a novel instrument to measure MD adherence based on the consumption of food servings and food groups, to apply it in a female population from southern Spain, and to determine influential factors. The study included 1,155 women aged 12-83 yrs, classified as adolescents, adults, and over-60-yr-olds. All completed a validated semi-quantitative food frequency questionnaire (FFQ). The Mediterranean Dietary Serving Score (MDSS) is based on the latest update of the Mediterranean Diet Pyramid, using the recommended consumption frequency of foods and food groups; the MDSS ranges from 0 to 24. The discriminative power, or correct subject classification capacity, of the MDSS was analyzed with the Receiver Operating Characteristic (ROC) curve, using the MDS as reference method. Predictive factors for higher MDSS adherence were determined with a logistic regression model, adjusting for age. According to ROC curve analysis, the MDSS evidenced a significant discriminative capacity between adherents and non-adherents to the MD pattern (optimal cutoff point = 13.50; sensitivity = 74%; specificity = 48%). The mean MDSS was 12.45 (2.69) and was significantly higher with older age (p < 0.001). Logistic regression analysis showed the highest MD adherence in over-60-year-olds with low BMI and no habit of eating between meals. The MDSS is an updated, easy, valid, and accurate instrument to assess MD adherence based on the consumption of foods and food groups per meal, day, and week. It may be useful in future nutritional education programs to prevent the early onset of chronic non-transmittable diseases in younger populations.
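An "optimal cutoff point" such as the 13.50 reported above is typically chosen by maximizing the Youden index (sensitivity + specificity - 1) along the ROC curve; a sketch with made-up scores and labels, not the study's FFQ data:

```python
# Sketch of Youden-index cutoff selection on an ROC curve.
# scores: instrument values (e.g., MDSS-like); labels: 1 = adherent, 0 = not.
def youden_cutoff(scores, labels):
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0  # Youden index at this threshold
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j
```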

  13. The t-core of an s-core

    OpenAIRE

    Fayers, Matthew

    2010-01-01

    We consider the $t$-core of an $s$-core partition, when $s$ and $t$ are coprime positive integers. Olsson has shown that the $t$-core of an $s$-core is again an $s$-core, and we examine certain actions of the affine symmetric group on $s$-cores which preserve the $t$-core of an $s$-core. Along the way, we give a new proof of Olsson's result. We also give a new proof of a result of Vandehey, showing that there is a simultaneous $s$- and $t$-core which contains all others.

  15. Validity of the J-CTO Score and the CL-Score for predicting successful CTO recanalization.

    Science.gov (United States)

    Guelker, J E; Bansemir, L; Ott, R; Rock, T; Kroeger, K; Guelker, R; Klues, H G; Shin, D I; Bufe, A

    2017-03-01

    Percutaneous coronary intervention (PCI) of chronic total coronary occlusion (CTO) still remains a major challenge in interventional cardiology. Different scoring systems are available to predict the probability of a successful intervention. In this study we analyzed the validity of two scoring systems, the Japanese CTO score (J-CTO score) and the newly developed Clinical and Lesion-related score (CL score). Between 2012 and 2015 we included 379 consecutive patients, all of whom underwent PCI for at least one CTO. Antegrade and retrograde CTO techniques were applied; the retrograde approach was used only after a failed antegrade intervention. Patients undergoing CTO PCI were mainly men (84%). The overall procedural success rate was 84% (±0.4). The mean J-CTO score was 2.9 (±1.3) and the mean CL score was 4.3 (±1.7). The CL score predicted the interventional results more precisely than the J-CTO score. Our study suggests that the previously presented CL score is superior to the J-CTO score in identifying CTO lesions with a likelihood of successful recanalization. Generally, it appears to be a helpful tool for selecting patients and identifying the appropriate operator.

  16. Dual-energy X-ray absorptiometry diagnostic discordance between Z-scores and T-scores in young adults.

    LENUS (Irish Health Repository)

    Carey, John J

    2009-01-01

    Diagnostic criteria for postmenopausal osteoporosis using central dual-energy X-ray absorptiometry (DXA) T-scores have been widely accepted. The validity of these criteria for other populations, including premenopausal women and young men, has not been established. The International Society for Clinical Densitometry (ISCD) recommends using DXA Z-scores, not T-scores, for diagnosis in premenopausal women and men aged 20-49 yr, though studies supporting this position have not been published. We examined diagnostic agreement between DXA-generated T-scores and Z-scores in a cohort of men and women aged 20-49 yr, using 1994 World Health Organization and 2005 ISCD DXA criteria. Four thousand two hundred and seventy-five unique subjects were available for analysis. The agreement between DXA T-scores and Z-scores was moderate (Cohen's kappa: 0.53-0.75). The use of Z-scores resulted in significantly fewer (McNemar's p < 0.001) subjects diagnosed with "osteopenia," "low bone mass for age," or "osteoporosis." Thirty-nine percent of Hologic (Hologic, Inc., Bedford, MA) subjects and 30% of Lunar (GE Lunar, GE Madison, WI) subjects diagnosed with "osteoporosis" by T-score were reclassified as either "normal" or "osteopenia" when their Z-score was used. Substitution of DXA Z-scores for T-scores results in significant diagnostic disagreement and significantly fewer persons being diagnosed with low bone mineral density.

  17. A Study of the Predictability of Praxis I Examination Scores from ACT Scores and Teacher Education Program Prerequisite Courses

    Science.gov (United States)

    Henderson, Allen R.

    2013-01-01

    This study investigated the relationship between student enrollment in certain college courses and Praxis I scores. Specifically, the study examined the predictive nature of the relationships between students' grades in college algebra, their freshman English course of choice, their ACT scores, and their Praxis I scores. The subjects consisted of…

  18. Examining the reliability of ADAS-Cog change scores.

    Science.gov (United States)

    Grochowalski, Joseph H; Liu, Ying; Siedlecki, Karen L

    2016-09-01

    The purpose of this study was to estimate and examine ways to improve the reliability of change scores on the Alzheimer's Disease Assessment Scale, Cognitive Subtest (ADAS-Cog). The sample, provided by the Alzheimer's Disease Neuroimaging Initiative, included individuals with Alzheimer's disease (AD) (n = 153) and individuals with mild cognitive impairment (MCI) (n = 352). All participants were administered the ADAS-Cog at baseline and 1 year, and change scores were calculated as the difference in scores over the 1-year period. Three types of change score reliabilities were estimated using multivariate generalizability. Two methods to increase change score reliability were evaluated: reweighting the subtests of the scale and adding more subtests. Reliability of ADAS-Cog change scores over 1 year was low for both the AD sample (ranging from .53 to .64) and the MCI sample (.39 to .61). Reweighting the change scores from the AD sample improved reliability (.68 to .76), but lengthening provided no useful improvement for either sample. The MCI change scores had low reliability, even with reweighting and adding additional subtests. The ADAS-Cog scores had low reliability for measuring change. Researchers using the ADAS-Cog should estimate and report reliability for their use of the change scores. The ADAS-Cog change scores are not recommended for assessment of meaningful clinical change.
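The low change-score reliabilities reported here are what classical test theory predicts: the reliability of a difference score drops as the two measurements become more highly correlated. A sketch of the standard formula, with made-up values rather than the ADNI estimates:

```python
# Classical-test-theory reliability of a difference score D = Y - X,
# from the reliabilities of X and Y, their correlation, and their SDs.
def diff_score_reliability(rxx, ryy, rxy, sx=1.0, sy=1.0):
    num = rxx * sx**2 + ryy * sy**2 - 2 * rxy * sx * sy
    den = sx**2 + sy**2 - 2 * rxy * sx * sy
    return num / den
```

Even two highly reliable measurements (e.g., 0.90 each) correlated at 0.70 yield a change score with reliability of only about 0.67, which is in the range the abstract reports.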

  19. Scoring dynamics across professional team sports: tempo, balance and predictability

    CERN Document Server

    Merritt, Sears

    2013-01-01

    Despite growing interest in quantifying and modeling the scoring dynamics within professional sports games, relatively little is known about what patterns or principles, if any, cut across different sports. Using a comprehensive data set of scoring events in nearly a dozen consecutive seasons of college and professional (American) football, professional hockey, and professional basketball, we identify several common patterns in scoring dynamics. Across these sports, scoring tempo---when scoring events occur---closely follows a common Poisson process, with a sport-specific rate. Similarly, scoring balance---how often a team wins an event---follows a common Bernoulli process, with a parameter that effectively varies with the size of the lead. Combining these processes within a generative model of gameplay, we find they both reproduce the observed dynamics in all four sports and accurately predict game outcomes. These results demonstrate common dynamical patterns underlying within-game scoring dynamics across prof...
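The generative model described above can be sketched as a Poisson tempo (exponential inter-event times) combined with a Bernoulli event winner; the rate and win probability below are illustrative, not the fitted sport-specific parameters:

```python
import random

# Toy generative game: events arrive as a Poisson process with the
# given rate, and each event is won by team A with fixed probability.
def simulate_game(rate_per_min, game_minutes, p_team_a, rng):
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_per_min)  # exponential inter-event gap
        if t > game_minutes:
            break
        events.append(("A" if rng.random() < p_team_a else "B", t))
    return events

rng = random.Random(7)
evts = simulate_game(rate_per_min=0.2, game_minutes=60, p_team_a=0.5, rng=rng)
```

Fitting the rate and letting the Bernoulli parameter vary with the current lead, as the paper does, turns this toy into a predictive model of game outcomes.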

  20. QUASAR--scoring and ranking of sequence-structure alignments.

    Science.gov (United States)

    Birzele, Fabian; Gewehr, Jan E; Zimmer, Ralf

    2005-12-15

    Sequence-structure alignments are a common means for protein structure prediction in the fields of fold recognition and homology modeling, and there is a broad variety of programs that provide such alignments based on sequence similarity, secondary structure or contact potentials. Nevertheless, finding the best sequence-structure alignment in a pool of alignments remains a difficult problem. QUASAR (quality of sequence-structure alignments ranking) provides a unifying framework for scoring sequence-structure alignments that aids finding well-performing combinations of well-known and custom-made scoring schemes. Those scoring functions can be benchmarked against widely accepted quality scores like MaxSub, TMScore, Touch and APDB, thus enabling users to test their own alignment scores against 'standard-of-truth' structure-based scores. Furthermore, individual score combinations can be optimized with respect to benchmark sets based on known structural relationships using QUASAR's in-built optimization routines.

  1. Efficacy of catheter ablation of atrial fibrillation beyond HATCH score

    Institute of Scientific and Technical Information of China (English)

    TANG Ri-bo; DONG Jian-zeng; LONG De-yong; YU Rong-hui; NING Man; JIANG Chen-xi; SANG Cai-hua; LIU Xiao-hui; MA Chang-sheng

    2012-01-01

    Background: The HATCH score is an established predictor of progression from paroxysmal to persistent atrial fibrillation (AF). The purpose of this study was to determine whether the HATCH score could predict recurrence after catheter ablation of AF. Methods: The data of 488 consecutive paroxysmal AF patients who underwent an index circumferential pulmonary vein (PV) ablation were retrospectively analyzed. Of these patients, 250 (51.2%) had a HATCH score of 0, 185 (37.9%) had a HATCH score of 1, and 53 (10.9%) had a HATCH score ≥2 (28 patients had a HATCH score of 2, 23 a score of 3, and 2 a score of 4). Results: The patients with a HATCH score ≥2 had a significantly larger left atrium, the largest left ventricular end-systolic diameter, and the lowest ejection fraction. After a mean follow-up of 823±532 days, the recurrence rates were 36.4%, 37.8% and 28.3% for the HATCH score 0, HATCH score 1 and HATCH score ≥2 categories, respectively (P=0.498). Univariate analysis revealed that left atrium size, body mass index, and failure of PV isolation were predictors of AF recurrence. After adjustment for body mass index, left atrial size and PV isolation, the HATCH score was not an independent predictor of recurrence (HR=0.92, 95% confidence interval 0.76-1.12, P=0.406) in multivariate analysis. Conclusion: The HATCH score has no value in the prediction of AF recurrence after catheter ablation.
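For reference, the HATCH score is commonly tallied as Hypertension (1 point), Age > 75 years (1), TIA or stroke (2), COPD (1) and Heart failure (2); the sketch below assumes that standard weighting, which should be checked against the original definition before any clinical use.

```python
# Sketch of the HATCH tally under its commonly cited component weights
# (maximum score 7); all weights here are assumptions for illustration.
def hatch_score(hypertension, age_over_75, tia_or_stroke, copd, heart_failure):
    return (1 * hypertension + 1 * age_over_75 + 2 * tia_or_stroke
            + 1 * copd + 2 * heart_failure)
```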

  2. Association between value-based purchasing score and hospital characteristics

    Directory of Open Access Journals (Sweden)

    Borah Bijan J

    2012-12-01

    Full Text Available Abstract Background: The Medicare hospital value-based purchasing (VBP) program that links Medicare payments to quality of care will become effective from 2013. It is unclear whether specific hospital characteristics are associated with a hospital's VBP score, and consequently incentive payments. The objective of the study was to assess the association of hospital characteristics with (i) the mean VBP score, and (ii) specific percentiles of the VBP score distribution. The secondary objective was to quantify the associations of hospital characteristics with the VBP score components: the clinical process of care (CPC) score and the patient satisfaction score. Methods: Observational analysis that used data from three sources: the Medicare Hospital Compare Database, the American Hospital Association 2010 Annual Survey and the Medicare Impact File. The final study sample included 2,491 U.S. acute care hospitals eligible for the VBP program. The associations of hospital characteristics with the mean VBP score and specific VBP score percentiles were assessed by ordinary least squares (OLS) regression and quantile regression (QR), respectively. Results: The VBP score had substantial variation, with mean scores of 30 and 60 in the first and fourth quartiles of the VBP score distribution. For-profit status (vs. non-profit), smaller bed size (vs. 100-199 beds), East South Central region (vs. New England region) and the report of specific CPC measures (discharge instructions, timely provision of antibiotics and beta blockers, and serum glucose control in cardiac surgery patients) were positively associated with mean VBP scores. Conclusions: Although hospitals serving the poor and the elderly are more likely to score lower under the VBP program, the correlation appears small. Profit status, geographic region, and the number and type of CPC measures reported explain the most variation among scores.

  3. Scoring systems for predicting mortality after liver transplantation.

    Directory of Open Access Journals (Sweden)

    Heng-Chih Pan

    Full Text Available BACKGROUND: Liver transplantation can prolong survival in patients with end-stage liver disease. We have proposed that the Sequential Organ Failure Assessment (SOFA) score calculated on post-transplant day 7 has great discriminative power for predicting 1-year mortality after liver transplantation. The Chronic Liver Failure-Sequential Organ Failure Assessment (CLIF-SOFA) score, a modified SOFA score, is a newly developed scoring system exclusively for patients with end-stage liver disease. This study was designed to compare the CLIF-SOFA score with other main scoring systems in outcome prediction for liver transplant patients. METHODS: We retrospectively reviewed the medical records of 323 patients who had received liver transplants in a tertiary care university hospital from October 2002 to December 2010. Demographic parameters and clinical characteristic variables were recorded on the first day of admission before transplantation and on post-transplantation days 1, 3, 7, and 14. RESULTS: The overall 1-year survival rate was 78.3% (253/323). Liver diseases were mostly attributed to hepatitis B virus infection (34%). The CLIF-SOFA score had better discriminatory power than the Child-Pugh points, the Model for End-Stage Liver Disease (MELD) score, the RIFLE (risk of renal dysfunction, injury to the kidney, failure of the kidney, loss of kidney function, and end-stage kidney disease) criteria, and the SOFA score. The AUROC was highest for the CLIF-SOFA score on post-liver-transplant day 7 for predicting 1-year mortality. The cumulative survival rates differed significantly between patients with a CLIF-SOFA score ≤8 and those with a CLIF-SOFA score >8 on post-liver-transplant day 7. CONCLUSION: The CLIF-SOFA score can increase the prediction accuracy of prognosis after transplantation. Moreover, the CLIF-SOFA score on post-transplantation day 7 had the best discriminative power for predicting 1-year mortality after liver transplantation.
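The AUROC comparisons above rest on the rank interpretation of the area under the ROC curve: the probability that a randomly chosen non-survivor scored higher than a randomly chosen survivor. A sketch with made-up scores, not the study's data:

```python
# Rank-sum (Mann-Whitney) form of the AUROC: fraction of
# (positive, negative) pairs the score orders correctly, ties half.
def auroc(scores_pos, scores_neg):
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUROC of 0.5 is chance-level ordering; 1.0 is perfect separation of the two outcome groups.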

  4. Which score should be used for posttraumatic multiple organ failure? - Comparison of the MODS, Denver- and SOFA- Scores.

    Science.gov (United States)

    Fröhlich, Matthias; Wafaisade, Arasch; Mansuri, Anastasios; Koenen, Paola; Probst, Christian; Maegele, Marc; Bouillon, Bertil; Sakka, Samir G

    2016-11-03

    Multiple organ dysfunction and multiple organ failure (MOF) are still a major complication and challenge in the treatment of severely injured patients. The reported incidence varies decisively across current studies, which complicates comparability regarding risk factors, treatment recommendations and patients' outcome. Therefore, we analysed how the currently used scoring systems, the MODS, Denver and SOFA scores, influence the definition, and compared the scores' predictive ability. From datasets of severely injured patients (ISS ≥ 16, age ≥ 16) who stayed more than 48 h in the ICU, each of the scores was calculated. The scores' predictive ability on day three after trauma for resource-requiring measures and patient-specific outcomes was compared using receiver operating characteristics. One hundred seventy-six patients with a mean ISS of 28 ± 13 could be included. The MODS and SOFA score defined the incidence of MOF consistently (46.5% vs. 52.3%), while the Denver score defined MOF in 22.2%. The MODS outperformed the Denver and SOFA scores in predicting mortality (area under the curve/AUC: 0.83 vs. 0.67 vs. 0.72), but was inferior in predicting the length of stay (AUC 0.71 vs. 0.80 vs. 0.82) and a prolonged time on mechanical ventilation (AUC 0.75 vs. 0.81 vs. 0.84). The MODS and SOFA score were comparably sensitive and the Denver score more specific in all analyses. All three scores have a comparable ability to predict the outcome in trauma patients, including patients with severe traumatic brain injury (TBI). Either score could be favored depending on whether a higher sensitivity or specificity is targeted. The SOFA score showed the most balanced relation of sensitivity and specificity. The incidence of posttraumatic MOF depends decisively on the score applied; therefore, harmonizing the competing scores and definitions is desirable.

  5. New reliable scoring system, Toyama mouse score, to evaluate locomotor function following spinal cord injury in mice.

    Science.gov (United States)

    Shigyo, Michiko; Tanabe, Norio; Kuboyama, Tomoharu; Choi, Song-Hyen; Tohda, Chihiro

    2014-06-03

    Among the variety of methods used to evaluate locomotor function following a spinal cord injury (SCI), the Basso Mouse Scale (BMS) score has been widely used for mice. However, the BMS mainly focuses on hindlimb movement rather than on graded changes in body support ability. In addition, some of its scoring levels combine double or triple criteria within a single score, which likely increases the variance within the data. We therefore aimed to establish a new scoring method that is reliable and easy to perform in mice with SCI. Our Toyama Mouse Score (TMS) was established by rearranging and simplifying the BMS score and combining it with the Body Support Scale (BSS) score. The TMS reflects changes in both body support ability and hindlimb movement. In the BMS, a single score can be defined by combining multiple criteria; this ambiguity is removed in the TMS. Using contusive SCI mice, hindlimb function was measured with the TMS, BMS and BSS systems. The TMS could distinguish changes in hindlimb movements that were evaluated as the same score by the BMS. An analysis of the coefficient of variation (CV) of score points recorded over 11 days revealed that the CV for the TMS was significantly lower than the CV obtained using the BMS. Intra-evaluator variation was also lower for the TMS than for the BMS. These results suggest that the TMS may be useful as a new, reliable method for scoring locomotor function in SCI models.

  6. My max score AP statistics maximize your score in less time

    CERN Document Server

    Ross, Phd, Amanda

    2013-01-01

    The only study guide to offer expert, customized study plans for every student's needs. You've had a year to study...but also a year to forget. As the AP test approaches, other guides reexamine the entire year of material. But only one guide identifies your strengths and weaknesses, then points you directly to the review you need most. My Max Score, a new concept developed by AP teachers and exam graders, offers separate review materials for long-term prep and last-minute cram sessions - no matter when you start studying. This is just what you need - plus str

  7. Prediction of IOI-HA Scores Using Speech Reception Thresholds and Speech Discrimination Scores in Quiet

    DEFF Research Database (Denmark)

    Brännström, K Jonas; Lantz, Johannes; Nielsen, Lars Holme

    2014-01-01

    BACKGROUND: Outcome measures can be used to improve the quality of rehabilitation by identifying and understanding which variables influence the outcome. This information can be used to improve outcomes for clients. In clinical practice, pure-tone audiometry, speech reception thresholds (SRTs), and speech discrimination scores (SDSs) in quiet or in noise are common assessments made prior to hearing aid (HA) fittings. It is not known whether SRT and SDS in quiet relate to HA outcome measured with the International Outcome Inventory for Hearing Aids (IOI-HA). PURPOSE: The aim of the present study

  8. Stability of cooperation under image scoring in group interactions.

    Science.gov (United States)

    Nax, Heinrich H; Perc, Matjaž; Szolnoki, Attila; Helbing, Dirk

    2015-07-15

    Image scoring sustains cooperation in the repeated two-player prisoner's dilemma through indirect reciprocity, even though defection is the uniquely dominant selfish behaviour in the one-shot game. Many real-world dilemma situations, however, firstly, take place in groups and, secondly, lack the necessary transparency to inform subjects reliably of others' individual past actions. Instead, there is revelation of information regarding groups, which allows for 'group scoring' but not for image scoring. Here, we study how sensitive the positive results related to image scoring are to information based on group scoring. We combine analytic results and computer simulations to specify the conditions for the emergence of cooperation. We show that under pure group scoring, that is, under the complete absence of image-scoring information, cooperation is unsustainable. Away from this extreme case, however, the necessary degree of image scoring relative to group scoring depends on the population size and is generally very small. We thus conclude that the positive results based on image scoring apply to a much broader range of informational settings that are relevant in the real world than previously assumed.
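A toy donation-game version of image scoring illustrates the mechanism the abstract builds on: discriminators help only recipients whose image is non-negative, helping raises the donor's image, and refusing lowers it. The payoff values, population mix, and first-order updating rule below are illustrative assumptions, not the paper's analytic model.

```python
import random

# Minimal image-scoring donation game: "D" agents always defect,
# "I" agents (discriminators) help recipients with image >= 0.
def run_rounds(n_agents, n_rounds, frac_defectors, rng):
    strategies = ["D" if i < int(frac_defectors * n_agents) else "I"
                  for i in range(n_agents)]
    image = [0] * n_agents      # public reputation, starts neutral
    payoff = [0.0] * n_agents
    benefit, cost = 2.0, 1.0    # donation benefit and cost, b > c
    for _ in range(n_rounds):
        donor, recipient = rng.sample(range(n_agents), 2)
        helps = strategies[donor] == "I" and image[recipient] >= 0
        if helps:
            payoff[donor] -= cost
            payoff[recipient] += benefit
            image[donor] += 1   # helping is observed and rewarded
        else:
            image[donor] -= 1   # refusing (even justified) lowers image
    return strategies, image, payoff

rng = random.Random(1)
strategies, image, payoff = run_rounds(20, 2000, 0.3, rng)
```

Under group scoring, the `image[recipient]` lookup would be replaced by an average over the recipient's group, which is exactly the information degradation the paper analyzes.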

  9. A clinical prediction score for upper extremity deep venous thrombosis.

    Science.gov (United States)

    Constans, Joel; Salmi, Louis-Rachid; Sevestre-Pietri, Marie-Antoinette; Perusat, Sophie; Nguon, Monika; Degeilh, Maryse; Labarere, Jose; Gattolliat, Olivier; Boulon, Carine; Laroche, Jean-Pierre; Le Roux, Philippe; Pichot, Olivier; Quéré, Isabelle; Conri, Claude; Bosson, Jean-Luc

    2008-01-01

    The objective of this study was to design a clinical prediction score for the diagnosis of upper extremity deep venous thrombosis (UEDVT). A score was built by multivariate logistic regression in a sample of patients hospitalized for suspicion of UEDVT (derivation sample). It was validated in a second sample in the same university hospital, then in a sample from the multicenter OPTIMEV study that included both outpatients and inpatients. In all three samples, UEDVT diagnosis was objectively confirmed by ultrasound. The derivation sample included 140 patients, among whom 50 had confirmed UEDVT; the validation sample included 103 patients, among whom 46 had UEDVT; and the OPTIMEV sample included 214 patients, among whom 65 had UEDVT. The clinical score identified a combination of four items: venous material, localized pain, unilateral pitting edema, and another diagnosis at least as plausible. One point was attributed to each of the first three items, and one point subtracted for a plausible alternative diagnosis. A score of -1 or 0 characterized low-probability patients, a score of 1 identified intermediate-probability patients, and a score of 2 or 3 identified high-probability patients. The low-probability score corresponded to a prevalence of UEDVT of 12%, 9% and 13%, respectively, in the derivation, validation and OPTIMEV samples; the high-probability score corresponded to a prevalence of 70%, 64% and 69%, respectively. In conclusion, we propose a simple score to calculate the clinical probability of UEDVT. This score might be a useful test in clinical trials as well as in clinical practice.
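The score as described reduces to a few lines of code; the item names follow the abstract, and the probability bands mirror its -1/0, 1, and 2/3 groupings:

```python
# Sketch of the four-item UEDVT clinical probability score described
# above: +1 each for venous material, localized pain, and unilateral
# pitting edema; -1 if another diagnosis is at least as plausible.
def uedvt_score(venous_material, localized_pain, pitting_edema, other_dx_plausible):
    score = int(venous_material) + int(localized_pain) + int(pitting_edema)
    score -= int(other_dx_plausible)
    return score

def probability_band(score):
    if score <= 0:
        return "low"           # score -1 or 0
    if score == 1:
        return "intermediate"  # score 1
    return "high"              # score 2 or 3
```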

  10. Credit scores, cardiovascular disease risk, and human capital.

    Science.gov (United States)

    Israel, Salomon; Caspi, Avshalom; Belsky, Daniel W; Harrington, HonaLee; Hogan, Sean; Houts, Renate; Ramrakha, Sandhya; Sanders, Seth; Poulton, Richie; Moffitt, Terrie E

    2014-12-02

    Credit scores are the most widely used instruments to assess whether or not a person is a financial risk. Credit scoring has been so successful that it has expanded beyond lending and into our everyday lives, even to inform how insurers evaluate our health. The pervasive application of credit scoring has outpaced knowledge about why credit scores are such useful indicators of individual behavior. Here we test if the same factors that lead to poor credit scores also lead to poor health. Following the Dunedin (New Zealand) Longitudinal Study cohort of 1,037 study members, we examined the association between credit scores and cardiovascular disease risk and the underlying factors that account for this association. We find that credit scores are negatively correlated with cardiovascular disease risk. Variation in household income was not sufficient to account for this association. Rather, individual differences in human capital factors—educational attainment, cognitive ability, and self-control—predicted both credit scores and cardiovascular disease risk and accounted for ∼45% of the correlation between credit scores and cardiovascular disease risk. Tracing human capital factors back to their childhood antecedents revealed that the characteristic attitudes, behaviors, and competencies children develop in their first decade of life account for a significant portion (∼22%) of the link between credit scores and cardiovascular disease risk at midlife. We discuss the implications of these findings for policy debates about data privacy, financial literacy, and early childhood interventions.

  12. Relationship between framingham risk score and coronary artery calcium score in asymptomatic Korean individuals

    Energy Technology Data Exchange (ETDEWEB)

    Heo, So Young; Park, Noh Hyuck; Park, Chan Sub; Seong, Su Ok [Dept. of Radiology, Myongji Hospital, Seonam University College of Medicine, Goyang (Korea, Republic of)

    2016-02-15

    We explored the association between the Framingham risk score (FRS) and the coronary artery calcium score (CACS) in asymptomatic Korean individuals. We retrospectively analyzed 2216 participants who underwent routine health screening and CACS using 64-slice multidetector computed tomography between January 2010 and June 2014. The relationship between CACS and FRS, and factors associated with discrepancy between them, were analyzed. CACS and FRS were positively correlated (p < 0.0001). However, in the 3.7% of participants with low coronary event risk and high CACS, age, male gender, smoking, hypertension, total cholesterol, diabetes mellitus, and body mass index (BMI ≥ 35) were associated with the discrepancy. In the diagnostic prediction model for discrepancy, the receiver operating characteristic curve including factors associated with FRS, diastolic blood pressure (≥ 75 mm Hg), diabetes mellitus, and BMI (≥ 35) showed an area under the curve of 0.854 (95% confidence interval, 0.819-0.890), indicating good sensitivity. Diabetes mellitus or obesity (BMI ≥ 35) compensate for the weakness of the FRS and may be potential indicators for the application of CACS in asymptomatic Koreans with low coronary event risk.

  13. Optical Music Recognition for Scores Written in White Mensural Notation

    Directory of Open Access Journals (Sweden)

    Antonio Oliver

    2009-01-01

    Full Text Available An Optical Music Recognition (OMR) system especially adapted for handwritten musical scores of the XVII-th and early XVIII-th centuries written in white mensural notation is presented. The system performs a complete sequence of analysis stages: the input is the RGB image of the score to be analyzed and, after a preprocessing step that returns a black-and-white image with corrected rotation, the staves are processed to return a score without staff lines; then, a music symbol processing stage isolates the music symbols contained in the score and, finally, the classification process starts to obtain the transcription in a suitable electronic format so that it can be stored or played. This work will help to preserve our cultural heritage by keeping the musical information of the scores in a digital format that also gives the possibility to perform and distribute the original music contained in those scores.

  14. Improving personality facet scores with multidimensional computer adaptive testing

    DEFF Research Database (Denmark)

    Makransky, Guido; Mortensen, Erik Lykke; Glas, Cees A W

    2013-01-01

    Narrowly defined personality facet scores are commonly reported and used for making decisions in clinical and organizational settings. Although these facets are typically related, scoring is usually carried out for a single facet at a time. This method can be ineffective and time consuming when personality tests contain many highly correlated facets. This article investigates the possibility of increasing the precision of the NEO PI-R facet scores by scoring items with multidimensional item response theory and by efficiently administering and scoring items with multidimensional computer adaptive testing (MCAT). The increase in the precision of personality facet scores is obtained from exploiting the correlations between the facets. Results indicate that the NEO PI-R could be substantially shorter without attenuating precision when the MCAT methodology is used. Furthermore, the study shows…
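    Adaptive testing of the kind described above selects each next item by Fisher information at the current trait estimate. A minimal sketch under the unidimensional 2PL IRT model (a simplification: the article's method is multidimensional, and the item parameters below are made up):

```python
import math

def p_2pl(theta, a, b):
    """2PL model: probability of endorsing an item with
    discrimination a and difficulty b at trait level theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def pick_item(theta, items):
    """items: list of (a, b) pairs; return the index of the item
    carrying maximum information at the current trait estimate."""
    return max(range(len(items)), key=lambda i: item_information(theta, *items[i]))
```

    Information peaks when difficulty matches the trait estimate, which is why adaptive tests converge quickly with fewer items.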

  15. Automated sleep scoring and sleep apnea detection in children

    Science.gov (United States)

    Baraglia, David P.; Berryman, Matthew J.; Coussens, Scott W.; Pamula, Yvonne; Kennedy, Declan; Martin, A. James; Abbott, Derek

    2005-12-01

    This paper investigates the automated detection of a patient's breathing rate and heart rate from their skin conductivity, as well as sleep stage scoring and breathing event detection from their EEG. The software developed for these tasks is tested on data sets obtained from the sleep disorders unit at the Adelaide Women's and Children's Hospital. The sleep scoring and breathing event detection tasks used neural networks to achieve signal classification. The Fourier transform and the Higuchi fractal dimension were used to extract features for input to the neural network. The filtered skin conductivity appeared visually to bear a similarity to the breathing and heart rate signals, but a more detailed evaluation showed the relation was not consistent. Sleep stage classification was achieved with an accuracy of around 65%, with some stages being accurately scored and others poorly scored. The two breathing events, hypopnea and apnea, were scored with varying degrees of accuracy, with the highest scores being around 75% and 30%.
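    One of the features named above, the Higuchi fractal dimension, is estimated from curve lengths at multiple scales (a minimal sketch of Higuchi's 1988 estimator; the scale cap `kmax` is a free parameter, not a value from the paper):

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal x (list of numbers):
    the slope of log L(k) versus log (1/k), where L(k) is the mean
    normalised curve length at downsampling scale k."""
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                     # one subseries per offset m
            idx = list(range(m, n, k))
            if len(idx) < 2:
                continue
            dist = sum(abs(x[idx[i]] - x[idx[i - 1]]) for i in range(1, len(idx)))
            # Higuchi normalisation so that L(k) ~ k^(-D)
            lengths.append(dist * (n - 1) / ((len(idx) - 1) * k) / k)
        log_k.append(math.log(1.0 / k))
        log_l.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of log L(k) against log (1/k)
    kbar = sum(log_k) / len(log_k)
    lbar = sum(log_l) / len(log_l)
    num = sum((a - kbar) * (b - lbar) for a, b in zip(log_k, log_l))
    den = sum((a - kbar) ** 2 for a in log_k)
    return num / den
```

    A smooth curve yields a dimension near 1, while white noise approaches 2, so the statistic separates regular from irregular EEG segments.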

  16. Scoring the SF-36 in Orthopaedics: A Brief Guide.

    Science.gov (United States)

    Laucis, Nicholas C; Hays, Ron D; Bhattacharyya, Timothy

    2015-10-07

    The Short Form-36 (SF-36) is the most widely used health-related quality-of-life measure in research to date. There are currently two sources for the SF-36 and its scoring instructions: licensing them from Optum, Inc., or obtaining them from publicly available documentation from the RAND Corporation. The SF-36 yields eight scale scores and two summary scores. The physical component summary (PCS) and mental component summary (MCS) scores were derived using an orthogonal factor-analytic model that forced the PCS and MCS to be uncorrelated, which has been shown to contribute to inflation of the MCS in patients with substantial physical disability. Oblique scoring can reduce this inflation of the MCS in orthopaedic studies. Spreadsheets to score the SF-36, along with a copy of the questionnaire, are provided. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.

  17. SCORING IN ACUTE PANCREATITIS: WHEN IMAGING IS APPROPRIATE?

    Science.gov (United States)

    Cucuteanu, B; Prelipcean, Cristina Cijevschi; Mihai, Cătălina; Dranga, Mihaela; Negru, D

    2016-01-01

    Acute pancreatitis (AP) is a frequent presentation to the emergency departments with a rising incidence and a great variability in clinical severity and outcome. The aim of this review is to offer a succinct presentation on acute pancreatitis scoring systems and the use of different imaging methods in severity prediction: Ranson criteria, Glasgow criteria, Hong Kong Score, Acute Physiology and Chronic Health Evaluation II (APACHE II), computed tomography scoring systems, Bedside Index of Severity in Acute Pancreatitis (BISAP) score, Panc 3, Japanese Severity Score (JSS), Harmless Acute Pancreatitis Score (HAPS), Pancreatitis Outcome Prediction (POP), Sequential Organ Failure Assessment (SOFA). This article also describes the Revised Atlanta Classification of AP (2012) and the correlation with computed tomography.

  19. Validation of criterion-referenced archery cutting scores.

    Science.gov (United States)

    Ishee, J H; Titlow, L W

    1993-04-01

    This study investigated an empirical method for setting optimal cutting scores for a criterion-referenced archery test, using the classification-outcome probabilities and approaches to validity suggested by Berk. Pretest scores were obtained from 35 uninstructed college-age women on six ends (six arrows each) from 20 yards (18.3 m), after an unrecorded warm-up end. Posttest scores were obtained after 15 weeks of instruction. Score distributions were the primary determinant for accurately classifying students as true mastery or true nonmastery; accuracy is a function of the amount of overlap between the distributions. Classification accuracy was estimated using the point at which the distributions overlapped. Probabilities associated with 80 points were p(TM) + p(TN) = .83 and p(FM) + p(FN) = .14; scores above and below 80 points had lower probabilities of classification accuracy. Reliability estimated using Kappa was .59. Statistical validity of the cutting score (phi) was .68.

  20. TRII: A Probabilistic Scoring of Drosophila melanogaster Translation Initiation Sites

    Directory of Open Access Journals (Sweden)

    Rice Michael D

    2010-01-01

    Full Text Available Relative individual information is a measurement that scores the quality of DNA- and RNA-binding sites for biological machines. The development of analytical approaches to increase the power of this scoring method will improve its utility in evaluating the functions of motifs. In this study, the scoring method was applied to potential translation initiation sites in Drosophila to compute Translation Relative Individual Information (TRII) scores. The weight matrix at the core of the scoring method was optimized based on high-confidence translation initiation sites identified using a progressive partitioning approach. Comparing the distributions of TRII scores for sites of interest with those for high-confidence translation initiation sites and random sequences provides a new methodology for assessing the quality of translation initiation sites. The optimized weight matrices can also be used to describe the consensus at translation initiation sites, providing a quantitative measure of preferred and avoided nucleotides at each position.
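    Individual-information scoring of sites can be sketched as follows. This is a simplified Schneider-style Ri computation; the uniform 25% background and the +1 pseudocount are our assumptions for illustration, not details taken from the TRII paper:

```python
import math

BASES = "ACGT"

def weight_matrix(sites):
    """Per-position weights w[l][b] = 2 + log2 f(b, l): positive for
    favoured bases, negative for avoided ones (2 bits = log2 of the
    4-letter uniform background)."""
    length = len(sites[0])
    matrix = []
    for l in range(length):
        counts = {b: 1.0 for b in BASES}          # +1 pseudocount per base
        for s in sites:
            counts[s[l]] += 1.0
        total = sum(counts.values())
        matrix.append({b: 2.0 + math.log2(counts[b] / total) for b in BASES})
    return matrix

def ri_score(matrix, seq):
    """Individual information of one candidate site: the sum of the
    per-position weights for its bases."""
    return sum(matrix[l][b] for l, b in enumerate(seq))
```

    Sites resembling the training alignment score high; random sequences score near or below zero, which is the basis for the distribution comparison described above.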

  1. Existence and uniqueness of solutions of nonlinear two-point boundary value problems for fourth-order differential equations

    Institute of Scientific and Technical Information of China (English)

    高永馨; 谢燕华

    2012-01-01

    By using the method of upper and lower solutions, the existence and uniqueness of solutions of the nonlinear two-point boundary value problem for the fourth-order differential equation y(4) = f(x, y, y′, y″, y‴), with conditions y(b) = b0, y′(b) = b1, y″(b) = h(y″(a)), g(y(a), y(b), y′(a), y′(b), y″(a), y″(b), y‴(a), y‴(b)) = 0, are investigated.

  2. Uniqueness of the C[0,1] positive solution of the two-point boundary value problem for sublinear Emden–Fowler equations

    Institute of Scientific and Technical Information of China (English)

    刘炳; 闫宝强

    2012-01-01

    The two-point boundary value problem for sublinear Emden–Fowler equations has been addressed in many works, but the uniqueness of the C[0,1] positive solution has not been investigated. We employ a monotone iterative method to address this problem and prove that the C[0,1] positive solution of the boundary value problem for such equations exists and is unique.
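    As a minimal numerical illustration of the problem class in the two abstracts above (not of their upper-lower-solution or monotone-iteration proof techniques), a linear second-order two-point boundary value problem can be discretized by central differences and solved with the tridiagonal Thomas algorithm:

```python
def solve_bvp(f, ya, yb, n=50):
    """Solve y'' = f(x) on [0, 1] with y(0) = ya, y(1) = yb by central
    differences: y[i-1] - 2 y[i] + y[i+1] = h^2 f(x[i]) at interior nodes."""
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    a = [1.0] * (n - 1)                      # sub-diagonal
    b = [-2.0] * (n - 1)                     # diagonal
    c = [1.0] * (n - 1)                      # super-diagonal
    d = [h * h * f(x[i]) for i in range(1, n)]
    d[0] -= ya                               # fold boundary values into RHS
    d[-1] -= yb
    for i in range(1, n - 1):                # Thomas forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    y = [0.0] * (n - 1)                      # back substitution
    y[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        y[i] = (d[i] - c[i] * y[i + 1]) / b[i]
    return x, [ya] + y + [yb]
```

    Central differences are exact for quadratic solutions, which makes the scheme easy to check against y = x² for f ≡ 2.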

  3. Calgary score and modified Calgary score in the differential diagnosis between neurally mediated syncope and epilepsy in children.

    Science.gov (United States)

    Zou, Runmei; Wang, Shuo; Zhu, Liping; Wu, Lijia; Lin, Ping; Li, Fang; Xie, Zhenwu; Li, Xiaohong; Wang, Cheng

    2017-01-01

    To evaluate the value of the Calgary score and modified Calgary score in the differential diagnosis between neurally mediated syncope and epilepsy in children, 201 children who had experienced one or more episodes of loss of consciousness and were diagnosed with neurally mediated syncope or epilepsy were enrolled. The Calgary score, modified Calgary score, and receiver operating characteristic curves were used to explore their predictive value in differential diagnosis. There were significant differences in the median Calgary score between syncope [-4.00 (-6, 1)] and epilepsy [2 (-3, 5)] (z = -11.63, p …); the sensitivity and specificity for epilepsy were 91.46 and 95.80%, suggesting a diagnosis of epilepsy. There were significant differences in the median modified Calgary score between syncope [-4.00 (-6, 1)] and epilepsy [3 (-3, 6)] (z = -11.71, p …). The sensitivity and specificity of the modified Calgary score and Calgary score did not differ significantly (p > 0.05). The Calgary score and modified Calgary score can be used in the differential diagnosis between syncope and epilepsy in children.

  4. The Utility of Scoring Systems in Predicting Early and Late Mortality in Alcoholic Hepatitis: Whose Score Is It Anyway?

    Directory of Open Access Journals (Sweden)

    Naaventhan Palaniyappan

    2012-01-01

    Full Text Available Background. Alcoholic hepatitis (AH) is a distinct clinical entity in the spectrum of alcoholic liver disease with a high short-term mortality. Several scoring systems are used to assess the severity of AH, but the ability of these scores to predict long-term survival in these patients is largely unknown. Aims. We aim to assess the utility of five different scoring systems, Child–Pugh (CP), model for end-stage liver disease (MELD), Maddrey's discriminant function (mDF), Glasgow AH score (GAHS), and age-bilirubin-INR-creatinine (ABIC) score, in predicting short-term and long-term survival in patients with AH. Methods. Patients with histological evidence of AH were identified from our database. The clinical and biochemical parameters were used to calculate the 5 different scores. The prognostic utility of these scores was determined by generating an ROC curve for survival at 30 days, 90 days, 6 months, and 1 year. Results and Conclusions. All 5 scores, with the exception of the CP score, have a similar accuracy in predicting short-term prognosis. However, they are uniformly poor in predicting longer-term survival, with AUROC not exceeding 0.74. The CP score is a very poor predictor of survival in both the short and long term. Abstinence from alcohol was significantly (p < 0.05) associated with survival at 1 year.

  5. Parthenium dermatitis severity score to assess clinical severity of disease

    OpenAIRE

    Kaushal K Verma; Arika Bansal; Neetu Bhari; Gomathy Sethuraman

    2017-01-01

    Background: Parthenium dermatitis is the most common type of airborne contact dermatitis in India. It is a chronic disease of a remitting and relapsing course with significant morbidity and distress, but there is no scoring system to assess its severity. Aim: To design a scoring system for the assessment of clinical severity of disease in Parthenium dermatitis and to use this scoring system in various studies to determine its sensitivity, specificity, and reproducibility. Methods and Results:...

  6. Sampling time error in EuroSCORE II.

    Science.gov (United States)

    Poullis, Michael; Fabri, Brian; Pullan, Mark; Chalmers, John

    2012-05-01

    Seasonal variation in mortality after cardiac surgery exists. EuroSCORE II accrued data over a 12-week period from May to July 2010. We investigated whether the accrual period for EuroSCORE II had a different mortality rate compared with the rest of the year. We found in a study population of 18,706 that the accrual period of EuroSCORE II may introduce bias into the predicted mortality, potentially reducing the accuracy of the new model.

  7. The Alvarado score for predicting acute appendicitis: a systematic review

    Science.gov (United States)

    2011-01-01

    Background The Alvarado score can be used to stratify patients with symptoms of suspected appendicitis; the validity of the score in certain patient groups and at different cut points is still unclear. The aim of this study was to assess the discrimination (diagnostic accuracy) and calibration performance of the Alvarado score. Methods A systematic search of validation studies in Medline, Embase, DARE and The Cochrane library was performed up to April 2011. We assessed the diagnostic accuracy of the score at the two cut-off points: score of 5 (1 to 4 vs. 5 to 10) and score of 7 (1 to 6 vs. 7 to 10). Calibration was analysed across low (1 to 4), intermediate (5 to 6) and high (7 to 10) risk strata. The analysis focused on three sub-groups: men, women and children. Results Forty-two studies were included in the review. In terms of diagnostic accuracy, the cut-point of 5 was good at 'ruling out' admission for appendicitis (sensitivity 99% overall, 96% men, 99% women, 99% children). At the cut-point of 7, recommended for 'ruling in' appendicitis and progression to surgery, the score performed poorly in each subgroup (specificity overall 81%, men 57%, women 73%, children 76%). The Alvarado score is well calibrated in men across all risk strata (low RR 1.06, 95% CI 0.87 to 1.28; intermediate 1.09, 0.86 to 1.37 and high 1.02, 0.97 to 1.08). The score over-predicts the probability of appendicitis in children in the intermediate and high risk groups and in women across all risk strata. Conclusions The Alvarado score is a useful diagnostic 'rule out' score at a cut point of 5 for all patient groups. The score is well calibrated in men, inconsistent in children and over-predicts the probability of appendicitis in women across all strata of risk. PMID:22204638
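    The sensitivity and specificity figures quoted above come from dichotomizing the score at a cut-point. A generic sketch of that calculation, with made-up data rather than the review's:

```python
def sens_spec(scores, diseased, cutoff):
    """scores: per-patient score; diseased: per-patient True/False label.
    A patient is test-positive when score >= cutoff. Returns
    (sensitivity, specificity)."""
    tp = sum(1 for s, d in zip(scores, diseased) if d and s >= cutoff)
    fn = sum(1 for s, d in zip(scores, diseased) if d and s < cutoff)
    tn = sum(1 for s, d in zip(scores, diseased) if not d and s < cutoff)
    fp = sum(1 for s, d in zip(scores, diseased) if not d and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)
```

    Lowering the cutoff raises sensitivity at the cost of specificity, which is why a cut-point of 5 works for ruling out and a cut-point of 7 is the candidate for ruling in.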

  8. Beyond Statistics: The Economic Content of Risk Scores.

    Science.gov (United States)

    Einav, Liran; Finkelstein, Amy; Kluender, Raymond; Schrimpf, Paul

    2016-04-01

    "Big data" and statistical techniques to score potential transactions have transformed insurance and credit markets. In this paper, we observe that these widely-used statistical scores summarize a much richer heterogeneity, and may be endogenous to the context in which they get applied. We demonstrate this point empirically using data from Medicare Part D, showing that risk scores confound underlying health and endogenous spending response to insurance. We then illustrate theoretically that when individuals have heterogeneous behavioral responses to contracts, strategic incentives for cream skimming can still exist, even in the presence of "perfect" risk scoring under a given contract.

  9. Anticipating pulmonary complications after thoracotomy: the FLAM Score

    Directory of Open Access Journals (Sweden)

    Anziani Marylene

    2006-10-01

    Full Text Available Abstract Objective: Pulmonary complications after thoracotomy are the result of progressive changes in the respiratory status of the patient. A multifactorial score (the FLAM score) was developed to identify postoperative patients at higher risk for pulmonary complications at least 24 hours before the clinical diagnosis. Methods: The FLAM score, created in 2002, is based on 7 parameters (dyspnea, chest X-ray, delivered oxygen, auscultation, cough, and quality and quantity of bronchial secretions). To validate the FLAM score, we prospectively calculated scores during the first postoperative week in 300 consecutive patients submitted to posterolateral thoracotomy. Results: During the study, 60 patients (20%) developed pulmonary complications during the postoperative period. The FLAM score progressively increased in complicated patients until the fourth postoperative day (mean 13.5 ± 11.9). FLAM scores in patients with complications were significantly higher (p …). Conclusion: Changes in FLAM score were evident at least 24 hours before the clinical diagnosis of pulmonary complications. The FLAM score can be used to categorize patients according to risk of respiratory morbidity and mortality and could be a useful tool in the postoperative management of patients undergoing thoracotomy.

  10. A NOTE ON INCONSISTENCY OF THE SCORE TEST

    Directory of Open Access Journals (Sweden)

    Sumathi K

    2010-12-01

    Full Text Available The score test proposed by Rao (1947) has been widely used in recent years for data analysis and model building because of its simplicity. However, in practice it has been found that the value of the score test statistic can become negative. Freedman (2007) discussed some of the theoretical reasons for this inconsistency of the score test and observed that the test was inconsistent when the observed Fisher information matrix was used rather than the expected Fisher information matrix. The present paper is an attempt to demonstrate the inconsistency of the score test in terms of the power function.
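    For a concrete case, Rao's score statistic for a binomial proportion takes a closed form. With the expected Fisher information, as below, the statistic is a squared form divided by a positive quantity and cannot go negative; the negativity discussed above arises when the observed information is substituted:

```python
def score_statistic(x, n, p0):
    """Rao's score test of H0: p = p0 for x successes in n binomial trials,
    using expected information: U(p0)^2 / I(p0) with
    U = (x - n p0) / (p0 (1 - p0)) and I = n / (p0 (1 - p0))."""
    u = (x - n * p0) / (p0 * (1.0 - p0))        # score function at p0
    info = n / (p0 * (1.0 - p0))                # expected Fisher information
    return u * u / info    # simplifies to (x - n p0)^2 / (n p0 (1 - p0))
```

    Under H0 the statistic is asymptotically chi-squared with 1 degree of freedom, so values above about 3.84 reject at the 5% level.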

  11. Building a Scoring Model for Small and Medium Enterprises

    Directory of Open Access Journals (Sweden)

    Răzvan Constantin CARACOTA

    2010-09-01

    Full Text Available The purpose of the paper is to produce a scoring model for small and medium enterprises seeking financing through a bank loan. To analyze the loan application, the scoring system developed for companies comprises two parts: scoring of quantitative factors and scoring of qualitative factors. We estimated the probability of default using logistic regression. The regression coefficients were determined with a solver in Excel, using five ratios as input data. Analyses and simulations were conducted on a sample of 113 companies, all accepted for funding. Based on financial information obtained over two years, 2007 and 2008, we could establish and assess the default value.
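    The scoring idea can be sketched as a logistic model mapping financial ratios to a probability of default. The coefficient values and the acceptance cap below are placeholders of our own, not the paper's fitted model:

```python
import math

def probability_of_default(ratios, coefficients, intercept):
    """Logistic link: PD = 1 / (1 + exp(-(intercept + sum coef * ratio)))."""
    z = intercept + sum(c * r for c, r in zip(coefficients, ratios))
    return 1.0 / (1.0 + math.exp(-z))

def decision(pd, max_pd=0.10):
    """Accept the loan application when the estimated probability of
    default falls below a policy cap (illustrative threshold)."""
    return "accept" if pd < max_pd else "review"
```

    In practice the coefficients are fitted by maximum likelihood on historical defaults, and the qualitative factors enter either as dummy variables or as a separate additive score.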

  12. Validation of the Danish version of Oxford Shoulder Score

    DEFF Research Database (Denmark)

    Frich, Lars Henrik; Noergaard, Peter Moensted; Brorson, Stig

    2011-01-01

    The Oxford Shoulder Score (OSS) is a patient-administered condition-specific questionnaire for patients with degenerative or inflammatory shoulder disease. The purpose of this study was to validate a Danish translation of the OSS and to compare it with the Constant Score (CS).

  13. Prognostic scores in brain metastases from breast cancer

    Directory of Open Access Journals (Sweden)

    Astner Sabrina T

    2009-04-01

    Full Text Available Abstract Background: Prognostic scores might be useful tools both in clinical practice and in clinical trials, where they can be used as stratification parameters. The available scores for patients with brain metastases have never been tested specifically in patients with primary breast cancer, so it is unknown which score is most appropriate for these patients. Methods: Five previously published prognostic scores were evaluated in a group of 83 patients with brain metastases from breast cancer. All patients had been treated with whole-brain radiotherapy with or without radiosurgery or surgical resection. In addition, it was tested whether the parameters that form the basis of these scores actually have a prognostic impact in this biologically distinct group of brain metastases patients. Results: The scores that performed best were the recursive partitioning analysis (RPA) classes and the score index for radiosurgery (SIR). However, disagreement was found between the parameters that form the basis of these scores and those that determine survival in the present group of patients and in many reported data from the literature on brain metastases from breast cancer. With the four statistically significant prognostic factors identified here, a 3-tiered score can be created that performs slightly better than RPA and SIR. In addition, a 4-tiered score is also possible, which performs better than the three previous 4-tiered scores, including the graded prognostic assessment (GPA) score and the basic score for brain metastases (BSBM). Conclusion: A variety of prognostic models describe the survival of patients with brain metastases from breast cancer to a more or less satisfactory degree. However, the standard brain metastases scores might not fully appreciate the unique biology and time course of this disease compared with, for example, lung cancer. It appears possible that inclusion of emerging prognostic factors will improve the results and allow for development and validation…

  14. Addiction Severity Index (ASI) summary scores: comparison of the Recent Status Scores of the ASI-6 and the Composite Scores of the ASI-5.

    Science.gov (United States)

    Denis, Cécile M; Cacciola, John S; Alterman, Arthur I

    2013-01-01

    The characteristics and the validity of the Recent Status Scores (RSSs), the new summary scores generated by the sixth version of the Addiction Severity Index (ASI-6), are compared to the fifth version of the ASI summary scores, the Composite Scores (CSs). A sample of 82 randomly selected patients from substance abuse treatment programs were interviewed with the ASI-6, the ASI-5 and were administered a validity battery of questionnaires that included measures corresponding to each of the ASI domains. Each ASI-6 RSS was significantly correlated with its corresponding ASI-5 CS. The intercorrelations among the RSSs are low and none of these correlations were statistically different from the intercorrelations among CSs. In five of the seven areas, the ASI-6 RSSs were more highly correlated to the corresponding validity measures than were the ASI-5 CSs. The ASI-6 offers more comprehensive content in its scales than do those derived with earlier ASIs.

  15. Evaluation of a Comprehensive Delivery Room Neonatal Resuscitation and Adaptation Score (NRAS) Compared to the Apgar Score: A Pilot Study.

    Science.gov (United States)

    Jurdi, Shadi R; Jayaram, Archana; Sima, Adam P; Hendricks Muñoz, Karen D

    2015-01-01

    This study evaluated the interrater reliability and perceived importance of components of a developed neonatal adaptation score, the Neonatal Resuscitation Adaptation Score (NRAS), for evaluation of resuscitation need in the delivery room for extremely premature to term infants. As with the Apgar score, the highest NRAS score was 10, but greater weight was given to respiratory and cardiovascular parameters. Provider (N = 17) perception and scoring patterns were recorded for 5 clinical scenarios of gestational ages 23 to 40 weeks, with NRAS and Apgar scores documented at 1 and 5 minutes. Providers assessed the tool twice within a 1-month interval. The NRAS showed superior interrater reliability (p …) compared with the Apgar score. These findings identify an objective tool for resuscitation assessment of infants, especially those of smaller gestational age, allowing for greater discrimination of postbirth transition in the delivery room.

  16. Interobserver agreement of Gleason score and modified Gleason score in needle biopsy and in surgical specimen of prostate cancer

    Directory of Open Access Journals (Sweden)

    Sergio G. Veloso

    2007-10-01

    Full Text Available INTRODUCTION: The Gleason score, which has a high interobserver variability, is used to classify prostate cancer. The most recent consensus valued the tertiary Gleason pattern and recommended its use in the final score of needle biopsies (modified Gleason score). This pattern is considered to be of high prognostic value in surgical specimens. This study emphasized the evaluation of the agreement of the modified score in needle biopsies and in surgical specimens, as well as the interobserver variability of this score. MATERIALS AND METHODS: Three pathologists evaluated the slides of needle biopsies and surgical specimens of 110 patients, reporting primary, secondary and tertiary Gleason patterns; traditional and modified Gleason scores were then calculated. The Kappa test (K) assessed the interobserver agreement and the agreement between the traditional and modified scores of the biopsy and of the surgical specimen. RESULTS: Interobserver agreement in the biopsy was K = 0.36 and K = 0.35, and in the surgical specimen it was K = 0.46 and K = 0.36, for the traditional and modified scores, respectively. The tertiary Gleason grade was found in 8%, 0% and 2% of the biopsies and in 8%, 0% and 13% of the surgical specimens, according to observers 1, 2 and 3, respectively. When evaluating the agreement of the traditional and modified Gleason scores in needle biopsy with both scores of the surgical specimen, a similar agreement was found through Kappa. CONCLUSION: Contrary to what was expected, the modified Gleason score was not superior in the agreement between the biopsy score and the specimen score, or in interobserver reproducibility, in this study.
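    The agreement statistic used above, Cohen's kappa, compares observed agreement with the agreement expected by chance from each rater's marginal frequencies. A generic two-rater sketch (it assumes the raters do not agree perfectly, so the denominator is nonzero):

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters. r1, r2: equal-length lists of
    category labels. Returns (observed - expected) / (1 - expected)."""
    n = len(r1)
    categories = set(r1) | set(r2)
    observed = sum(1 for a, b in zip(r1, r2) if a == b) / n
    # chance agreement from the product of the raters' marginal frequencies
    expected = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return (observed - expected) / (1.0 - expected)
```

    Values around 0.36-0.46, as reported above, are conventionally read as fair-to-moderate agreement; for ordered scores like Gleason, a weighted kappa that credits near-misses is also common.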

  17. The APPLE Score – A Novel Score for the Prediction of Rhythm Outcomes after Repeat Catheter Ablation of Atrial Fibrillation

    Science.gov (United States)

    Kornej, Jelena; Hindricks, Gerhard; Arya, Arash; Sommer, Philipp; Husser, Daniela; Bollmann, Andreas

    2017-01-01

    Background Arrhythmia recurrences after catheter ablation occur in up to 50% of patients within one year, but their prediction remains challenging. Recently, we developed a novel score for the prediction of rhythm outcomes after single AF ablation, demonstrating superiority to other scores. The current study was performed to 1) prove the predictive value of the APPLE score in patients undergoing repeat AF ablation and 2) compare it with the CHADS2 and CHA2DS2-VASc scores. Methods Rhythm outcomes between 3 and 12 months after AF ablation were documented. The APPLE score (one point each for Age >65 years, Persistent AF, imPaired eGFR …) was calculated; … patients from the Leipzig Heart Center AF Ablation Registry (60±10 years, 65% male, 70% paroxysmal AF) undergoing repeat AF catheter ablation were included. Arrhythmia recurrences were observed in 133 patients (35%). While the CHADS2 (AUC 0.577, p = 0.037) and CHA2DS2-VASc scores (AUC 0.590, p = 0.015) demonstrated low predictive value, the APPLE score showed better prediction of arrhythmia recurrences (AUC 0.617, p = 0.002) than the other scores (both p<0.001). Compared to patients with an APPLE score of 0, the risk (OR) for arrhythmia recurrences was 2.9, 3.0 and 6.0 (all p<0.01) for APPLE scores of 1, 2, or ≥3, respectively. Conclusions The novel APPLE score is superior to the CHADS2 and CHA2DS2-VASc scores for prediction of rhythm outcomes after repeat AF catheter ablation. It may be helpful to identify patients with low, intermediate or high risk for recurrences after a repeat procedure. PMID:28085921

  18. The Effect of Logical Choice Weight and Corrected Scoring Methods on Multiple Choice Agricultural Science Test Scores

    Directory of Open Access Journals (Sweden)

    B. K. Ajayi

    2012-12-01

    Full Text Available The study examined the effect of logical choice weight and corrected scoring methods on multiple-choice Agricultural Science test scores. The study also investigated the interaction effect of logical choice weight and corrected scoring methods across schools and school types. The researcher used a combination of a survey design and a one-shot experimental design. The sample for the study consisted of 600 students selected by stratified random sampling techniques in south-western Nigeria. Overall student performance in percentages and correlations were analyzed. Hypotheses were generated and tested at the 0.05 level of significance. The study revealed a significant difference in the academic performance of students under the logical choice weight and corrected scoring methods on multiple-choice Agricultural Science test scores. The results also showed no interaction effect of the two scoring methods with school type or school location on multiple-choice Agricultural Science tests. The study revealed that logical choice weight was the better method for scoring students' scripts in multiple-choice Agricultural Science tests. On the basis of these findings, logical choice weight should be introduced to teachers for use in the classroom as a new method of scoring multiple-choice Agricultural Science tests; the method is recommended to the Ministry of Education, the Examination Division, and junior secondary schools for scoring JSS 3 multiple-choice tests. Examination bodies such as the West African Examinations Council (WAEC), the National Examinations Council (NECO), and the Joint Admissions and Matriculation Board (JAMB) should adopt logical choice weight for scoring multiple-choice tests. The method could also be used in tertiary institutions for post-JAMB Unified Matriculation Examination (UME) tests. It is also…

  19. How to calculate an MMSE score from a MODA score (and vice versa) in patients with Alzheimer's disease.

    Science.gov (United States)

    Cazzaniga, R; Francescani, A; Saetti, C; Spinnler, H

    2003-11-01

    The aim of the present study was to provide a statistically sound way of reciprocally converting scores between the mini-mental state examination (MMSE) and the Milan overall dementia assessment (MODA). A consecutive series of 182 patients with "probable" Alzheimer's disease was examined with both tests. MODA and MMSE scores proved to be highly correlated. A formula for converting between MODA and MMSE scores was generated.
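    A score-conversion formula of this kind is typically a least-squares line fitted to paired scores, applied in one direction and inverted for the other. The sketch below uses invented paired values purely for illustration, not the paper's 182-patient data or its published formula:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit ys ~ slope * xs + intercept."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return slope, ybar - slope * xbar

def convert(score, slope, intercept):
    """Map one test's score onto the other's scale via the fitted line."""
    return slope * score + intercept
```

    The inverse conversion uses score = (y - intercept) / slope; in practice, converted values are rounded and clipped to each test's valid range.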

  20. Field evaluation of broiler gait score using different sampling methods

    Directory of Open Access Journals (Sweden)

    AFS Cordeiro

    2009-09-01

    Full Text Available Brazil is today the world's largest broiler meat exporter; however, in order to keep this position, it must comply with welfare regulations while maintaining low production costs. Locomotion problems restrain bird movements, limiting their access to drinking and feeding equipment, and therefore their survival and productivity. The objective of this study was to evaluate locomotion deficiency in broiler chickens reared under stressful temperature conditions using three different sampling methods on birds of three different ages. The experiment consisted of determining the gait score of 28, 35, 42 and 49-day-old broilers using three different gait scoring methods: M1, birds were randomly selected, enclosed in a circle, and then stimulated to walk out of the circle; M2, ten birds were randomly selected and gait scored; and M3, birds were randomly selected, enclosed in a circle, and then observed while walking away from the circle without any stimulus to walk. Environmental temperature, relative humidity, and light intensity inside the poultry houses were recorded. No evidence of interaction between scoring method and age was found; however, both method and age influenced gait score. Gait score was found to be lower at 28 days of age. The evaluation using ten randomly selected birds within the house was the method that presented the least reliable results. Gait scores obtained when birds were stimulated to walk were lower than when they were not stimulated, independently of age. The gait scores obtained with the three tested methods and ages were higher than those considered acceptable; the highest frequency of normal gait score (0) represented 50% of the flock. These results may be related to heat stress during rearing. Average gait score increased with average ambient temperature, relative humidity, and light intensity. The evaluation of gait score to detect locomotion problems of broilers under rearing conditions seems subjective and…

  1. External validation of the HIT Expert Probability (HEP) score.

    Science.gov (United States)

    Joseph, Lee; Gomes, Marcelo P V; Al Solaiman, Firas; St John, Julie; Ozaki, Asuka; Raju, Manjunath; Dhariwal, Manoj; Kim, Esther S H

    2015-03-01

    The diagnosis of heparin-induced thrombocytopenia (HIT) can be challenging. The HIT Expert Probability (HEP) Score has recently been proposed to aid in the diagnosis of HIT. We sought to externally and prospectively validate the HEP score. We prospectively assessed pre-test probability of HIT for 51 consecutive patients referred to our Consultative Service for evaluation of possible HIT between August 1, 2012 and February 1, 2013. Two Vascular Medicine fellows independently applied the 4T and HEP scores for each patient. Two independent HIT expert adjudicators rendered a diagnosis of HIT likely or unlikely. The median (interquartile range) 4T and HEP scores were 4.5 (3.0, 6.0) and 5.0 (3.0, 8.5), respectively. There were no significant differences between areas under the receiver-operating characteristic curves of the 4T and HEP scores against the gold standard, confirmed HIT [defined as positive serotonin release assay and positive anti-PF4/heparin ELISA] (0.74 vs 0.73, p = 0.97). A HEP score ≥ 2 was 100 % sensitive and 16 % specific for determining the presence of confirmed HIT, while a 4T score > 3 was 93 % sensitive and 35 % specific. In conclusion, the HEP and 4T scores are excellent screening pre-test probability models for HIT; however, in this prospective validation study, test characteristics for the diagnosis of HIT based on confirmatory laboratory testing and expert opinion were similar. Given the complexity of the HEP scoring model compared to that of the 4T score, further validation of the HEP score is warranted prior to widespread clinical acceptance.
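The cutoff-based test characteristics reported above (e.g. HEP ≥ 2: 100 % sensitive, 16 % specific) follow from a standard confusion-matrix calculation. The sketch below illustrates that calculation; the scores and adjudicated HIT labels are invented for illustration, not the study's patients.

```python
# Minimal sketch: sensitivity and specificity of a pre-test probability
# score dichotomized at a cutoff, as done for the HEP (>= 2) and 4T (> 3)
# thresholds. Data are hypothetical.

def sens_spec(scores, has_hit, cutoff):
    """Classify score >= cutoff as 'HIT likely'; return (sensitivity, specificity)."""
    tp = sum(1 for s, y in zip(scores, has_hit) if s >= cutoff and y)
    fn = sum(1 for s, y in zip(scores, has_hit) if s < cutoff and y)
    tn = sum(1 for s, y in zip(scores, has_hit) if s < cutoff and not y)
    fp = sum(1 for s, y in zip(scores, has_hit) if s >= cutoff and not y)
    return tp / (tp + fn), tn / (tn + fp)

scores  = [1, 2, 3, 5, 8, 9, 0, 4, 7, 6]   # hypothetical HEP scores
has_hit = [0, 0, 0, 0, 1, 1, 0, 0, 1, 0]   # adjudicated HIT status
sens, spec = sens_spec(scores, has_hit, cutoff=2)
```

A low cutoff like 2 catches every true case here (sensitivity 1.0) at the price of many false positives, mirroring the screening trade-off described in the abstract.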

  2. A combinatorial scoring function for protein-RNA docking.

    Science.gov (United States)

    Zhang, Zhao; Lu, Lin; Zhang, Yue; Hua Li, Chun; Wang, Cun Xin; Zhang, Xiao Yi; Tan, Jian Jun

    2017-04-01

    Protein-RNA docking is still an open question. One of the main challenges is to develop an effective scoring function that can discriminate near-native structures from incorrect ones. To address the problem, we constructed a knowledge-based residue-nucleotide pairwise potential, with secondary structure information considered, for nonribosomal protein-RNA docking. Here we developed a weighted combined scoring function, RpveScore, that consists of the pairwise potential and six physics-based energy terms. The weights were optimized using the multiple linear regression method by fitting the scoring function to L_rmsd for the bound docking decoys from Benchmark II. The scoring functions were tested on 35 unbound docking cases. The results show that the scoring function RpveScore including all terms performs best. RpveScore was also compared with the statistical mechanics-based potential ITScore-PR and the united-atom-based statistical potentials QUASI-RNP and DARS-RNP. The success rate of RpveScore is 71.6% for the top 1000 structures, and the number of cases where a near-native structure is ranked in the top 30 is 25 out of 35. For 32 systems (91.4%), RpveScore can find a binding mode in the top 5 that has no less than 50% native interface residues on the protein and native interface nucleotides on the RNA. Additionally, it was found that the long-range electrostatic attractive energy plays an important role in distinguishing near-native structures from incorrect ones. This work can be helpful for the development of protein-RNA docking methods and for the understanding of protein-RNA interactions. The RpveScore program is available to the public at http://life.bjut.edu.cn/kxyj/kycg/2017116/14845362285362368_1.html Proteins 2017; 85:741-752. © 2016 Wiley Periodicals, Inc.
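The weight-optimization step described above amounts to an ordinary least-squares fit of a linear combination of energy terms against a quality measure (the paper fits to L_rmsd of bound docking decoys). The sketch below shows that fit on synthetic data; the term matrix, true weights, and noise level are all invented for illustration.

```python
# Sketch: fitting combination weights for a scoring function by multiple
# linear regression, then scoring a decoy with the weighted sum.
import numpy as np

rng = np.random.default_rng(0)
n_decoys, n_terms = 50, 4
terms = rng.normal(size=(n_decoys, n_terms))   # e.g. pairwise potential + energy terms
true_w = np.array([2.0, -1.0, 0.5, 3.0])       # hypothetical "ideal" weights
l_rmsd = terms @ true_w + rng.normal(scale=0.01, size=n_decoys)

# Least-squares estimate of the weights (no intercept in this sketch)
w, *_ = np.linalg.lstsq(terms, l_rmsd, rcond=None)

def combined_score(x, w):
    """Weighted sum of the energy terms for one decoy."""
    return float(x @ w)
```

With a well-conditioned design and low noise, the fitted weights recover the generating ones closely; in practice the fit quality depends on how linearly the terms relate to L_rmsd.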

  3. Worthing Physiological Score vs Revised Trauma Score in Outcome Prediction of Trauma patients; a Comparative Study

    Science.gov (United States)

    Nakhjavan-Shahraki, Babak; Yousefifard, Mahmoud; Hajighanbari, Mohammad Javad; Karimi, Parviz; Baikpour, Masoud; Mirzay Razaz, Jalaledin; Yaseri, Mehdi; Shahsavari, Kavous; Mahdizadeh, Fatemeh; Hosseini, Mostafa

    2017-01-01

    Introduction: Awareness about the outcome of trauma patients in the emergency department (ED) has become a topic of interest. Accordingly, the present study aimed to compare the revised trauma score (RTS) and worthing physiological scoring system (WPSS) in predicting in-hospital mortality and poor outcome of trauma patients. Methods: In this comparative study trauma patients brought to five EDs in different cities of Iran during the year 2016 were included. After data collection, discriminatory power and calibration of the models were assessed and compared using STATA 11. Results: 2148 patients with the mean age of 39.50±17.27 years were included (75.56% males). The AUC of RTS and WPSS models for prediction of mortality were 0.86 (95% CI: 0.82-0.90) and 0.91 (95% CI: 0.87-0.94), respectively (p=0.006). RTS had a sensitivity of 71.54 (95% CI: 62.59-79.13) and a specificity of 97.38 (95% CI: 96.56-98.01) in prediction of mortality. These measures for the WPSS were 87.80 (95% CI: 80.38-92.78) and 83.45 (95% CI: 81.75-85.04), respectively. The AUC of RTS and WPSS in predicting poor outcome were 0.81 (95% CI: 0.77-0.85) and 0.89 (95% CI: 0.85-0.92), respectively (p<0.0001). Conclusion: The findings showed a higher prognostic value for the WPSS model in predicting mortality and severe disabilities in trauma patients compared to the RTS model. Both models had good overall performance in prediction of mortality and poor outcome. PMID:28286838

  4. Comparison between needle biopsy and radical prostatectomy samples in assessing Gleason score and modified Gleason score in prostatic adenocarcinomas

    Directory of Open Access Journals (Sweden)

    Banu DOĞAN GÜN

    2007-01-01

    Histologic grading is an important predictor of prostatic disease stage and prognosis. We aimed to assess the degree of concordance between pathologic characteristics of the specimens obtained from biopsy and radical prostatectomy materials. Gleason scores and modified Gleason scores calculated for 25 cases of prostatic adenocarcinoma from both needle biopsy and radical prostatectomy specimens were analyzed. Mean Gleason scores for biopsy and radical specimens were 6.4 (SD: ±0.7) and 6.64 (SD: ±1.3); the corresponding modified Gleason scores were 7.32 (SD: ±1.43) and 7.32 (SD: ±0.98), respectively. The Gleason scores of biopsy and radical prostatectomy specimens were identical in 48% (12/25) of the cases, while 32% (8/25) of the biopsy specimens were over- and 20% (5/25) of them were undergraded. When assessing modified Gleason scores, the exact degree of concordance of biopsy specimens with radical prostatectomy materials was 56% (14/25); of the 11 (44%) cases not correlated exactly, 6 (24%) were over- and 5 (20%) were undergraded. When the exact, over- and underestimated scores of the Gleason and modified Gleason grading systems were compared statistically, no difference between the two systems was seen (p>0.05). Overgrading errors were found to be more frequent than undergrading errors for both of the scoring systems. Using either the modified Gleason or traditional Gleason scoring

  5. RIASEC Interest and Confidence Cutoff Scores: Implications for Career Counseling

    Science.gov (United States)

    Bonitz, Verena S.; Armstrong, Patrick Ian; Larson, Lisa M.

    2010-01-01

    One strategy commonly used to simplify the joint interpretation of interest and confidence inventories is the use of cutoff scores to classify individuals dichotomously as having high or low levels of confidence and interest, respectively. The present study examined the adequacy of cutoff scores currently recommended for the joint interpretation…

  6. Score normalization using logistic regression with expected parameters

    NARCIS (Netherlands)

    Aly, Robin

    2014-01-01

    State-of-the-art score normalization methods use generative models that rely on sometimes unrealistic assumptions. We propose a novel parameter estimation method for score normalization based on logistic regression. Experiments on the Gov2 and CluewebA collection indicate that our method is consistent

  7. Examining Exam Reviews: A Comparison of Exam Scores and Attitudes

    Science.gov (United States)

    Hackathorn, Jana; Cornell, Kathryn; Garczynski, Amy M.; Solomon, Erin D.; Blankmeyer, Katheryn E.; Tennial, Rachel E.

    2012-01-01

    Instructors commonly use exam reviews to help students prepare for exams and to increase student success. The current study compared the effects of traditional, trivia, and practice test-based exam reviews on actual exam scores, as well as students' attitudes toward each review. Findings suggested that students' exam scores were significantly…

  8. Evaluating the Predictive Validity of Graduate Management Admission Test Scores

    Science.gov (United States)

    Sireci, Stephen G.; Talento-Miller, Eileen

    2006-01-01

    Admissions data and first-year grade point average (GPA) data from 11 graduate management schools were analyzed to evaluate the predictive validity of Graduate Management Admission Test[R] (GMAT[R]) scores and the extent to which predictive validity held across sex and race/ethnicity. The results indicated GMAT verbal and quantitative scores had…

  9. Examining Classification Criteria: A Comparison of Three Cut Score Methods

    Science.gov (United States)

    DiStefano, Christine; Morgan, Grant

    2011-01-01

    This study compared 3 different methods of creating cut scores for a screening instrument, T scores, receiver operating characteristic curve (ROC) analysis, and the Rasch rating scale method (RSM), for use with the Behavioral and Emotional Screening System (BESS) Teacher Rating Scale for Children and Adolescents (Kamphaus & Reynolds, 2007).…

  10. Calcium score of small coronary calcifications on multidetector computed tomography

    DEFF Research Database (Denmark)

    Groen, J M; Kofoed, K F; Zacho, M;

    2013-01-01

    Multi detector computed tomography (MDCT) underestimates the coronary calcium score as compared to electron beam tomography (EBT). Therefore clinical risk stratification based on MDCT calcium scoring may be inaccurate. The aim of this study was to assess the feasibility of a new phantom which ena...

  12. Nine equivalents of nursing manpower use score (NEMS)

    NARCIS (Netherlands)

    Miranda, DR; Moreno, R; Iapichino, G

    1997-01-01

    Objectives: To develop a simplified Therapeutic Intervention Scoring System (TISS) based on the TISS-28 items and to validate the new score in an independent database. Design: Retrospective statistical analysis of a database and a prospective multicentre study. Setting: Development in the database

  13. Automatically Scoring Short Essays for Content. CRESST Report 836

    Science.gov (United States)

    Kerr, Deirdre; Mousavi, Hamid; Iseli, Markus R.

    2013-01-01

    The Common Core assessments emphasize short essay constructed response items over multiple choice items because they are more precise measures of understanding. However, such items are too costly and time consuming to be used in national assessments unless a way is found to score them automatically. Current automatic essay scoring techniques are…

  14. Various scoring systems for predicting mortality in Intensive Care Unit

    African Journals Online (AJOL)

    2015-12-07

    Dec 7, 2015 ... characteristic (ROC) curve was used to determine a cut‑off value for mortality and .... present study aimed to compare the third generation scoring systems .... Doganay Z. Scoring systems for intensive care unit. In: Şahinoğlu ...

  15. A signal-to-noise approach to score normalization

    NARCIS (Netherlands)

    Arampatzis, A.; Kamps, J.; Cheung, D.; Song, I.-Y.; Chu, W.; Hu, X.; Lin, J.; Li, J.; Peng, Z.

    2009-01-01

    Score normalization is indispensable in distributed retrieval and fusion or meta-search where merging of result-lists is required. Distributional approaches to score normalization with reference to relevance, such as binary mixture models like the normal-exponential, suffer from lack of universality

  16. Group differences in the heritability of items and test scores

    NARCIS (Netherlands)

    Wicherts, J.M.; Johnson, W.

    2009-01-01

    It is important to understand potential sources of group differences in the heritability of intelligence test scores. On the basis of a basic item response model we argue that heritabilities which are based on dichotomous item scores normally do not generalize from one sample to the next. If groups

  17. Clinical Outcome Scoring of Intra-articular Calcaneal Fractures

    NARCIS (Netherlands)

    T. Schepers (Tim); M.J. Heetveld (Martin); P.G.H. Mulder (Paul); P. Patka (Peter)

    2008-01-01

    textabstractOutcome reporting of intra-articular calcaneal fractures is inconsistent. This study aimed to identify the most cited outcome scores in the literature and to analyze their reliability and validity. A systematic literature search identified 34 different outcome scores. The most cited outc

  18. Advanced Issues in Propensity Scores: Longitudinal and Missing Data

    Science.gov (United States)

    Kupzyk, Kevin A.; Beal, Sarah J.

    2017-01-01

    In order to investigate causality in situations where random assignment is not possible, propensity scores can be used in regression adjustment, stratification, inverse-probability treatment weighting, or matching. The basic concepts behind propensity scores have been extensively described. When data are longitudinal or missing, the estimation and…

  19. The effect of anxiety and depression scores of couples who ...

    African Journals Online (AJOL)

    The effect of anxiety and depression scores of couples who underwent assisted ... using a semi-structured questionnaire and the Turkish version of the State-Trait Anxiety Inventory (STAI), and Beck .... tics (age, education, marriage history and infertility) of couples ..... however, for both groups, the mean trait anxiety scores.

  20. Estimating and Using Propensity Score Analysis with Complex Samples

    Science.gov (United States)

    Hahs-Vaughn, Debbie L.; Onwuegbuzi, Anthony J.

    2006-01-01

    Propensity score analysis is one statistical technique that can be applied to observational data to mimic randomization and thus can be used to estimate causal effects in studies in which the researchers have not applied randomization. In this article the authors (a) describe propensity score methodology and (b) demonstrate its application using…

  1. Longitudinal Factor Score Estimation Using the Kalman Filter.

    Science.gov (United States)

    Oud, Johan H.; And Others

    1990-01-01

    How longitudinal factor score estimation--the estimation of the evolution of factor scores for individual examinees over time--can profit from the Kalman filter technique is described. The Kalman estimates change more cautiously over time, have lower estimation error variances, and reproduce the LISREL program latent state correlations more…

  2. Score Normalization as a Fair Grading Practice. ERIC Digest.

    Science.gov (United States)

    Winters, R. Scott

    This Digest outlines an appropriate way to handle score normalization in a fair and equitable manner. Using raw scores to calculate final grades may not entirely capture a student's true performance within a class. As variation in performance evaluation increases, so does the impact on the student's final ranking. Ideally, the distribution of…

  4. Evaluation of a Lameness Scoring System for Dairy Cows

    DEFF Research Database (Denmark)

    Thomsen, P T; Munksgaard, L; Tøgersen, F A

    2008-01-01

    Lameness is a major problem in dairy production both in terms of reduced production and compromised animal welfare. A 5-point lameness scoring system was developed based on previously published systems, but optimized for use under field conditions. The scoring system included the words "in most...

  5. "New Balls, Please!"--The Prosody of Tennis Scores

    Science.gov (United States)

    Swerts, Marc; van Wijk, Carel

    2010-01-01

    Tennis scores represent a natural language domain that offers the unique opportunity to study the effects of discourse constraints on prosody with strict control over syntactic and lexical variation. This study analyzed a set of tennis scores, such as "30-15," from live recordings of several Wimbledon and Davis Cup matches. The objective was to…

  6. Discrepancy Score Reliabilities in the WAIS-IV Standardization Sample

    Science.gov (United States)

    Glass, Laura A.; Ryan, Joseph J.; Charter, Richard A.

    2010-01-01

    In the present investigation, the authors provide internal consistency reliabilities for Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) subtest and Index discrepancy scores using the standardization sample as the data source. Reliabilities ranged from 0.55 to 0.88 for subtest discrepancy scores and 0.80 to 0.91 for Index discrepancy…

  7. Bayesian Propensity Score Analysis: Simulation and Case Study

    Science.gov (United States)

    Kaplan, David; Chen, Cassie J. S.

    2011-01-01

    Propensity score analysis (PSA) has been used in a variety of settings, such as education, epidemiology, and sociology. Most typically, propensity score analysis has been implemented within the conventional frequentist perspective of statistics. This perspective, as is well known, does not account for uncertainty in either the parameters of the…

  8. Sex and Background Factors: Effect on ASAT Scores.

    Science.gov (United States)

    Adams, Raymond J.

    1985-01-01

    Data sets from Australia were analyzed using a causal model to determine the possible causes of sex differences in ASAT scores. Observed differences could be explained in terms of differences in students' English scores, the time the students spent studying mathematics, and their confidence in success. (Author/MLW)

  9. Validation of Walk Score for estimating access to walkable amenities.

    Science.gov (United States)

    Carr, Lucas J; Dunsiger, Shira I; Marcus, Bess H

    2011-11-01

    Proximity to walkable destinations or amenities is thought to influence physical activity behaviour. Previous efforts attempting to calculate neighbourhood walkability have relied on self-report or time-intensive and costly measures. Walk Score is a novel and publicly available website that estimates neighbourhood walkability based on proximity to 13 amenity categories (eg, grocery stores, coffee shops, restaurants, bars, movie theatres, schools, parks, libraries, book stores, fitness centres, drug stores, hardware stores, clothing/music stores). The purpose of this study is to test the validity and reliability of Walk Score for estimating access to objectively measured walkable amenities. Walk Scores of 379 residential/non-residential addresses in Rhode Island were manually calculated. Geographic information systems (GIS) was used to objectively measure 4194 walkable amenities in the 13 Walk Score categories. GIS data were aggregated from publicly available data sources. Sums of amenities within each category were matched to address data, and Pearson correlations were calculated between the category sums and address Walk Scores. Significant correlations were identified between Walk Score and all categories of aggregated walkable destinations within a 1-mile buffer of the 379 residential and non-residential addresses. Test-retest reliability correlation coefficients for a subsample of 100 addresses were 1.0. These results support Walk Score as a reliable and valid measure of estimating access to walkable amenities. Walk Score may be a convenient and inexpensive option for researchers interested in exploring the relationship between access to walkable amenities and health behaviours such as physical activity.
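The validation step described above reduces to computing a Pearson correlation between each address's Walk Score and its GIS-derived amenity count within a buffer. The sketch below shows that computation on invented numbers; the address scores and counts are illustrative, not the Rhode Island data.

```python
# Sketch: Pearson correlation between Walk Scores and objectively
# measured amenity counts, as in the validation analysis.
from math import sqrt

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

walk_scores   = [95, 80, 60, 40, 20]   # hypothetical Walk Scores per address
amenity_count = [40, 30, 22, 10, 3]    # amenities within a 1-mile buffer
r = pearson_r(walk_scores, amenity_count)
```

A strongly positive r, as the study reports across all 13 amenity categories, is what supports Walk Score as a proxy for objectively measured access.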

  10. BASIC Computer Scoring Program for the Leadership Scale for Sports.

    Science.gov (United States)

    Garland, Daniel J.

    This paper describes a computer scoring program, written in Commodore BASIC, that offers an efficient approach to the scoring of the Leadership Scale for Sports (LSS). The LSS measures: (1) the preferences of athletes for specific leader behaviors from the coach; (2) the perception of athletes regarding the actual leader behavior of their coach;…

  12. Evaluating Academic Journals Using Impact Factor and Local Citation Score

    Science.gov (United States)

    Chung, Hye-Kyung

    2007-01-01

    This study presents a method for journal collection evaluation using citation analysis. Cost-per-use (CPU) for each title is used to measure cost-effectiveness with higher CPU scores indicating cost-effective titles. Use data are based on the impact factor and locally collected citation score of each title and is compared to the cost of managing…

  13. Preference score of units in the presence of ordinal data

    Energy Technology Data Exchange (ETDEWEB)

    Jahanshahloo, G.R.; Soleimani-damaneh, M. [Department of Mathematics, Teacher Training University, Tehran (Iran, Islamic Republic of); Mostafaee, A. [Department of Mathematics, North-Tehran Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of)], E-mail: mostafaee_m@yahoo.com

    2009-01-15

    This study deals with the ordinal data in the performance analysis framework and provides a weight-restricted DEA model to obtain the preference score of each unit under assessment. The obtained scores are used to rank DMUs. Furthermore, to decrease the complexity of the provided model, the number of the constraints is decreased by some linear transformations.

  14. External validation of the discharge of hip fracture patients score

    NARCIS (Netherlands)

    Vochteloo, Anne J. H.; Flikweert, Elvira R.; Tuinebreijer, Wim E.; Maier, Andrea B.; Bloem, Rolf M.; Pilot, Peter; Nelissen, Rob G. H. H.

    This paper reports the external validation of a recently developed instrument, the Discharge of Hip fracture Patients score (DHP) that predicts discharge location on admission in patients living in their own home prior to hip fracture surgery. The DHP (maximum score 100 points) was applied to 125

  15. A Human Capital Model of Educational Test Scores

    DEFF Research Database (Denmark)

    McIntosh, James; D. Munk, Martin

    measure of pure cognitive ability. We find that variables which are not closely associated with traditional notions of intelligence explain a significant proportion of the variation in test scores. This adds to the complexity of interpreting test scores and suggests that school culture, attitudes...

  16. The Relative Influence of Faculty Mobility on NJ HSPA Scores

    Science.gov (United States)

    Graziano, Dana

    2013-01-01

    In this study, the researcher examined the strength and direction of relationships between New Jersey School Report Card Variables, in particular Faculty Mobility, and 2009-2010 New Jersey High School Proficiency Assessment (HSPA) Math and Language Arts Literacy test scores. Variables found to have an influence on standardized test scores in the…

  17. Managing missing scores on the Roland Morris Disability Questionnaire

    DEFF Research Database (Denmark)

    Lauridsen, Henrik Hein

    Background and purpose: It is likely that the most common method for calculating a Roland Morris Disability Questionnaire (RMDQ) sum score is to simply ignore any unanswered questions. In contrast, the raw sum score on the Oswestry Disability Index (ODI) is converted to a 0-100 scale, with the advantage...
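The ODI-style conversion mentioned above pro-rates the raw sum over the items actually answered and expresses the result on a 0-100 scale, rather than silently treating missing items as zeros. A minimal sketch of that idea, with illustrative item values and a hypothetical per-item maximum:

```python
# Sketch: pro-rated 0-100 disability score in the presence of missing items.

def prorated_score(answers, max_per_item):
    """answers: list of item scores, or None for unanswered items.
    Returns the pro-rated 0-100 score, or None if nothing was answered."""
    answered = [a for a in answers if a is not None]
    if not answered:
        return None
    return 100.0 * sum(answered) / (len(answered) * max_per_item)

# 10 items scored 0-5 (ODI style), two left unanswered
answers = [3, 2, None, 4, 1, None, 5, 0, 2, 3]
score = prorated_score(answers, max_per_item=5)
```

The pro-rated score assumes the unanswered items would, on average, resemble the answered ones; simply ignoring them in a raw sum instead biases the total downward.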

  18. Comparison of simplified score with the revised original score for the diagnosis of autoimmune hepatitis: a new or a complementary diagnostic score?

    Science.gov (United States)

    Gatselis, Nikolaos K; Zachou, Kalliopi; Papamichalis, Panagiotis; Koukoulis, George K; Gabeta, Stella; Dalekos, George N; Rigopoulou, Eirini I

    2010-11-01

    The International Autoimmune Hepatitis Group developed a simplified score for autoimmune hepatitis. We assessed this "new scoring system" and compared it with the International Autoimmune Hepatitis Group original revised score. 502 patients were evaluated: 428 had liver diseases of various etiology [hepatitis B (n=109), hepatitis C (n=100), hepatitis D (n=4), alcoholic liver disease (n=28), non-alcoholic fatty liver disease (n=55), autoimmune cholestatic diseases (n=77), liver disorders of undefined origin (n=32) and miscellaneous hepatic disorders (n=23)], 13 had autoimmune hepatitis/overlap syndromes, 18 had autoimmune hepatitis concurrent with other liver diseases, and 43 had autoimmune hepatitis. The specificity of the simplified score was similar to that of the revised score (97% vs. 97.9%). The sensitivity in unmasking autoimmune hepatitis in autoimmune hepatitis/overlap syndromes was also similar in both systems (53.8% and 61.5%). However, the sensitivity for autoimmune hepatitis diagnosis in autoimmune hepatitis patients with concurrent liver disorders was lower with the new score (p=0.001). Liver biopsy proved to be the only independent factor for unmasking the autoimmune hepatitis component among patients (p=0.003). The simplified score is a reliable and simple tool for excluding autoimmune hepatitis. However, both systems cannot efficiently unmask the autoimmune hepatitis component in autoimmune hepatitis patients with concurrent autoimmune or non-autoimmune liver diseases. This study also strongly reiterates the importance of liver biopsy in the work-up of patients. Copyright © 2010 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  19. Score cards for standardized comparison of myocardial perfusion imaging reports

    DEFF Research Database (Denmark)

    Jensen, Julie D; Hoff, Camilla; Bouchelouche, Kirsten

    Background: When optimizing scan protocols or comparing modalities in myocardial perfusion imaging, it is necessary to compare the current method to the new method. This can be achieved by a comparison based on hard numbers such as MBF, summed rest and stress scores, total perfusion deficit, etc. However, what is of importance to the patient is the total evaluation of these scores and the weight and confidence ascribed to each by the reporting physician. We suggest a standardized method summarizing the observations and the confidence of the physician in simple scores. We tested the developed score cards in a pilot project using a training scenario where 3 observers with varying experience (1 month, 5 months and 3 years, respectively) scored static rest/stress Rb-82 PET scans. Method: 10 patients with known ischemic heart disease were included. Using the 17-segment AHA cardiac model, each patient

  20. Evaluating Damage Potential in Security Risk Scoring Models

    Directory of Open Access Journals (Sweden)

    Eli Weintraub

    2016-05-01

    A Continuous Monitoring System (CMS) model is presented, having new improved capabilities. The system is based on the actual real-time configuration of the system. Existing risk scoring models assume damage potential is estimated by the system's owner, thus rejecting the information residing in the technological configuration. The assumption underlying this research is that users are able to estimate business impacts relating to the systems' external interfaces which they use regularly in their business activities, but are unable to assess business impacts relating to internal technological components. According to the proposed model, the system's damage potential is calculated from technical information on the system's components using a directed graph. The graph is incorporated into the Common Vulnerability Scoring System (CVSS) algorithm to produce risk scoring measures. The framework presentation includes the system design, the damage-potential scoring algorithm design, and an illustration of scoring computations.
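The idea described above can be sketched as propagating a damage-potential estimate from user-facing external interfaces back through a directed graph of internal components, then weighting a vulnerability score by the result. The graph, impact values, aggregation rule, and the 0.9 exploitability factor below are all assumptions for illustration; they are not the paper's exact algorithm or the CVSS formula.

```python
# Sketch: damage potential of an internal component = business impact of
# every external interface that (transitively) depends on it, walked over
# a directed dependency graph.

def damage_potential(graph, impact, node, seen=None):
    """Sum the interface impact of `node` and of all its dependents."""
    seen = set() if seen is None else seen
    if node in seen:                      # guard against cycles / double counting
        return 0.0
    seen.add(node)
    own = impact.get(node, 0.0)
    return own + sum(damage_potential(graph, impact, m, seen)
                     for m in graph.get(node, []))

graph  = {"db": ["api"], "api": ["web"], "web": []}   # edges: component -> dependents
impact = {"web": 8.0}                                  # business impact at the interface
risk = damage_potential(graph, impact, "db") * 0.9     # 0.9: hypothetical exploitability factor
```

Even though users can only rate the "web" interface, the internal "db" component inherits its full damage potential through the graph, which is the point of using configuration data rather than owner estimates.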