Rice, J P; Saccone, N L; Corbett, J
2001-01-01
The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
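To make the statistic concrete, here is a minimal sketch (not from the article; counts invented for illustration) of the classic two-point lod score for phase-known meioses, where the lod is the log10 ratio of the pedigree likelihood at recombination fraction θ to the likelihood at θ = 0.5 (no linkage):

```python
import numpy as np

def lod_score(recombinants, meioses, theta=None):
    """Two-point lod for phase-known meioses: log10 L(theta) / L(0.5).
    With theta=None, theta is the MLE r/n restricted to [0, 0.5],
    giving the maximum lod score over theta."""
    r, n = recombinants, meioses
    if theta is None:
        theta = min(r / n, 0.5)
    def loglik(t):
        # guard the log(0) cases r == 0 and r == n
        return (r * np.log(t) if r else 0.0) + ((n - r) * np.log(1 - t) if n - r else 0.0)
    return (loglik(theta) - loglik(0.5)) / np.log(10)

# Example: 2 recombinants in 20 informative meioses.
print(round(lod_score(2, 20), 2))   # 3.2, above the classic threshold of 3
```

Because the lod is a log-likelihood ratio, pooling independent pedigrees (the sequential property mentioned above) amounts to adding their lod curves pointwise in θ.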
LOD score exclusion analyses for candidate disease susceptibility genes using case-parents design
DENG Hongwen; GAO Guimin
2006-01-01
The focus of almost all association studies of candidate genes is to test for their importance. We recently developed a LOD score approach that can be used to test against the importance of candidate genes for complex diseases and quantitative traits in random samples. As a complementary method to regular association analyses, our LOD score approach is powerful but, though more conservative, still affected by population admixture. To control the confounding effect of population heterogeneity, we develop here a LOD score exclusion analysis using the case-parents design, the basic design of the transmission disequilibrium test (TDT) approach that is immune to population admixture. In the analysis, specific genetic effects and inheritance models at candidate genes can be analyzed, and if a LOD score is ≤ −2.0, the locus can be excluded from having an effect larger than that specified. Simulations show that this approach has reasonable power to exclude a candidate gene having small genetic effects if it is not a disease susceptibility locus (DSL), with sample sizes often employed in TDT studies. Similar to association analyses with the TDT in nuclear families, our exclusion analyses are generally not affected by population admixture. The exclusion analyses may be implemented to rule out candidate genes with no or minor genetic effects as supplemental analyses for the TDT. The utility of the approach is illustrated with an application to test the importance of the vitamin D receptor (VDR) gene underlying differential risk to osteoporosis.
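Since the exclusion analysis builds on the case-parents design of the TDT, a minimal sketch of the TDT statistic itself may help; the transmission counts below are invented:

```python
from scipy.stats import chi2

def tdt(b, c):
    """McNemar-type TDT statistic: b = transmissions of the candidate allele
    from heterozygous parents to affected children, c = non-transmissions.
    Under no linkage/association, (b - c)^2 / (b + c) is ~ chi-square, 1 df."""
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

stat, p = tdt(62, 38)                       # hypothetical counts
print(f"chi2 = {stat:.2f}, p = {p:.3f}")    # chi2 = 5.76, p = 0.016
```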
Dube, M.P.; Kibar, Z.; Rouleau, G.A. [McGill Univ., Quebec (Canada)] [and others]
1997-03-01
Hereditary spastic paraplegia (HSP) is a degenerative disorder of the motor system, defined by progressive weakness and spasticity of the lower limbs. HSP may be inherited as an autosomal dominant (AD), autosomal recessive, or X-linked trait. AD HSP is genetically heterogeneous, and three loci have been identified so far: SPG3 maps to chromosome 14q, SPG4 to 2p, and SPG4a to 15q. We have undertaken linkage analysis of 21 uncomplicated AD families to the three AD HSP loci. We report significant linkage for three of our families to the SPG4 locus and exclude several families by multipoint linkage. We used linkage information from several different research teams to evaluate the statistical probability of linkage to the SPG4 locus for uncomplicated AD HSP families and established, from Bayesian statistics, the critical LOD-score value necessary for confirmation of linkage to the SPG4 locus. In addition, we calculated empirical P-values for the LOD scores obtained for all families with computer simulation methods. Power to detect significant linkage, as well as type I error probabilities, was evaluated. This combined analytical approach permitted conclusive linkage analyses on small to medium-size families, under the restrictions of genetic heterogeneity. 19 refs., 1 fig., 1 tab.
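For the empirical P-values mentioned above, the generic Monte Carlo recipe is to re-simulate families under no linkage and count how often the simulated LOD reaches the observed one; a sketch with a toy null model (20 phase-known meioses per replicate, invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lod(rng, n=20):
    """Max lod of one replicate simulated under the null (theta = 0.5)."""
    r = rng.binomial(n, 0.5)
    t = min(r / n, 0.5)
    ll = (r * np.log(t) if r else 0.0) + (n - r) * np.log(1 - t)
    return max(0.0, (ll - n * np.log(0.5)) / np.log(10))

def empirical_p(observed_lod, n_sim=10_000):
    """Add-one Monte Carlo estimator, which never reports exactly p = 0."""
    hits = sum(simulate_lod(rng) >= observed_lod for _ in range(n_sim))
    return (1 + hits) / (1 + n_sim)

print(empirical_p(3.0))   # around 2e-4 for this toy model
```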
Pragmatic Use of LOD - a Modular Approach
Treldal, Niels; Vestergaard, Flemming; Karlshøj, Jan
The concept of Level of Development (LOD) is a simple approach to specifying the requirements for the content of object-oriented models in a Building Information Modelling process. The concept has been implemented in many national and organization-specific variations and, in recent years, several... ...and reliability of deliveries along with use-case-specific information requirements provides a pragmatic approach for a LOD concept. The proposed solution combines LOD requirement definitions with Information Delivery Manual-based use case requirements to match the specific needs identified for a LOD framework...
Maximum Potential Score (MPS): An operating model for a successful customer-focused strategy.
Cabello González, José Manuel
2015-11-01
One of marketers' chief objectives is to achieve customer loyalty, which is a key factor for profitable growth. Therefore, they need to develop a strategy that attracts and maintains customers, giving them adequate motives, both tangible (prices and promotions) and intangible (personalized service and treatment), to satisfy a customer and make him loyal to the company. Finding a way to accurately measure satisfaction and customer loyalty is very important. With regard to typical Relationship Marketing measures, we can consider listening to customers, which can help to achieve a sustainable competitive advantage. Customer satisfaction surveys are essential tools for listening to customers. Short questionnaires have gained considerable acceptance among marketers as a means to achieve a customer satisfaction measure. Our research provides an indication of the benefits of a short questionnaire (one to three questions). We find that the number of questions in a survey is significantly related to participation in the survey (Net Promoter Score, or NPS). We also show that a three-question survey (Maximum Potential Score, or MPS) is more likely to have more participants than a traditional survey. Our main goal is to analyse one method as a potential predictor of customer loyalty. Using surveys, we attempt to empirically establish the causal factors in determining the satisfaction of customers. This paper describes a maximum potential operating model that captures, with a three-question survey, important elements for a successful customer-focused strategy. MPS may give us lower participation rates than NPS, but important information that helps to convert unhappy or merely satisfied customers into loyal customers.
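For reference, the one-question NPS mentioned above is computed from 0-10 "would you recommend" ratings; a minimal sketch (ratings invented):

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a 0-10 scale."""
    n = len(ratings)
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100.0 * (promoters - detractors) / n

print(net_promoter_score([10, 9, 9, 8, 7, 6, 3, 10]))   # (4 - 2) / 8 -> 25.0
```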
National Aeronautics and Space Administration — Probability Calibration by the Minimum and Maximum Probability Scores in One-Class Bayes Learning for Anomaly Detection. Guichong Li, Nathalie Japkowicz, Ian Hoffman, ...
LOD wars: The affected-sib-pair paradigm strikes back!
Farrall, M. [Wellcome Trust Centre for Human Genetics, Oxford (United Kingdom)
1997-03-01
In a recent letter, Greenberg et al. aired their concerns that the affected-sib-pair (ASP) approach was becoming excessively popular, owing to misconceptions and ignorance of the properties and limitations of both the ASP and the classic LOD-score approaches. As an enthusiast of using the ASP approach to map susceptibility genes for multifactorial traits, I would like to contribute a few comments and explanatory notes in defense of the ASP paradigm. 18 refs.
Dai, Huanping; Micheyl, Christophe
2015-05-01
Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
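The exact formula is in the article; as a sanity check one can approximate the maximum Pc by Monte Carlo, since the optimal 2AFC observer simply picks the interval with the larger likelihood ratio. A sketch for the equal-variance Gaussian case, where the analytical answer Phi(d'/sqrt(2)) is known:

```python
import numpy as np

rng = np.random.default_rng(1)

def max_pc_2afc(sample_signal, sample_noise, lr, n=200_000):
    """Monte Carlo maximum proportion correct in 2AFC: the ideal observer
    chooses the interval whose observation has the larger likelihood ratio."""
    s, m = lr(sample_signal(n)), lr(sample_noise(n))
    return np.mean(s > m) + 0.5 * np.mean(s == m)   # ties split at chance

# Equal-variance Gaussian, d' = 1: the likelihood ratio is monotone in x,
# so comparing x directly is equivalent to comparing likelihood ratios.
d = 1.0
pc = max_pc_2afc(lambda n: rng.normal(d, 1, n),
                 lambda n: rng.normal(0, 1, n),
                 lambda x: x)
print(round(pc, 3))   # ~0.760 = Phi(d / sqrt(2))
```

Swapping in non-Gaussian samplers and the corresponding likelihood-ratio function extends the same check to the situations the article targets.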
Atmospheric and Oceanic Excitations to LOD Change on Quasi-biennial Time Scales
Li-Hua Ma; De-Chun Liao; Yan-Ben Han
2006-01-01
We use the wavelet transform to study the time series of the Earth's rotation rate (length-of-day, LOD) and the axial components of atmospheric angular momentum (AAM) and oceanic angular momentum (OAM) over the period 1962-2005, and discuss the quasi-biennial oscillations (QBO) of LOD change. The results show that the QBO of LOD change varies remarkably in amplitude and phase. It was weak before 1978, then became much stronger and reached maximum values during the strong El Nino events around 1983 and 1997. Analysis of the axial AAM indicates that its QBO signals are extremely consistent with the QBOs of LOD change. During 1963-2003, the QBO variance in the axial AAM can explain about 99.0% of that of the LOD; in other words, almost all QBO signals of LOD change are excited by the axial AAM, while the weak QBO signals of the axial OAM are quite different from those of the LOD and the axial AAM in both time-dependent characteristics and magnitudes. The combined effects of the axial AAM and OAM can explain about 99.1% of the variance of the QBO in LOD change during this period.
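A hedged sketch of the kind of band-limited comparison described above (filter design, band edges and the synthetic series are illustrative choices, not the authors'):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def qbo_component(x, fs=12.0, band_months=(18, 35)):
    """Band-pass a monthly series (fs = samples per year) to the
    quasi-biennial band, here taken as periods of 18-35 months."""
    low, high = 12.0 / band_months[1], 12.0 / band_months[0]   # cycles/year
    b, a = butter(4, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, x)

def explained_variance(lod, aam):
    l, m = qbo_component(lod), qbo_component(aam)
    return 1.0 - np.var(l - m) / np.var(l)

# Synthetic check: both series share a 28-month oscillation.
t = np.arange(480) / 12.0                        # 40 years, monthly
qbo = np.sin(2 * np.pi * t / (28 / 12))
lod = qbo + 0.3 * np.sin(2 * np.pi * t)          # plus an annual term
aam = qbo + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(round(explained_variance(lod, aam), 2))    # close to 1
```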
LOD 1 VS. LOD 2 - Preliminary Investigations Into Differences in Mobile Rendering Performance
Ellul, C.; Altenbuchner, J.
2013-09-01
The increasing availability, size and detail of 3D City Model datasets has led to a challenge when rendering such data on mobile devices. Understanding the limitations to the usability of such models on these devices is particularly important given the broadening range of applications - such as pollution or noise modelling, tourism, planning, solar potential - for which these datasets and resulting visualisations can be utilized. Much 3D City Model data is created by extrusion of 2D topographic datasets, resulting in what is known as Level of Detail (LoD) 1 buildings - with flat roofs. However, in the UK the National Mapping Agency (the Ordnance Survey, OS) is now releasing test datasets to Level of Detail (LoD) 2 - i.e. including roof structures. These datasets are designed to integrate with the LoD 1 datasets provided by the OS, and provide additional detail in particular on larger buildings and in town centres. The availability of such integrated datasets at two different Levels of Detail permits investigation into the impact of the additional roof structures (and hence the display of a more realistic 3D City Model) on rendering performance on a mobile device. This paper describes preliminary work carried out to investigate this issue, for the test area of the city of Sheffield (in the UK Midlands). The data is stored in a 3D spatial database as triangles and then extracted and served as a web-based data stream which is queried by an App developed on the mobile device (using the Android environment, Java and OpenGL for graphics). Initial tests have been carried out on two dataset sizes, for the city centre and a larger area, rendering the data onto a tablet to compare results. Results of 52 seconds for rendering LoD 1 data, and 72 seconds for LoD 1 mixed with LoD 2 data, show that the impact of LoD 2 is significant.
3D Urban Visualization with LOD Techniques
Anonymous
2006-01-01
In 3D urban visualization, the large data volumes of building models are a major factor limiting delivery and browsing speed in a web-based computer system. This paper proposes a new approach based on the level of detail (LOD) technique developed for 3D visualization in computer graphics. The key idea of the LOD technique is to generalize the surface detail of objects so that they can be delivered and displayed efficiently without perceptible loss of detail. The technique has been used successfully for visualizing one or a few objects in films and other industries. However, applying it to 3D urban visualization requires an effective generalization method for urban buildings. Conventional two-dimensional (2D) generalization at different scales provides a good reference for 3D urban visualization, yet it is difficult to determine when and where to retrieve data for displaying buildings. To solve this problem, this paper defines an imaging scale point and an imaging scale region for judging when and where to get the right data for visualization. The results show that the average response time of view transformations is greatly reduced.
Mazhar A. Memon
2016-04-01
ABSTRACT Objective: To evaluate the correlation between the visual prostate score (VPSS) and maximum flow rate (Qmax) in men with lower urinary tract symptoms. Material and Methods: This is a cross-sectional study conducted at a university hospital. Sixty-seven adult male patients >50 years of age were enrolled in the study after signing an informed consent. Qmax and voided volume were recorded from the uroflowmetry graph, and VPSS was assessed at the same time. Education level was assessed in various defined groups. The Pearson correlation coefficient was computed for VPSS and Qmax. Results: Mean age was 66.1±10.1 years (median 68). The mean voided volume on uroflowmetry was 268±160 mL (median 208) and the mean Qmax was 9.6±4.96 mL/sec (median 9.0). The mean VPSS score was 11.4±2.72 (median 11.0). In the univariate linear regression analysis there was a strong negative Pearson correlation between VPSS and Qmax (r=-0.848, p<0.001). In the multiple linear regression analysis there was a significant correlation between VPSS and Qmax after adjusting for the effects of age, voided volume (V.V) and level of education. Multiple linear regression analysis of the independent variables showed no significant correlation between VPSS and age (p=0.27), level of education (p=0.941) or V.V (p=0.082). Conclusion: There is a significant negative correlation between VPSS and Qmax. The VPSS can be used in lieu of the IPSS score. Men even with limited educational background can complete the VPSS without assistance.
Turki, Turki; Roshan, Usman
2014-11-15
Programs based on hash tables and Burrows-Wheeler are very fast for mapping short reads to genomes but have low accuracy in the presence of mismatches and gaps. Such reads can be aligned accurately with the Smith-Waterman algorithm, but it can take hours and days to map millions of reads even for bacterial genomes. We introduce a GPU program called MaxSSmap with the aim of achieving comparable accuracy to Smith-Waterman but with faster runtimes. Similar to most programs, MaxSSmap identifies a local region of the genome followed by exact alignment. Instead of using hash tables or Burrows-Wheeler in the first part, MaxSSmap calculates maximum scoring subsequence scores between the read and disjoint fragments of the genome in parallel on a GPU and selects the highest scoring fragment for exact alignment. We evaluate MaxSSmap's accuracy and runtime when mapping simulated Illumina E.coli and human chromosome one reads of different lengths, with 10% to 30% mismatches and gaps, to the E.coli genome and human chromosome one. We also demonstrate applications on real data by mapping ancient horse DNA reads to modern genomes and unmapped paired reads from NA12878 in 1000 genomes. We show that MaxSSmap attains comparably high accuracy and low error to fast Smith-Waterman programs yet has much lower runtimes. We show that MaxSSmap can map reads rejected by BWA and NextGenMap with high accuracy and low error, much faster than if Smith-Waterman were used. On short read lengths of 36 and 51, both MaxSSmap and Smith-Waterman have lower accuracy than at higher lengths. On real data MaxSSmap produces many alignments with high score and mapping quality that are not given by NextGenMap and BWA. The MaxSSmap source code in CUDA and OpenCL is freely available from http://www.cs.njit.edu/usman/MaxSSmap.
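The first-stage score MaxSSmap computes per fragment is the classic maximum scoring subsequence, which a Kadane-style scan gives in linear time; a gap-free CPU sketch of the idea (scoring values invented, and the real program evaluates fragments in parallel on the GPU):

```python
def max_scoring_subsequence(read, fragment, match=2, mismatch=-1):
    """Best sum of contiguous per-position match/mismatch scores
    between a read and a genome fragment (Kadane's algorithm)."""
    best = cur = 0
    for r, g in zip(read, fragment):
        cur = max(0, cur + (match if r == g else mismatch))
        best = max(best, cur)
    return best

# Rank disjoint fragments and keep the best one for exact alignment.
genome, read, k = "ACGTACGTTTACGGACGT", "ACGGAC", 6
frags = [genome[i:i + k] for i in range(0, len(genome), k)]
print(max(range(len(frags)),
          key=lambda i: max_scoring_subsequence(read, frags[i])))   # 0
```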
An Incremental LOD Method Based on Grid and Its Application in Distributed Terrain Visualization
MA Zhaoting; LI Chengming; PAN Mao
2005-01-01
Incremental LOD can be transmitted over the network as a stream, so users on the clients can easily obtain the skeleton of the terrain without downloading all the data from the server. Detailed information for a local part can be added gradually as users zoom in, without redundant data transmission. To this end, an incremental LOD method is put forward based on the regular arrangement of the grid. The method applies to arbitrarily sized grid terrains and is not restricted to square ones with a side measuring 2^k + 1 samples. Maximum height errors are recorded when the LOD is preprocessed, and the terrain can be visualized with geometrical mipmaps to reduce the screen error.
Enhanced LOD Concepts for Virtual 3D City Models
Benner, J.; Geiger, A.; Gröger, G.; Häfele, K.-H.; Löwner, M.-O.
2013-09-01
Virtual 3D city models contain digital three-dimensional representations of city objects like buildings, streets or technical infrastructure. Because the size and complexity of these models continuously grow, a Level of Detail (LoD) concept is indispensable that effectively supports partitioning a complete model into alternative models of different complexity, and that provides metadata addressing the informational content, complexity and quality of each alternative model. After a short overview of various LoD concepts, this paper discusses the existing LoD concept of the CityGML standard for 3D city models and identifies a number of deficits. Based on this analysis, an alternative concept is developed and illustrated with several examples. It differentiates, first, between a Geometric Level of Detail (GLoD) and a Semantic Level of Detail (SLoD), and second, between the interior of a building and its exterior shell. Finally, a possible implementation of the new concept is demonstrated by means of a UML model.
Distribution of Errors Reported by LOD2 LODStats Project [Dataset]
Hoekstra, R.; Groth, P.
2013-01-01
These files can be used to plot a distribution of error types based on the LOD2 LODStats analysis of linked data published through datahub.io. The statistics show that many errors reported are the result of HTTP problems (40x and 50x codes), unknown responses, and connection errors.
LOD First Estimates In 7406 SLR San Juan Argentina Station
Pacheco, A.; Podestá, R.; Yin, Z.; Adarvez, S.; Liu, W.; Zhao, L.; Alvis Rojas, H.; Actis, E.; Quinteros, J.; Alacoria, J.
2015-10-01
In this paper we show results derived from satellite observations at the San Juan SLR station of the Felix Aguilar Astronomical Observatory (OAFA). The Satellite Laser Ranging (SLR) telescope was installed in early 2006, in accordance with an international cooperation agreement between the San Juan National University (UNSJ) and the Chinese Academy of Sciences (CAS). The SLR has been in successful operation since 2011, using NAOC SLR software for the data processing. This program was designed to calculate satellite orbits and station coordinates; in this work, however, it was used to determine LOD (Length Of Day) time series and the Earth's rotation speed.
Relation Between Equatorial Oceanic Activities and LOD Changes
郑大伟; 陈刚
1994-01-01
The time series of the length of day (LOD) and observed Pacific sea level during 1962.0-1990.0 are used to study the relation between Earth rotation and equatorial oceanic activities. The results show that (i) the sea level rose at an average rate of about 1.75±0.01 mm/a during the past 30 years; (ii) there are large-scale eastward and westward water motions in the upper equatorial Pacific zone which, according to a dynamical analysis of the angular momentum of the large-scale sea-water motion in the Pacific Ocean about the Earth's rotation axis, account for about 30% of the interannual change in the Earth's rotation rate; (iii) the interannual changes in Earth rotation also cause changes in the distribution of water mass in the equatorial Pacific and affect the formation of ENSO events. Based on these results, we give a new model for the interaction between the equatorial ocean and Earth rotation.
Automatic repair of CityGML LOD2 buildings using shrink-wrapping
Zhao, Z.; Ledoux, H.; Stoter, J.E.
2013-01-01
The LoD2 building models defined in CityGML are widely used in 3D city applications. The underlying geometry for such models is a GML solid (without interior shells), whose boundary should be a closed 2-manifold. However, this condition is often violated in practice because of the way LoD2 models are generated.
The putative old, nearby cluster Lodén 1 does not exist
Han, Eunkyu; Wright, Jason T
2016-01-01
Astronomers have access to precious few nearby, middle-aged benchmark star clusters. Within 500 pc, there are only NGC 752 and Ruprecht 147 (R147), at 1.5 and 3 Gyr respectively. The Database for Galactic Open Clusters (WEBDA) also lists Lodén 1 as a 2 Gyr cluster at a distance of 360 pc. If this is true, Lodén 1 could become a useful benchmark cluster. This work details our investigation of Lodén 1. We assembled archival astrometry (PPMXL) and photometry (2MASS, Tycho-2, APASS), and acquired medium resolution spectra for radial velocity measurements with the Robert Stobie Spectrograph (RSS) at the Southern African Large Telescope. We observed no sign of a cluster main-sequence turnoff or red giant branch amongst all stars in the field brighter than J < 11. Considering the 29 stars identified by L.O. Lodén and listed on SIMBAD as the members of Lodén 1, we found no compelling evidence of kinematic clustering in proper motion or radial velocity. Most of these candidates are A stars and...
Linked open data creating knowledge out of interlinked data : results of the LOD2 project
Bryl, Volha; Tramp, Sebastian
2014-01-01
Linked Open Data (LOD) is a pragmatic approach for realizing the Semantic Web vision of making the Web a global, distributed, semantics-based information system. This book presents an overview on the results of the research project “LOD2 -- Creating Knowledge out of Interlinked Data”. LOD2 is a large-scale integrating project co-funded by the European Commission within the FP7 Information and Communication Technologies Work Program. Commencing in September 2010, this 4-year project comprised leading Linked Open Data research groups, companies, and service providers from across 11 European countries and South Korea. The aim of this project was to advance the state-of-the-art in research and development in four key areas relevant for Linked Data, namely 1. RDF data management; 2. the extraction, creation, and enrichment of structured RDF data; 3. the interlinking and fusion of Linked Data from different sources and 4. the authoring, exploration and visualization of Linked Data.
Improving the consistency of multi-LOD CityGML datasets by removing redundancy
Biljecki, F.; Ledoux, H.; Stoter, J.E.
2014-01-01
The CityGML standard enables the modelling of some topological relationships, and the representation in multiple levels of detail (LODs). However, both concepts are rarely utilised in reality. In this paper we investigate the linking of corresponding geometric features across multiple representations.
Efficient Simplification Methods for Generating High Quality LODs of 3D Meshes
Muhammad Hussain
2009-01-01
Two simplification algorithms are proposed for automatic decimation of polygonal models and for generating their LODs. Each algorithm orders vertices according to their priority values and then removes them iteratively. To set the priority value of each vertex, exploiting the normal field of its one-ring neighborhood, we introduce a new measure of geometric fidelity that reflects well the local geometric features of the vertex. After a vertex is selected, other measures of geometric distortion, based on normal-field deviation and a distance measure, decide which of the edges incident on the vertex is collapsed to remove it. The collapsed edge is substituted with a new vertex whose position is found by minimizing the local quadric error measure. A comparison with state-of-the-art algorithms reveals that the proposed algorithms are simple to implement, are computationally more efficient, generate LODs with better quality, and preserve salient features even after drastic simplification. The methods are useful for applications such as 3D computer games and virtual reality, where the focus is on fast running time, reduced memory overhead, and high quality LODs.
Animation Strategies for Smooth Transformations Between Discrete Lods of 3d Building Models
Kada, Martin; Wichmann, Andreas; Filippovska, Yevgeniya; Hermes, Tobias
2016-06-01
The cartographic 3D visualization of urban areas has experienced tremendous progress over the last years. An increasing number of applications operate interactively in real-time and thus require advanced techniques to improve the quality and time response of dynamic scenes. The main focus of this article concentrates on the discussion of strategies for smooth transformation between two discrete levels of detail (LOD) of 3D building models that are represented as restricted triangle meshes. Because the operation order determines the geometrical and topological properties of the transformation process as well as its visual perception by a human viewer, three different strategies are proposed and subsequently analyzed. The simplest one orders transformation operations by the length of the edges to be collapsed, while the other two strategies introduce a general transformation direction in the form of a moving plane. This plane either pushes the nodes that need to be removed, e.g. during the transformation of a detailed LOD model to a coarser one, towards the main building body, or triggers the edge collapse operations used as transformation paths for the cartographic generalization.
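A minimal sketch of the simplest strategy above, ordering collapses by edge length (lazy priority queue; a real implementation would also re-queue edges incident to the merged vertex):

```python
import heapq

def collapse_order(vertices, edges):
    """Collapse the currently shortest valid edge to its midpoint,
    repeatedly; returns the sequence of (kept, removed) vertex pairs."""
    def length2(e):
        a, b = vertices[e[0]], vertices[e[1]]
        return sum((p - q) ** 2 for p, q in zip(a, b))
    heap = [(length2(e), e) for e in edges]
    heapq.heapify(heap)
    alive, order = set(range(len(vertices))), []
    while heap:
        _, (u, v) = heapq.heappop(heap)
        if u in alive and v in alive:          # skip edges already collapsed away
            vertices[u] = tuple((p + q) / 2 for p, q in zip(vertices[u], vertices[v]))
            alive.discard(v)                   # v merges into u at the midpoint
            order.append((u, v))
    return order

verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0.1)]
print(collapse_order(verts, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # [(0, 1), (2, 3)]
```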
Highly sensitive lactate biosensor by engineering chitosan/PVI-Os/CNT/LOD network nanocomposite.
Cui, Xiaoqiang; Li, Chang Ming; Zang, Jianfeng; Yu, Shucong
2007-06-15
A novel chitosan/PVI-Os (polyvinylimidazole-Os)/CNT (carbon nanotube)/LOD (lactate oxidase) network nanocomposite was constructed on a gold electrode for detection of lactate. The composite was nanoengineered from matched material components with an optimized composition ratio to produce a superior lactate sensor. Positively charged chitosan and PVI-Os were used as the matrix and the mediator, to immobilize the negatively charged LOD and to enhance electron transfer, respectively. CNTs were introduced as the essential component for the network nanostructure of the composite. FESEM (field-emission scanning electron microscopy) and electrochemical characterization demonstrated that the CNTs behaved as a cross-linker networking PVI and chitosan, owing to their nanoscale size and negative charge. This significantly improved the conductivity, stability and electroactivity for detection of lactate. The standard deviation of the sensor was greatly reduced from 19.6% to 4.9% by the addition of CNTs to the composite. Under optimized conditions, the sensitivity and detection limit of the lactate sensor were 19.7 µA mM(-1) cm(-2) and 5 µM, respectively. The sensitivity is remarkably improved in comparison with recently reported values of 0.15-3.85 µA mM(-1) cm(-2). This novel nanoengineering approach of selecting matched components to form a network nanostructure could be extended to other enzyme biosensors, and has broad potential applications in diagnostics, life science and food analysis.
CA-LOD: Collision Avoidance Level of Detail for Scalable, Controllable Crowds
Paris, Sébastien; Gerdelan, Anton; O'Sullivan, Carol
The new wave of computer-driven entertainment technology throws audiences and game players into massive virtual worlds where entire cities are rendered in real time. Computer animated characters run through inner-city streets teeming with pedestrians, all fully rendered with 3D graphics, animations, particle effects and linked to 3D sound effects to produce more realistic and immersive computer-hosted entertainment experiences than ever before. Computing all of this detail at once is enormously computationally expensive, and game designers, as a rule, have sacrificed behavioural realism in favour of better graphics. In this paper we propose a new Collision Avoidance Level of Detail (CA-LOD) algorithm that allows games to support huge crowds in real time with the appearance of more intelligent behaviour. We propose two collision avoidance models used for two different CA-LODs: a fuzzy steering model focused on performance, and a geometric steering model that obtains the best realism. Mixing these approaches yields thousands of autonomous characters in real time, in a scalable but still controllable crowd.
Singularity Processing Method of Microstrip Line Edge Based on LOD-FDTD
Lei Li
2014-01-01
In order to improve accuracy and efficiency when analyzing microstrip structures, a singularity processing method is proposed, theoretically and experimentally, based on the fundamental locally one-dimensional finite-difference time-domain (LOD-FDTD) method with second-order temporal accuracy (denoted FLOD2-FDTD). The proposed method greatly improves the performance of FLOD2-FDTD even when the conductor is embedded into more than half of the cell by the coordinate transformation. The experimental results show that the proposed method achieves higher accuracy for time-step sizes up to five times the limit allowed by the Courant-Friedrichs-Lewy (CFL) condition. In comparison with previously reported methods, the proposed method for calculating the electromagnetic field near a microstrip line edge not only improves efficiency but also provides higher accuracy.
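For orientation, the CFL limit referenced above is, for the conventional 3D FDTD method, determined by the cell sizes; a small sketch (cell sizes hypothetical):

```python
from math import sqrt

C0 = 299_792_458.0   # speed of light in vacuum, m/s

def cfl_dt(dx, dy, dz, c=C0):
    """Maximum stable time step of conventional 3D FDTD
    (Courant-Friedrichs-Lewy limit)."""
    return 1.0 / (c * sqrt(1 / dx**2 + 1 / dy**2 + 1 / dz**2))

dt_cfl = cfl_dt(50e-6, 50e-6, 10e-6)   # hypothetical microstrip cell sizes
dt_lod = 5 * dt_cfl                    # the 5x CFL step tested above
print(f"dt_CFL = {dt_cfl:.2e} s, dt_LOD = {dt_lod:.2e} s")
```

The point of LOD-FDTD schemes is that, being implicit, they remain stable at such enlarged time steps, which conventional FDTD cannot use.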
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversion...
Yuan-Hong Jiang
OBJECTIVES: The aim of this study was to investigate the predictive values of the total International Prostate Symptom Score (IPSS-T) and the voiding-to-storage subscore ratio (IPSS-V/S), in association with total prostate volume (TPV) and maximum urinary flow rate (Qmax), in the diagnosis of bladder outlet-related lower urinary tract dysfunction (LUTD) in men with lower urinary tract symptoms (LUTS). METHODS: A total of 298 men with LUTS were enrolled. Video-urodynamic studies were used to determine the causes of LUTS. Differences in IPSS-T, IPSS-V/S ratio, TPV and Qmax between patients with bladder outlet-related LUTD and bladder-related LUTD were analyzed. The positive and negative predictive values (PPV and NPV) for bladder outlet-related LUTD were calculated using these parameters. RESULTS: Of the 298 men, bladder outlet-related LUTD was diagnosed in 167 (56%). The IPSS-V/S ratio was significantly higher among patients with bladder outlet-related LUTD than among patients with bladder-related LUTD (2.28±2.25 vs. 0.90±0.88, p<0.001). When IPSS-V/S >1 or >2 was factored into the equation instead of IPSS-T, PPVs were 91.4% and 97.3%, respectively, and NPVs were 54.8% and 49.8%, respectively. CONCLUSIONS: Combining IPSS-T with TPV and Qmax increases the PPV for bladder outlet-related LUTD. Furthermore, including IPSS-V/S >1 or >2 in the equation results in a higher PPV than IPSS-T. IPSS-V/S >1 is a stronger predictor of bladder outlet-related LUTD than IPSS-T.
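For readers unfamiliar with the metrics, PPV and NPV come straight from the 2x2 table of the criterion against the urodynamic diagnosis; a sketch with invented counts:

```python
def ppv_npv(tp, fp, tn, fn):
    """Positive/negative predictive values of a diagnostic criterion."""
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical counts for a criterion such as IPSS-V/S > 1:
ppv, npv = ppv_npv(tp=90, fp=10, tn=55, fn=45)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")   # PPV = 90.0%, NPV = 55.0%
```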
Bisheng Yang
2016-12-01
Reconstructing building models at different levels of detail (LoDs) from airborne laser scanning point clouds is urgently needed for wide application, as this approach balances the user's requirements against economic costs. Previous methods reconstruct building LoDs from the finest 3D building models rather than from point clouds, resulting in heavy costs and inflexible adaptivity. The scale space is a sound theory for multi-scale representation of an object from a coarser level to a finer level. Therefore, this paper proposes a novel method to reconstruct buildings at different LoDs from airborne Light Detection and Ranging (LiDAR) point clouds based on an improved morphological scale space. The proposed method first extracts building candidate regions following the separation of ground and non-ground points. For each building candidate region, it generates a scale space by iteratively applying the improved morphological reconstruction with increasing scale, and constructs the corresponding topological relationship graphs (TRGs) across scales. Secondly, the method robustly extracts building points by using features based on the TRG. Finally, it reconstructs each building at different LoDs according to the TRG. The experiments demonstrate that the proposed method robustly extracts buildings with details (e.g., door eaves and roof furniture) and performs well in distinguishing buildings from vegetation or other objects, while automatically reconstructing building LoDs from the finest building points.
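A hedged sketch of the scale-space ingredient, using opening-by-reconstruction on a rasterised height grid (the paper works on point clouds and adds topological-relationship graphs on top; radii and the toy grid here are invented):

```python
import numpy as np
from skimage.morphology import disk, erosion, reconstruction

def morphological_scale_space(dsm, radii=(1, 2, 4, 8)):
    """Opening-by-reconstruction at increasing structuring-element radii:
    narrow protrusions (e.g. roof furniture) vanish at coarse scales,
    while the building body persists."""
    levels = []
    for r in radii:
        seed = erosion(dsm, disk(r))       # remove features narrower than r
        # grow the seed back underneath the original surface
        levels.append(reconstruction(seed, dsm, method="dilation"))
    return np.stack(levels)

dsm = np.zeros((64, 64))
dsm[20:40, 20:40] = 10.0                   # 20 x 20 building, 10 m high
dsm[28:31, 28:31] = 12.0                   # 3 x 3 'chimney' on the roof
print([float(l.max()) for l in morphological_scale_space(dsm)])
# [12.0, 10.0, 10.0, 10.0] -- the chimney survives only the finest scale
```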
Aggregation of LoD 1 building models as an optimization problem
Guercke, R.; Götzelmann, T.; Brenner, C.; Sester, M.
3D city models offered by digital map providers typically consist of several thousands or even millions of individual buildings. Those buildings are usually generated in an automated fashion from high-resolution cadastral and remote sensing data and can be very detailed. However, not every application calls for such a high degree of detail. One way to remove complexity is to aggregate individual buildings, simplify the ground plan and assign an appropriate average building height. This task is computationally complex because it includes the combinatorial optimization problem of determining which subset of the original set of buildings should best be aggregated to meet the demands of an application. In this article, we introduce approaches to express different aspects of the aggregation of LoD 1 building models in the form of Mixed Integer Programming (MIP) problems. The advantage of this approach is that for linear (and some quadratic) MIP problems, sophisticated software exists to find exact solutions (global optima) with reasonable effort. We also propose two different heuristic approaches based on the region-growing strategy and evaluate their potential for optimization by comparing their performance to a MIP-based approach.
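As a flavour of what such a formulation can look like, here is one hedged MIP phrasing (a sketch, not the authors' exact model): candidate aggregates j are enumerated beforehand, e.g. by region growing, with c_j scoring the ground-plan complexity of aggregate j, h_i the height of building i, H_j the representative height of aggregate j, and lambda trading outline simplicity against height distortion:

```latex
\begin{align}
\min_{x,\,y}\quad & \sum_j c_j\, y_j + \lambda \sum_{i,j} \lvert h_i - H_j \rvert\, x_{ij} \\
\text{s.t.}\quad  & \sum_j x_{ij} = 1 \quad \forall i
                    && \text{each building joins exactly one aggregate} \\
                  & x_{ij} \le y_j \quad \forall i, j
                    && \text{only selected aggregates may be used} \\
                  & x_{ij},\, y_j \in \{0, 1\}
\end{align}
```

Because x and y are binary and the objective is linear in them, off-the-shelf MIP solvers can return global optima for moderate instance sizes, which is exactly the advantage the authors cite.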
3D Building Modeling in LoD2 Using the CityGML Standard
Preka, D.; Doulamis, A.
2016-10-01
Over the last decade, scientific research has increasingly focused on the third dimension in all fields, especially in sciences related to geographic information, the visualization of natural phenomena and the visualization of the complex urban reality. The field of 3D visualization has achieved rapid development and dynamic progress, especially in urban applications, while the technical restrictions on the use of 3D information tend to subside due to advancements in technology. A variety of 3D modeling techniques and standards has already been developed and is gaining traction in a wide range of applications. One such modern standard is CityGML, which is open and allows for sharing and exchanging of 3D city models. Within the scope of this study, key issues for the 3D modeling of spatial objects and cities are considered, specifically the key elements and capabilities of the CityGML standard, which is used to produce a 3D model of 14 buildings that constitute a block in the municipality of Kaisariani, Athens, at Level of Detail 2 (LoD2), together with the corresponding relational database. The proposed tool is based upon the 3DCityDB package in tandem with a geospatial database (PostgreSQL with the PostGIS 2.0 extension). The latter allows for execution of complex queries regarding the spatial distribution of data. The system is implemented in order to facilitate a real-life scenario in a suburb of Athens.
Visualizing whole-brain DTI tractography with GPU-based Tuboids and LoD management.
Petrovic, Vid; Fallon, James; Kuester, Falko
2007-01-01
Diffusion Tensor Imaging (DTI) of the human brain, coupled with tractography techniques, enables the extraction of large collections of three-dimensional tract pathways per subject. These pathways and pathway bundles represent the connectivity between different brain regions and are critical for the understanding of brain related diseases. A flexible and efficient GPU-based rendering technique for DTI tractography data is presented that addresses common performance bottlenecks and image-quality issues, allowing interactive render rates to be achieved on commodity hardware. An occlusion query-based pathway LoD management system for streamlines/streamtubes/tuboids is introduced that optimizes input geometry, vertex processing, and fragment processing loads, and helps reduce overdraw. The tuboid, a fully-shaded streamtube impostor constructed entirely on the GPU from streamline vertices, is also introduced. Unlike full streamtubes and other impostor constructs, tuboids require little to no preprocessing or extra space over the original streamline data. The supported fragment processing levels of detail range from texture-based draft shading to full raycast normal computation, Phong shading, environment mapping, and curvature-correct text labeling. The presented text labeling technique for tuboids provides adaptive, aesthetically pleasing labels that appear attached to the surface of the tubes. Furthermore, an occlusion query aggregating and scheduling scheme for tuboids is described that reduces the query overhead. Results for a tractography dataset are presented, and demonstrate that LoD-managed tuboids offer benefits over traditional streamtubes both in performance and appearance.
Generation of Multi-Lod 3d City Models in Citygml with the Procedural Modelling Engine RANDOM3DCITY
Biljecki, F.; Ledoux, H.; Stoter, J.
2016-09-01
The production and dissemination of semantic 3D city models is rapidly increasing, benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtain 3D city models is to generate them with procedural modelling, which is - as we discuss in this paper - well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence at GitHub at http://github.com/tudelft3d/Random3Dcity.
Jakub Prokop
2011-09-01
Three new palaeopteran insects are described from the Middle Permian (Guadalupian) Salagou Formation in the Lodève Basin (South of France), viz. the diaphanopterodean Alexrasnitsyniidae fam. n., based on Alexrasnitsynia permiana gen. et sp. n., the Parelmoidae Permelmoa magnifica gen. et sp. n., and Lodevohymen lapeyriei gen. et sp. n. (in Megasecoptera or Diaphanopterodea, family undetermined). In addition, the first record of mayflies attributed to the family Syntonopteridae (Ephemeroptera) is reported. These new fossils clearly demonstrate that present knowledge of Permian insects remains very incomplete. They also confirm that the Lodève entomofauna was highly diverse, providing links to other Permian localities, and also rather unique, with several families still not recorded in other contemporaneous outcrops.
Apgar score (MedlinePlus Medical Encyclopedia, medlineplus.gov/ency/article/003402.htm). Virginia Apgar, MD (1909-1974) introduced the Apgar score in 1952.
Apgar Scores (consumer health article; only a fragment survives): an infant who is otherwise vigorous but blue at one minute loses two points for color, for a one-minute Apgar score of 8.
Combination of Length-of-Day (LOD) Values by Frequency Windows
Fernández, L. I.; Arias, E. F.; Gambis, D.
The concept of a combined solution rests on the fact that the different time series derived from the various space geodesy techniques are quite dissimilar. The main, easily detectable differences between the series are: different sampling intervals, time spans and quality. The data cover a recent period of 27 months (July 1996 - October 1998). We used length-of-day (LOD) estimates produced by 10 operational centres of the IERS (International Earth Rotation Service) from the GPS (Global Positioning System) and SLR (Satellite Laser Ranging) techniques. The combined time series obtained in this way was compared with the multi-technique combined EOP (Earth Orientation Parameters) solution derived by the IERS (C04). The noise behaviour in LOD for all techniques proved to be frequency dependent (Vondrak, 1998). For this reason, the data series were divided into frequency windows after removing biases and trends. Different weight factors were then assigned to each window, discriminating by technique. Finally, these partially combined solutions were merged to obtain the final combined solution. We know that the best combined solution will have a precision lower than that of the data series from which it originates. Even so, the importance of a reliable combined EOP series, that is, one of acceptable precision and free of evident systematic errors, lies in the need for a reference EOP database for the study of geophysical phenomena that drive variations in the Earth's rotation.
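A hedged sketch of the combination step (band edges and weights are illustrative, not the paper's values): each detrended series is split into frequency windows, weighted per window and per technique, and the bands are summed:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def combine_lod(series, weights, bands, fs=1.0):
    """series: technique -> equally sampled, detrended LOD array;
    weights: technique -> one weight per band; bands given in cycles
    per sample unit (here days, so fs = 1 sample/day)."""
    combined = 0.0
    for j, (low, high) in enumerate(bands):
        b, a = butter(2, [low, high], btype="band", fs=fs)
        num = sum(weights[k][j] * filtfilt(b, a, x) for k, x in series.items())
        den = sum(weights[k][j] for k in series)
        combined = combined + num / den
    return combined
```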
Using Parameters of Dynamic Pulse Function for 3d Modeling in LOD3 Based on Random Textures
Alizadehashrafi, B.
2015-12-01
The pulse function (PF) is a technique based on a procedural preprocessing system that generates a computerized virtual photo of the façade within a fixed-size square (Alizadehashrafi et al., 2009; Musliman et al., 2010). The Dynamic Pulse Function (DPF) is an enhanced version of PF which creates the final photo proportional to the real geometry. This avoids distortion when the computerized photo is projected onto the generated 3D model (Alizadehashrafi and Rahman, 2013). The challenging issue addressed in this paper is obtaining the 3D model in LoD3 rather than LoD2. In the DPF-based technique, the geometries of the windows and doors are saved in an XML schema that has no connection with the 3D model in LoD2 and CityGML format. In this research, the parameters of the Dynamic Pulse Function are utilized via the Ruby programming language in Trimble SketchUp to generate the windows and doors (exact position and depth) automatically in LoD3, based on the same DPF concept. The advantage of this technique is the automatic generation of a huge number of similar geometries, e.g. windows, by utilizing the DPF parameters along with defined entities and window layers. When converting the SKP file to CityGML via FME software or CityGML plugins, the 3D model contains the semantic database about the entities and window layers, which can connect the CityGML model to MySQL (Alizadehashrafi and Baig, 2014). The concept behind DPF is to use logical operations to project the texture onto the background image, dynamically proportional to the real geometry. The projection is based on two dynamic pulses, vertical and horizontal, starting from the upper-left corner of the background wall and running down and right, respectively, in the image coordinate system. A logical one/zero at the intersections of the two pulses projects/does not project the texture onto the background image. It is possible to define...
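A minimal sketch of the logical projection idea described above (pixel runs invented; the real system derives them from the DPF parameters stored in the XML schema):

```python
import numpy as np

def window_mask(width, height, h_pulses, v_pulses):
    """Projection mask from one horizontal and one vertical pulse train:
    texture is projected only where both pulses are 1. Each pulse train
    is a list of (start, length) runs in pixels from the upper-left corner."""
    h = np.zeros(width, dtype=bool)
    v = np.zeros(height, dtype=bool)
    for start, length in h_pulses:
        h[start:start + length] = True
    for start, length in v_pulses:
        v[start:start + length] = True
    return np.outer(v, h)          # logical AND of the two pulse trains

# Hypothetical 100 x 60 px facade with a 2 x 2 grid of windows.
mask = window_mask(100, 60, [(15, 20), (65, 20)], [(10, 15), (35, 15)])
print(mask.sum())   # 4 windows x 20 x 15 px = 1200 projected pixels
```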
Fabián, Z. (Zdeněk)
2010-01-01
In this paper, we study a distribution-dependent correlation coefficient based on the concept of scalar score. This new measure of association of continuous random variables is compared by means of simulation experiments with the Pearson, Kendall and Spearman correlation coefficients.
Fai, S.; Rafeiro, J.
2014-05-01
In 2011, Public Works and Government Services Canada (PWGSC) embarked on a comprehensive rehabilitation of the historically significant West Block of Canada's Parliament Hill. With over 17 thousand square meters of floor space, the West Block is one of the largest projects of its kind in the world. As part of the rehabilitation, PWGSC is working with the Carleton Immersive Media Studio (CIMS) to develop a building information model (BIM) that can serve as a maintenance and life-cycle management tool once construction is completed. The scale and complexity of the model have presented many challenges. One of these challenges is determining appropriate levels of detail (LoD). While still a matter of debate in the development of international BIM standards, LoD is further complicated in the context of heritage buildings because we must reconcile the LoD of the BIM with that used in the documentation process (terrestrial laser scan and photogrammetric survey data). In this paper, we will discuss our work to date on establishing appropriate LoD within the West Block BIM that will best serve the end use. To facilitate this, we have developed a single parametric model for gothic pointed arches that can be used for over seventy-five unique window types present in the West Block. Using the AEC (CAN) BIM as a reference, we have developed a workflow to test each of these window types at three distinct levels of detail. We have found that the parametric Gothic arch significantly reduces the amount of time necessary to develop scenarios to test appropriate LoD.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis, contrary to ordinary non-spatial factor analysis, gives an objective discrimination...
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. First, we derive minimum residual error rates when the stored data comes from a uniform binary source. Second, we determine the minimum amount...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over...
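For the Mean Energy Model mentioned above, the maximum-entropy solution is the Gibbs distribution; a small numerical sketch (energies and target invented):

```python
import numpy as np
from scipy.optimize import brentq

def maxent_mean_energy(energies, target_mean):
    """Maximum-entropy distribution under a mean-energy constraint:
    p_i proportional to exp(-beta * E_i), beta chosen so the expected
    energy matches the target."""
    E = np.asarray(energies, dtype=float)
    def mean_energy(beta):
        w = np.exp(-beta * (E - E.min()))   # shifted for numerical stability
        return (w * E).sum() / w.sum()
    beta = brentq(lambda b: mean_energy(b) - target_mean, -50, 50)
    w = np.exp(-beta * (E - E.min()))
    return w / w.sum()

p = maxent_mean_energy([0, 1, 2, 3], target_mean=1.0)
print(p.round(4), float((p * np.arange(4)).sum()))   # constraint holds
```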
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because the loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function that considers the parameter regularization and the MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines trained with traditional loss functions.
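As a rough illustration of the correntropy idea, here is a minimal Python sketch (our own construction with toy data and assumed hyperparameters sigma, gamma and learning rate; not the paper's alternating algorithm): a linear predictor is fitted by gradient ascent on the Gaussian-kernel correntropy between predictions and labels, minus an L2 penalty, so that grossly mislabeled samples receive exponentially small weight.

    import numpy as np

    # Correntropy of residuals r_i = y_i - w.x_i under a Gaussian kernel:
    #   V(w) = (1/n) * sum_i exp(-r_i^2 / (2*sigma^2))
    # We ascend V(w) - gamma*||w||^2; the kernel weights k_i downweight outliers.
    def fit_mcc(X, y, sigma=1.0, gamma=0.01, lr=0.1, iters=500):
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(iters):
            r = y - X @ w                              # residuals
            k = np.exp(-r**2 / (2 * sigma**2))         # per-sample kernel weights
            grad = (X.T @ (k * r)) / (n * sigma**2) - 2 * gamma * w
            w += lr * grad
        return w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1])
    y[:10] = -y[:10]                                   # inject label noise
    w = fit_mcc(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w) == y))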
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that its performance is better than that of the nonlinear equalizer alone but worse than that of the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
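A minimal numerical sketch of the idea, assuming a six-state toy system with a mean-value ("energy") constraint; the Gaussian width and all numbers are illustrative, not taken from the paper:

    import numpy as np
    from scipy.optimize import brentq

    # MaxEnt over states x = 0..5 with E[x] = mu has the Gibbs form
    # p_i proportional to exp(-b*x_i), with b chosen to match the mean.
    x = np.arange(6.0)

    def maxent(mu):
        mean_gap = lambda b: np.average(x, weights=np.exp(-b * x)) - mu
        b = brentq(mean_gap, -10.0, 10.0)
        p = np.exp(-b * x)
        return p / p.sum()

    # Classic MaxEnt: the empirical constraint value is treated as exact.
    p_classic = maxent(2.0)

    # Generalized MaxEnt: mu is uncertain, mu ~ N(2.0, 0.2^2); pushing that
    # uncertainty through the MaxEnt map yields a density over probabilities.
    rng = np.random.default_rng(1)
    samples = np.array([maxent(m) for m in rng.normal(2.0, 0.2, 1000)])
    print("classic MaxEnt p :", np.round(p_classic, 3))
    print("posterior mean p :", np.round(samples.mean(axis=0), 3))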
Optimized Design for Dynamic LOD Virtual Terrain Using Quad Tree
邹承明; 李引; 陆苑; 陈金锐
2009-01-01
In this paper, we present a novel optimized design for dynamic LOD virtual terrain based on a quad tree. When building the quad tree on a multi-resolution terrain model, we first optimize the mesh according to the three quad-tree criteria: bounding-volume culling, back-face culling and screen-projection error analysis. We then define a suitable node evaluation function so that, as the viewpoint moves while roaming the LOD terrain, the appropriate level of detail is displayed. To remove unreasonable quad-tree partitions, we propose a crack-elimination algorithm based on the rules of node splitting and rendering during LOD simplification. After the nodes are partitioned optimally, the quad-tree mesh is fully optimized.
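For illustration, a minimal Python sketch of a quad-tree node evaluation function of the kind described (our own simplified form with assumed pixel-tolerance and scale constants; the paper's function and its three culling criteria are more elaborate): a node is split when its geometric error, projected to screen space, exceeds a tolerance that shrinks with viewing distance.

    import math
    from dataclasses import dataclass

    @dataclass
    class Node:
        cx: float; cy: float   # patch centre in world units
        error: float           # geometric error of the simplified patch

    def should_split(node, eye, scale_px=1000.0, tau_px=2.0):
        dist = math.dist((node.cx, node.cy), eye) + 1e-9
        screen_err = node.error * scale_px / dist   # projected error in pixels
        return screen_err > tau_px

    root = Node(cx=512.0, cy=512.0, error=8.0)
    print(should_split(root, eye=(0.0, 0.0)))       # near viewpoint -> split
    print(should_split(root, eye=(50000.0, 0.0)))   # far away -> keep coarse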
Meijer, Rob R.
2003-01-01
This book discusses how to obtain test scores and, in particular, how to obtain test scores from tests that consist of a combination of multiple choice and open-ended questions. The strength of the book is that scoring solutions are presented for a diversity of real world scoring problems. (SLD)
Lin, Miao-Hsiang; Hsiung, Chao A.
1994-01-01
Two simple empirical approximate Bayes estimators are introduced for estimating domain scores under binomial and hypergeometric distributions respectively. Criteria are established regarding use of these functions over maximum likelihood estimation counterparts. (SLD)
Heteroscedastic one-factor models and marginal maximum likelihood estimation
Hessen, D.J.; Dolan, C.V.
2009-01-01
In the present paper, a general class of heteroscedastic one-factor models is considered. In these models, the residual variances of the observed scores are explicitly modelled as parametric functions of the one-dimensional factor score. A marginal maximum likelihood procedure for parameter estimation...
Subgroup Balancing Propensity Score
Dong, Jing; Zhang, Junni L; Li, Fan
2017-01-01
We investigate the estimation of subgroup treatment effects with observational data. Existing propensity score matching and weighting methods are mostly developed for estimating overall treatment effect. Although the true propensity score should balance covariates for the subgroup populations, the estimated propensity score may not balance covariates for the subgroup samples. We propose the subgroup balancing propensity score (SBPS) method, which selects, for each subgroup, to use either the ...
Adaptive crack-free terrain rendering based on LOD
郭虎奇; 费向东; 刘小玲
2013-01-01
A new kind of triangle cluster is proposed as the GPU rendering primitive; combined with LOD technology, it realizes adaptive crack-free terrain rendering. The triangle cluster, called an N-cluster, has eight base types, and terrain mesh blocks of different sizes and locations can be obtained from these base types by scaling and translation. A binary tree is used to organize the N-clusters; each node stores an N-cluster's type, scale and translation. An octagon error metric is used to construct the LOD of the scene, which avoids T-junctions between different LOD levels. Because the elevation and texture data of large-scale terrain are too large to be loaded into memory at once, a quad tree organizes them in blocks, and data blocks are loaded dynamically at run time. Experimental results show that N-clusters improve the rendering efficiency of the terrain triangle mesh, and that the overall algorithm adaptively renders terrain without cracks while meeting the real-time rendering requirements of large-scale terrain scenes.
2015-10-01
The Apgar score provides an accepted and convenient method for reporting the status of the newborn infant immediately after birth and the response to resuscitation if needed. The Apgar score alone cannot be considered as evidence of, or a consequence of, asphyxia; does not predict individual neonatal mortality or neurologic outcome; and should not be used for that purpose. An Apgar score assigned during resuscitation is not equivalent to a score assigned to a spontaneously breathing infant. The American Academy of Pediatrics and the American College of Obstetricians and Gynecologists encourage use of an expanded Apgar score reporting form that accounts for concurrent resuscitative interventions.
Slack, Charles W.
Reinforcement and role-reversal techniques are used in the SCORE project, a low-cost program of delinquency prevention for hard-core teenage street corner boys. Committed to the belief that the boys have the potential for ethical behavior, the SCORE worker follows B.F. Skinner's theory of operant conditioning and reinforces the delinquent's good…
Application of LOD Technology in Groundwater Finite Element Post-processing
毕振波; 郑爱勤; 崔振东
2011-01-01
The large amount of data in the groundwater finite-element post-processing stage makes model reproduction, rapid network transmission and real-time visualization of calculated results difficult. This paper therefore analyzes the main problems in groundwater finite-element post-processing together with Level of Detail (LOD) technology, and argues that vertex-element deletion is an effective data-model simplification method suited to this setting. The main data structures for vertex deletion and restoration are designed, and the DT method is used for local triangulation of the resulting "hole" areas. A worked example demonstrates the effectiveness of the vertex-element deletion algorithm applied to finite-element post-processing in groundwater.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Rudolf, Frauke; Joaquim, Luis Carlos; Vieira, Cesaltina
2013-01-01
Background: This study was carried out in Guinea-Bissau's capital Bissau among inpatients and outpatients attending for tuberculosis (TB) treatment within the study area of the Bandim Health Project, a Health and Demographic Surveillance Site. Our aim was to assess the variability between 2 physicians in performing the Bandim tuberculosis score (TBscore), a clinical severity score for pulmonary TB (PTB), and to compare it to the Karnofsky performance score (KPS). Method: From December 2008 to July 2009 we assessed the TBscore and the KPS of 100 PTB patients at inclusion in the TB cohort and...
Reporting Valid and Reliable Overall Scores and Domain Scores
Yao, Lihua
2010-01-01
In educational assessment, overall scores obtained by simply averaging a number of domain scores are sometimes reported. However, simply averaging the domain scores ignores the fact that different domains have different score points, that scores from those domains are related, and that at different score points the relationship between overall…
Calhoun, William; Dargahi-Noubary, G. R.; Shi, Yixun
2002-01-01
The widespread interest in sports in our culture provides an excellent opportunity to catch students' attention in mathematics and statistics classes. One mathematically interesting aspect of volleyball, which can be used to motivate students, is the scoring system. (MM)
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has been used not only as a physics law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Shinn, Maxwell
2013-01-01
Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. Instant MuseScore is written in an easy-to-follow format, packed with illustrations that will help you get started with this music composition software. This book is for musicians who would like to learn how to notate music digitally with MuseScore. Readers should already have some knowledge of musical terminology; however, no prior experience with music notation software is necessary.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find the series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find the series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
van de Gronde, Jasper J.; Azzopardi, George; Petkov, Nicolai
2015-01-01
Orientation scores are representations of images built using filters that only select on orientation (and not on the magnitude of the frequency). Importantly, they allow (easy) reconstruction, making them ideal for use in a filtering pipeline. Traditionally a specific set of orientations has to be chosen...
We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.
Miranda, DR; Nap, R; de Rijk, A; Schaufeli, W; Lapichino, G
Objectives. The instruments used for measuring nursing workload in the intensive care unit (e.g., Therapeutic Intervention Scoring System-28) are based on therapeutic interventions related to severity of illness. Many nursing activities are not necessarily related to severity of illness, and
External validation of the discharge of hip fracture patients score
Vochteloo, Anne J. H.; Flikweert, Elvira R.; Tuinebreijer, Wim E.; Maier, Andrea B.; Bloem, Rolf M.; Pilot, Peter; Nelissen, Rob G. H. H.
This paper reports the external validation of a recently developed instrument, the Discharge of Hip fracture Patients score (DHP) that predicts discharge location on admission in patients living in their own home prior to hip fracture surgery. The DHP (maximum score 100 points) was applied to 125
Automated Essay Scoring
Semire DIKLI
2006-01-01
The impacts of computers on writing have been widely studied for three decades. Even basic computer functions, i.e. word processing, have been of great assistance to writers in modifying their essays. The research on Automated Essay Scoring (AES) has revealed that computers have the capacity to function as a more effective cognitive tool (Attali, 2004). AES is defined as the computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). Revision and feedback are essential aspects of the writing process. Students need to receive feedback in order to increase their writing quality. However, responding to student papers can be a burden for teachers. Particularly if they have a large number of students and assign frequent writing assignments, providing individual feedback to student essays can be quite time consuming. AES systems can be very useful because they can provide the student with a score as well as feedback within seconds (Page, 2003). Four types of AES systems are widely used by testing companies, universities, and public schools: Project Essay Grader (PEG), Intelligent Essay Assessor (IEA), E-rater, and IntelliMetric. AES is a developing technology. Many AES systems are used to overcome time, cost, and generalizability issues in writing assessment. The accuracy and reliability of these systems have been proven to be high. The search for excellence in machine scoring of essays is continuing and numerous studies are being conducted to improve the effectiveness of AES systems.
Fetal Biophysical Profile Scoring
H.R. HaghighatKhah
2009-01-01
Fetal biophysical profile scoring is a sonographic-based method of fetal assessment first described by Manning and Platt in 1980. The biophysical profile score was developed as a method to integrate real-time observations of the fetus and his/her intrauterine environment in order to more comprehensively assess the fetal condition. These findings must be evaluated in the context of maternal/fetal history (i.e., chronic hypertension, post-dates, intrauterine growth restriction, etc.), fetal structural integrity (presence or absence of congenital anomalies), and the functionality of fetal support structures (placenta and umbilical cord). For example, acute asphyxia due to placental abruption may result in an absence of the acute variables of the biophysical profile score (fetal breathing movements, fetal movement, fetal tone, and fetal heart rate reactivity) with a normal amniotic fluid volume. With post-maturity the asphyxial event may be intermittent and chronic, resulting in a decrease in amniotic fluid volume but with the acute variables remaining normal. While the 5 components of the biophysical profile score have remained unchanged since 1980 (Manning, 1980), the definitions of a normal and abnormal parameter have evolved with increasing experience. In 1984 the definition of oligohydramnios was increased from a <1 cm pocket of fluid to a <2.0 x 1.0 cm pocket. Oligohydramnios is now defined as a pocket of amniotic fluid <2.0 x 2.0 cm (Manning, 1995a). If the four ultrasound variables are normal, the accuracy of the biophysical profile score was not found to be significantly improved by adding the non-stress test. As a result, in 1987 the profile score was modified to incorporate the non-stress test only when one of the ultrasound variables was abnormal (Manning, 1987). Table 1 outlines the current definitions for quantifying a variable as present or absent. Each of the 5 components of the biophysical profile score does not have equal
刘建; 王琪洁; 张昊
2013-01-01
Aiming to resolve the edge effect in predicting length of day (LOD) with the least squares plus autoregressive (LS+AR) model, we employed a time series analysis model to extrapolate the LOD series and produce a new, extended series. We then used this new series to estimate the coefficients of the LS model, and finally applied the LS+AR model to predict the original LOD series. By comparing the prediction accuracy of the edge-effect-corrected LS+AR model with that of the plain LS+AR model, we conclude that the correction improves prediction accuracy, especially for medium-term and long-term predictions.
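A minimal Python sketch of the LS+AR scheme with the edge-effect correction described above (synthetic data; the trend, periods, AR order and extension length are all assumed for illustration): the series is first extended by a short AR extrapolation, and the least-squares model is then fitted on the extended series so its coefficients are less distorted at the boundary.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(2000.0)
    lod = (0.001 * t + np.sin(2 * np.pi * t / 365.24)
           + 0.5 * np.sin(2 * np.pi * t / 182.62) + rng.normal(0, 0.1, t.size))

    def ls_fit(t, y):
        # least squares: linear trend + annual + semi-annual harmonics
        A = np.column_stack([np.ones_like(t), t,
                             np.sin(2*np.pi*t/365.24), np.cos(2*np.pi*t/365.24),
                             np.sin(2*np.pi*t/182.62), np.cos(2*np.pi*t/182.62)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return A @ coef, coef

    def ar_extend(y, order=10, steps=100):
        # fit AR(order) by least squares and extrapolate `steps` points
        A = np.column_stack([y[i:len(y) - order + i] for i in range(order)])
        phi, *_ = np.linalg.lstsq(A, y[order:], rcond=None)
        ext = list(y)
        for _ in range(steps):
            ext.append(np.dot(phi, ext[-order:]))
        return np.array(ext)

    extended = ar_extend(lod)                     # edge-effect correction
    _, coef = ls_fit(np.arange(extended.size, dtype=float), extended)
    print("recovered trend slope:", coef[1])      # close to the true 0.001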
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for 3-regular graphs the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we describe the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given compact metric set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine whether there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
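The flavour of the extreme-value argument can be seen in a few lines (our own illustration under an assumed Gutenberg-Richter law with toy b-value, threshold and rate; not the authors' derivation), showing how weakly a finite test window constrains a hard upper bound:

    import numpy as np

    # With events above m0 at rate lam per year and P(M > m) = 10**(-b*(m-m0)),
    # the maximum magnitude in a future window of T years satisfies
    #   P(max <= m) = exp(-lam * T * 10**(-b*(m - m0))).
    b, m0, lam = 1.0, 5.0, 10.0   # assumed b-value, threshold, events/year

    def p_max_exceeds(m, T):
        return 1.0 - np.exp(-lam * T * 10.0**(-b * (m - m0)))

    for T in (5, 30, 100):
        print(f"T = {T:>3} yr: P(max > 8.0) = {p_max_exceeds(8.0, T):.3f}")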
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, whereas MVMED optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps: the first step solves the optimization problem without considering the equal margin posteriors from the two views, and the second step then enforces the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
A genome-wide linkage study of individuals with high scores on NEO personality traits.
Amin, N; Schuur, M; Gusareva, E S; Isaacs, A; Aulchenko, Y S; Kirichenko, A V; Zorkoltseva, I V; Axenovich, T I; Oostra, B A; Janssens, A C J W; van Duijn, C M
2012-10-01
The NEO-Five-Factor Inventory divides human personality traits into five dimensions: neuroticism, extraversion, openness, conscientiousness and agreeableness. In this study, we sought to identify regions harboring genes with large effects on the five NEO personality traits by performing genome-wide linkage analysis of individuals scoring in the extremes of these traits (>90th percentile). Affected-only linkage analysis was performed using an Illumina 6K linkage array in a family-based study, the Erasmus Rucphen Family study. We subsequently determined whether distinct, segregating haplotypes found with linkage analysis were associated with the trait of interest in the population. Finally, a dense single-nucleotide polymorphism genotyping array (Illumina 318K) was used to search for copy number variations (CNVs) in the associated regions. In the families with extreme phenotype scores, we found significant evidence of linkage for conscientiousness to 20p13 (rs1434789, log of odds (LOD)=5.86) and suggestive evidence of linkage (LOD >2.8) for neuroticism to 19q, 21q and 22q, extraversion to 1p, 1q, 9p and 12q, openness to 12q and 19q, and agreeableness to 2p, 6q, 17q and 21q. Further analysis determined haplotypes in 21q22 for neuroticism (P-values = 0.009, 0.007), in 17q24 for agreeableness (marginal P-value = 0.018) and in 20p13 for conscientiousness (marginal P-values = 0.058, 0.038) segregating in families with large contributions to the LOD scores. No evidence for CNVs in any of the associated regions was found. Our findings imply that there may be genes with relatively large effects involved in personality traits, which may be identified with next-generation sequencing techniques.
Credit scoring for individuals
Maria DIMITRIU
2010-12-01
Lending money to different borrowers is profitable, but risky. The profits come from the interest rate and the fees earned on the loans. Banks do not want to make loans to borrowers who cannot repay them. Even if a bank does not intend to make bad loans, over time some of them become bad. For instance, as a result of the recent financial crisis, the capability of many borrowers to repay their loans was affected, with many of them going into default. That is why it is important for banks to monitor their loans. The purpose of this paper is to focus on the main issues of credit scoring. To that end, we present the scoring model of an important Romanian bank. Based on this credit scoring model, and taking into account the latest lending requirements of the National Bank of Romania, we developed an assessment tool in Excel for retail loans, which is presented in the case study.
Earthquake forecast enrichment scores
Christine Smyth
2012-03-01
The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project aimed at testing earthquake forecast models in a fair environment. Various metrics are currently used to evaluate the submitted forecasts. However, the CSEP still lacks easily understandable metrics with which to rank the universal performance of the forecast models. In this research, we modify a well-known and respected metric from another statistical field, bioinformatics, to make it suitable for evaluating earthquake forecasts such as those submitted to the CSEP initiative. The metric, originally called a gene-set enrichment score, is based on a Kolmogorov-Smirnov statistic. Our modified metric assesses whether, over a certain time period, the forecast values at locations where earthquakes have occurred are significantly increased compared to the values at all locations where earthquakes did not occur. Permutation testing allows a significance value to be placed upon the score. Unlike the metrics currently employed by the CSEP, the score neither assumes a distribution of earthquake occurrence nor requires an arbitrary reference forecast. In this research, we apply the modified metric to simulated data and real forecast data to show that it is a powerful and robust technique, capable of ranking competing earthquake forecasts.
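A minimal Python sketch of a Kolmogorov-Smirnov-style enrichment score with a permutation p-value (our own simplified form on simulated cells; the paper's exact statistic and weighting may differ):

    import numpy as np

    def enrichment_score(forecast, hits):
        # rank cells by forecast value, best first, and take the maximum gap
        # between the running fractions of hit and non-hit cells (KS-style)
        order = np.argsort(-forecast)
        h = hits[order].astype(float)
        pos = np.cumsum(h) / h.sum()
        neg = np.cumsum(1 - h) / (len(h) - h.sum())
        return np.max(pos - neg)

    def permutation_pvalue(forecast, hits, n=2000, seed=3):
        rng = np.random.default_rng(seed)
        obs = enrichment_score(forecast, hits)
        null = [enrichment_score(forecast, rng.permutation(hits))
                for _ in range(n)]
        return obs, float(np.mean(np.array(null) >= obs))

    rng = np.random.default_rng(4)
    forecast = rng.random(500)                       # forecast value per cell
    hits = (rng.random(500) < 0.05 * (1 + 3 * forecast)).astype(int)
    print(permutation_pvalue(forecast, hits))        # high score, small p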
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0 \le t \le \lfloor (n-1)/2 \rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Cardiovascular risk score in Rheumatoid Arthritis
Wagan, Abrar Ahmed; Mahmud, Tafazzul E Haque; Rasheed, Aflak; Zafar, Zafar Ali; Rehman, Ata ur; Ali, Amjad
2016-01-01
Objective: To determine the 10-year cardiovascular risk score with the QRISK-2 and Framingham risk calculators in rheumatoid arthritis (RA) and non-RA subjects, and to assess the usefulness of the two calculators in both groups. Methods: 106 RA patients and 106 age- and sex-matched non-RA participants were enrolled from the outpatient department. Demographic data and responses regarding the other study parameters were recorded. After 14 hours of fasting, 5 ml of venous blood was drawn for cholesterol and HDL levels; laboratory tests were performed on a COBAS c III (Roche) analyzer. The QRISK-2 and Framingham risk calculators were used to obtain each individual's 10-year CVD risk score. Results: The mean age was 45.1±9.5 years in the RA group and 43.7±8.2 years in the non-RA group, with females the more common gender in both. The mean predicted 10-year score with the QRISK-2 calculator was 14.2±17.1% in the RA group and 13.2±19.0% in the non-RA group (p-value 0.122). With the Framingham risk score it was 12.9±10.4% in the RA group and 8.9±8.7% in the non-RA group (p-value 0.001). In the RA group, 24.5% of cases by QRISK-2 and 31.1% by FRS had predicted scores in the higher risk category. Agreement between the two calculators was substantial in both groups (kappa = 0.618, RA group; kappa = 0.671, non-RA group). Conclusion: The QRISK-2 calculator is more appropriate, as it includes RA, ethnicity, CKD, and atrial fibrillation as factors in the risk assessment score. PMID:27375684
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages, via prices, the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Maximum Potential Score (MPS): An operating model for a successful customer-focused strategy.
2015-01-01
One of marketers' chief objectives is to achieve customer loyalty, which is a key factor for profitable growth. Therefore, they need to develop a strategy that attracts and retains customers, giving them adequate motives, both tangible (prices and promotions) and intangible (personalized service and treatment), to become satisfied and loyal to the company. Finding a way to accurately measure satisfaction and customer loyalty is very important. With regard to typical Relationship ...
The International Bleeding Risk Score
Laursen, Stig Borbjerg; Laine, L.; Dalton, H.
2017-01-01
The International Bleeding Risk Score: A New Risk Score that can Accurately Predict Mortality in Patients with Upper GI-Bleeding.
Parameter estimation in X-ray astronomy using maximum likelihood
Wachter, K.; Leach, R.; Kellogg, E.
1979-01-01
Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives called minimum chi-squared because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
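The advantage over minimum chi-squared is easiest to see in code. A minimal sketch (a generic power-law model with toy counts, not the paper's spectral functions): the fit minimizes the negative Poisson log-likelihood directly, which remains valid even when many channels hold only a few counts.

    import numpy as np
    from scipy.optimize import minimize

    E = np.linspace(1.0, 10.0, 30)                 # channel energies (keV)
    rng = np.random.default_rng(5)
    counts = rng.poisson(100.0 * E**(-1.7))        # simulated Poisson data

    def neg_loglike(theta):
        A, g = theta
        if A <= 0:
            return np.inf                          # keep the model positive
        mu = A * E**(-g)                           # model counts per channel
        return np.sum(mu - counts * np.log(mu))    # Poisson NLL, constant dropped

    res = minimize(neg_loglike, x0=[50.0, 1.0], method="Nelder-Mead")
    print("ML estimates (A, gamma):", res.x)       # near the true (100, 1.7)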
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
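To make the regularizer concrete, a minimal Python sketch of the mutual information between binary classification responses and true labels, estimated from their joint empirical distribution (our own illustration; the paper estimates it via entropy estimation inside a full alternating optimizer, which is not reproduced here):

    import numpy as np

    def mutual_information(pred, y):
        # empirical joint distribution of (response, label), then
        # I(pred; y) = sum_{a,b} p(a,b) * log(p(a,b) / (p(a)*p(b)))
        joint, _, _ = np.histogram2d(pred, y, bins=(2, 2))
        p = joint / joint.sum()
        px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
        nz = p > 0
        return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(9)
    y = rng.integers(0, 2, 1000)
    good = np.where(rng.random(1000) < 0.9, y, 1 - y)  # 90%-accurate responses
    bad = rng.integers(0, 2, 1000)                     # uninformative responses
    print(mutual_information(good, y))                 # clearly positive
    print(mutual_information(bad, y))                  # near zero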
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Fingerprinting of music scores
Irons, Jonathan; Schmucker, Martin
2004-06-01
Publishers of sheet music are generally reluctant to distribute their content via the Internet. Although the advantages of online sheet music distribution are numerous, the potential risk of Intellectual Property Rights (IPR) infringement, e.g. illegal online distribution, stifles any propensity to innovate. While active protection techniques only deter external risk factors, additional technology is necessary to treat further risk factors adequately. For several media types, including music scores, watermarking technology has been developed, which embeds information in data by suitable data modifications. Furthermore, fingerprinting or perceptual hashing methods have been developed and are being applied, especially for audio. These methods allow the identification of content without prior modifications. In this article we motivate the development of watermarking and fingerprinting technologies for sheet music. Starting from the potential limitations of watermarking methods, we explain why fingerprinting methods are important for sheet music and address potential applications. Finally, we introduce a concept for fingerprinting of sheet music.
PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation
Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.
2007-06-23
In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval-2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich feature set produced results that are significantly better than baseline and achieved the highest F-score for the fine-grained English All-Words subtask.
[Scoring--criteria for operability].
Oestern, H J
1997-01-01
For therapeutic recommendations three different kinds of scores are essential: 1. severity scores for trauma; 2. severity scores for mangled extremities; 3. intensive care scores. The severity of injury in polytrauma patients is measurable by the AIS, ISS, RTS, PTS and TRISS, the last being a combination of RTS, ISS, age, and mechanism of injury. For mangled extremities different scores are also available: the MESI (Mangled Extremity Syndrome Index) and the MESS (Mangled Extremity Severity Score). The aim of these scores is to assist in deciding whether to amputate or to save the extremity. Intensive care scoring indices can be used to evaluate the severity of a systemic inflammatory response syndrome with respect to multiple organ failure. All scores are dynamic values which vary as therapy improves.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m\sqrt{n})$ time, due to Micali and Vazirani [MV80]. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft [HK73] also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has recently been improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) [GKK10]. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over [MV80]. We use a Markov chain similar to the hard-core model for Glauber dynamics with fugacity parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs distribution [V99], to design a faster algori...
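A minimal sketch of Glauber dynamics on matchings (our own toy version with an assumed fugacity and step count; the paper's algorithm and analysis are considerably more involved): each step picks a random edge and adds or removes it with probabilities set by the fugacity λ, so matchings M are sampled with weight proportional to λ^|M|, and a large λ pushes the chain toward near-maximum matchings.

    import random

    def glauber_matching(n, edges, lam=50.0, steps=200_000, seed=6):
        random.seed(seed)
        matched = [None] * n             # matched[v] = partner of v, or None
        in_matching = set()
        for _ in range(steps):
            u, v = random.choice(edges)
            if (u, v) in in_matching:    # remove with probability 1/(1+lam)
                if random.random() < 1.0 / (1.0 + lam):
                    in_matching.discard((u, v))
                    matched[u] = matched[v] = None
            elif matched[u] is None and matched[v] is None:
                if random.random() < lam / (1.0 + lam):   # add the free edge
                    in_matching.add((u, v))
                    matched[u], matched[v] = v, u
        return in_matching

    # an 8-cycle has a perfect matching of size 4
    edges = [(i, (i + 1) % 8) for i in range(8)]
    print(len(glauber_matching(8, edges)))   # typically 4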
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Relationship of Apgar Scores and Bayley Mental and Motor Scores
Serunian, Sally A.; Broman, Sarah H.
1975-01-01
Examined the relationship of newborns' 1-minute Apgar scores to their 8-month Bayley mental and motor scores and to 8-month classifications of their development as normal, suspect, or abnormal. Also investigated relationships between Apgar scores and race, longevity, and birth weight. (JMB)
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary process. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Siana Halim
2014-01-01
It is generally easier to predict defaults accurately if a large data set (including defaults) is available for estimating the prediction model. This puts not only small banks, which tend to have smaller data sets, at a disadvantage; it can also pose a problem for large banks that began to collect their own historical data only recently, or banks that recently introduced a new rating system. We used a Bayesian methodology that enables banks with small data sets to improve their default probability estimates. Another advantage of the Bayesian method is that it provides a natural way of dealing with structural differences between a bank's internal data and additional, external data. In practice, the true scoring function may differ across the data sets, the small internal data set may contain information that is missing in the larger external data set, or the variables in the two data sets may not be exactly the same but related. The Bayesian method can handle such problems.
Developmental Sentence Scoring for Japanese
Miyata, Susanne; MacWhinney, Brian; Otomo, Kiyoshi; Sirai, Hidetosi; Oshima-Takane, Yuriko; Hirakawa, Makiko; Shirai, Yasuhiro; Sugiura, Masatoshi; Itoh, Keiko
2013-01-01
This article reports on the development and use of the Developmental Sentence Scoring for Japanese (DSSJ), a new morpho-syntactical measure for Japanese constructed after the model of Lee's English Developmental Sentence Scoring model. Using this measure, the authors calculated DSSJ scores for 84 children divided into six age groups between 2;8…
McCluskey, Neal
2017-01-01
Since at least the enactment of No Child Left Behind in 2002, standardized test scores have served as the primary measures of public school effectiveness. Yet, such scores fail to measure the ultimate goal of education: maximizing happiness. This exploratory analysis assesses nation level associations between test scores and happiness, controlling…
Line Lengths and Starch Scores.
Moriarty, Sandra E.
1986-01-01
Investigates readability of different line lengths in advertising body copy, hypothesizing a normal curve with lower scores for shorter and longer lines, and scores above the mean for lines in the middle of the distribution. Finds support for lower scores for short lines and some evidence of two optimum line lengths rather than one. (SKC)
Constrained Fisher Scoring for a Mixture of Factor Analyzers
2016-09-01
US Army Research Laboratory report ARL-TR-7836, September 2016, by Gene T Whipps and Emre Ertin. Keywords: constrained maximum likelihood estimation, mixture of factor analyzers, Newton's method; the estimator fits a global appearance model across the entire sensor network.
Description and validation of a scoring system for tomosynthesis in pulmonary cystic fibrosis
Vult von Steyern, Kristina; Bjoerkman-Burtscher, Isabella M.; Bozovic, Gracijela; Wiklund, Marie; Geijer, Mats [Skaane University Hospital, Lund University, Centre for Medical Imaging and Physiology, Lund (Sweden); Hoeglund, Peter [Skaane University Hospital, Competence Centre for Clinical Research, Lund (Sweden)
2012-12-15
To design and validate a scoring system for tomosynthesis (digital tomography) in pulmonary cystic fibrosis. A scoring system dedicated to tomosynthesis in pulmonary cystic fibrosis was designed. Three radiologists independently scored 88 pairs of chest radiographs and tomosynthesis examinations in 60 patients with cystic fibrosis and 7 oncology patients. Radiographs were scored according to the Brasfield scoring system and tomosynthesis examinations were scored using the new scoring system. Observer agreement for the tomosynthesis score was almost perfect for the total score, with square-weighted kappa >0.90, and generally substantial to almost perfect for subscores. Correlation between the tomosynthesis score and the Brasfield score was good for the three observers (Kendall's rank correlation tau 0.68, 0.77 and 0.78). Tomosynthesis was generally scored higher as a percentage of the maximum score. Observer agreement for the total Brasfield score was almost perfect (square-weighted kappa 0.80, 0.81 and 0.85). The tomosynthesis scoring system seems robust and correlates well with the Brasfield score. Compared with radiography, tomosynthesis is more sensitive to cystic fibrosis changes, especially bronchiectasis and mucus plugging, and the new tomosynthesis scoring system offers the possibility of more detailed and accurate scoring of disease severity. (orig.)
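For reference, the square-weighted (quadratic) kappa reported above can be computed as follows (a generic Python sketch on toy ratings, not the study data):

    import numpy as np

    def quadratic_kappa(r1, r2, k):
        # observed and chance-expected joint distributions of two raters
        O, _, _ = np.histogram2d(r1, r2, bins=k, range=[[0, k], [0, k]])
        O /= O.sum()
        E = np.outer(O.sum(1), O.sum(0))
        i, j = np.indices((k, k))
        w = (i - j) ** 2 / (k - 1) ** 2      # squared disagreement weights
        return 1.0 - (w * O).sum() / (w * E).sum()

    rng = np.random.default_rng(10)
    r1 = rng.integers(0, 5, 200)                        # rater 1, scores 0..4
    r2 = np.clip(r1 + rng.integers(-1, 2, 200), 0, 4)   # mostly-agreeing rater 2
    print(round(quadratic_kappa(r1, r2, 5), 3))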
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to derive the iterative formula for the error-prediction filter, from which the receiver function is estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy treatment of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
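A minimal Python sketch of the Levinson recursion for the error-prediction filter (generic implementation on a synthetic AR(2) series; the paper applies it to seismogram auto- and cross-correlations). Note the reflection coefficient, whose magnitude staying below 1 is exactly what keeps the recursion stable.

    import numpy as np

    def levinson(r, order):
        """Solve the Toeplitz system for prediction coefficients a[0..order-1]."""
        a = np.zeros(order)
        err = r[0]
        for k in range(order):
            # reflection coefficient; |k_ref| < 1 keeps the recursion stable
            k_ref = (r[k + 1] - np.dot(a[:k], r[k:0:-1])) / err
            a[:k] -= k_ref * a[:k][::-1]
            a[k] = k_ref
            err *= 1.0 - k_ref**2
        return a, err

    # AR(2) test series: x_t = 0.6 x_{t-1} - 0.2 x_{t-2} + noise
    rng = np.random.default_rng(8)
    x = np.zeros(5000)
    for t in range(2, x.size):
        x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + rng.normal()
    r = np.array([x[:x.size - l] @ x[l:] for l in range(3)]) / x.size
    a, err = levinson(r, 2)
    print("estimated prediction filter:", a)   # close to [0.6, -0.2]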
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power. These quantities are found by maximizing the power equation using differentiation. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as functions of the time of day.
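As a concrete, simplified illustration of this differentiation approach, the sketch below assumes a single-diode panel model and locates the voltage and current of maximum power by finding where the power curve peaks (dP/dV = 0); all parameter values are illustrative assumptions, not the article's data.

```python
import numpy as np

# illustrative single-diode model: I(V) = I_L - I_0 * (exp(V / V_T) - 1)
I_L, I_0 = 5.0, 1e-9          # photocurrent [A], diode saturation current [A] (assumed)
V_T = 0.0257 * 36             # lumped thermal voltage for a hypothetical 36-cell panel [V]

def current(v):
    return I_L - I_0 * (np.exp(v / V_T) - 1.0)

v = np.linspace(0.0, 23.0, 20000)   # sweep up to roughly the open-circuit voltage
p = v * current(v)

# dP/dV = 0 at the maximum power point; here located on a dense grid
k = np.argmax(p)
print(f"V_mp = {v[k]:.2f} V, I_mp = {current(v[k]):.2f} A, P_max = {p[k]:.1f} W")
```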
[Propensity score matching in SPSS].
Huang, Fuqiang; DU, Chunlin; Sun, Menghui; Ning, Bing; Luo, Ying; An, Shengli
2015-11-01
To realize propensity score matching in the PS Matching module of SPSS and interpret the analysis results. The R software, the plug-in linking R with the corresponding version of SPSS, and the propensity score matching package were installed, which added a PS Matching module to the SPSS interface; its use was demonstrated with test data. Propensity score estimation and nearest-neighbor matching were achieved with the PS Matching module, and the results of qualitative and quantitative statistical description and evaluation were presented in the form of matching graphs. Propensity score matching can be accomplished conveniently using SPSS software.
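The same workflow can be sketched outside SPSS. Below is a minimal, hypothetical nearest-neighbor match on an estimated propensity score using scikit-learn (matching with replacement, for brevity); the data and variable names are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                      # covariates (synthetic)
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # treatment depends on X[:, 0]

# step 1: estimate propensity scores P(t = 1 | X)
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

# step 2: 1-to-1 nearest-neighbor matching on the propensity score
treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matches = control[idx.ravel()]                     # matched control for each treated unit
print(np.mean(np.abs(ps[treated] - ps[matches])))  # mean matching distance (balance check)
```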
Confidence scores for prediction models
Gerds, Thomas Alexander; van de Wiel, MA
2011-01-01
modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as estimates of population-average confidence scores. The latter can be used to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer…
Modelling sequentially scored item responses
Akkermans, W.
2000-01-01
The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is c
Classification of current scoring functions.
Liu, Jie; Wang, Renxiao
2015-03-23
Scoring functions are a class of computational methods widely applied in structure-based drug design for evaluating protein-ligand interactions. Dozens of scoring functions have been published since the early 1990s. In the literature, scoring functions are typically classified as force-field-based, empirical, and knowledge-based. This classification scheme has been quoted for more than a decade and is still repeatedly quoted in some recent publications. Unfortunately, it does not reflect the recent progress in this field. Besides, the naming conventions used for describing different types of scoring functions have become somewhat jumbled in the literature, which can be confusing for newcomers to this field. Here, we express our viewpoint on an up-to-date classification scheme and appropriate naming conventions for current scoring functions. We propose that they can be classified into physics-based methods, empirical scoring functions, knowledge-based potentials, and descriptor-based scoring functions. We also outline the major differences and connections between the different categories of scoring functions.
The Machine Scoring of Writing
McCurry, Doug
2010-01-01
This article provides an introduction to the kind of computer software that is used to score student writing in some high stakes testing programs, and that is being promoted as a teaching and learning tool to schools. It sketches the state of play with machines for the scoring of writing, and describes how these machines work and what they do.…
Skyrocketing Scores: An Urban Legend
Krashen, Stephen
2005-01-01
A new urban legend claims, "As a result of the state dropping bilingual education, test scores in California skyrocketed." Krashen disputes this theory, pointing out that other factors offer more logical explanations of California's recent improvements in SAT-9 scores. He discusses research on the effects of California's Proposition 227,…
Quadratic prediction of factor scores
Wansbeek, T
1999-01-01
Factor scores are naturally predicted by means of their conditional expectation given the indicators y. Under normality this expectation is linear in y, but in general it is an unknown function of y. It is discussed that under nonnormality factor scores can be more precisely predicted by a quadratic
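For context, the linear predictor referred to here is the standard normal-theory (regression) factor-score formula; writing the factor model as $y = \Lambda f + \varepsilon$ with $\mathrm{Cov}(f) = I$ and $\mathrm{Cov}(\varepsilon) = \Psi$ (our notation, which may differ from the paper's):

```latex
\hat{f} \;=\; \mathbb{E}[f \mid y] \;=\; \Lambda^{\top}\left(\Lambda\Lambda^{\top} + \Psi\right)^{-1} y
```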
Trends in Classroom Observation Scores
Casabianca, Jodi M.; Lockwood, J. R.; McCaffrey, Daniel F.
2015-01-01
Observations and ratings of classroom teaching and interactions collected over time are susceptible to trends in both the quality of instruction and rater behavior. These trends have potential implications for inferences about teaching and for study design. We use scores on the Classroom Assessment Scoring System-Secondary (CLASS-S) protocol from…
D-score: a search engine independent MD-score.
Vaudel, Marc; Breiter, Daniela; Beck, Florian; Rahnenführer, Jörg; Martens, Lennart; Zahedi, René P
2013-03-01
While peptides carrying PTMs are routinely identified in gel-free MS, the localization of the PTMs onto the peptide sequences remains challenging. Search engine scores of secondary peptide matches have been used in different approaches in order to infer the quality of site inference, by penalizing the localization whenever the search engine similarly scored two candidate peptides with different site assignments. In the present work, we show how the estimation of posterior error probabilities for peptide candidates allows the estimation of a PTM score called the D-score, for multiple search engine studies. We demonstrate the applicability of this score to three popular search engines: Mascot, OMSSA, and X!Tandem, and evaluate its performance using an already published high resolution data set of synthetic phosphopeptides. For those peptides with phosphorylation site inference uncertainty, the number of spectrum matches with correctly localized phosphorylation increased by up to 25.7% when compared to using Mascot alone, although the actual increase depended on the fragmentation method used. Since this method relies only on search engine scores, it can be readily applied to the scoring of the localization of virtually any modification at no additional experimental or in silico cost.
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm, which uses two maximum dynamic flow algorithms, is proposed to solve the problem.
Bhomi, K K; Subedi, N; Panta, P P
2017-01-01
International prostate symptom score (IPSS) is a validated questionnaire used to evaluate lower urinary tract symptoms in benign prostatic hyperplasia. Visual prostate symptom score (VPSS) is a new simplified symptom score that uses pictograms to evaluate the same. We evaluated the correlation of the visual prostate symptom score with the international prostate symptom score and uroflowmetry parameters in Nepalese male patients with lower urinary tract symptoms. Male patients aged ≥40 years attending the urology clinic were enrolled in the study. They were given the international prostate symptom score and visual prostate symptom score questionnaires to complete, with assistance provided whenever needed. Demographic data, examination findings and uroflowmetry parameters were noted. Correlation and regression analysis was used to assess the relation between the two scoring systems and the uroflowmetry parameters. Among the 66 patients enrolled, only 10 (15.15%) were able to understand English. There was a statistically significant correlation between the total visual prostate symptom score and the international prostate symptom score (r = 0.822). The correlations between individual scores of the two scoring systems relating to force of urinary stream, frequency, nocturia and quality of life were also statistically significant, as were the correlations of both scores with maximum flow rate and average flow rate. There is a statistically significant correlation of the visual prostate symptom score with the international prostate symptom score and uroflowmetry parameters. The IPSS can be replaced by the simpler VPSS in the evaluation of lower urinary tract symptoms in elderly male patients.
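A minimal sketch of the kind of correlation analysis reported here, using hypothetical scores (the study's raw data are not reproduced in the abstract):

```python
import numpy as np
from scipy.stats import kendalltau, pearsonr

rng = np.random.default_rng(7)
ipss = rng.integers(0, 36, size=66)                   # hypothetical total IPSS values
vpss = np.round(ipss * 0.4 + rng.normal(0, 1.5, 66))  # hypothetical correlated VPSS values

tau, p_tau = kendalltau(vpss, ipss)
r, p_r = pearsonr(vpss, ipss)
print(f"Kendall tau = {tau:.2f} (p = {p_tau:.3g}); Pearson r = {r:.2f} (p = {p_r:.3g})")
```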
1983-07-01
equal to the maximum value for this index is due to the dependence of this index upon the magnitude and sign of the factor loadings. Gorsuch (1974, p. …) …Measurement, 1972, 9, 205-207. Gorsuch, R. L. Factor analysis. Philadelphia: W. B. Saunders Company, 1974. Guilford, J. P. A simple scoring weight for test…
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures defined by the use of generator functions. Any divergence measure in the class separates into the difference between a cross entropy and a diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
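In the familiar Boltzmann-Gibbs-Shannon case, this cross-minus-diagonal decomposition is exactly the Kullback-Leibler divergence (standard discrete notation, ours rather than the paper's):

```latex
D(p,q) \;=\; \underbrace{\Big(-\textstyle\sum_i p_i \log q_i\Big)}_{\text{cross entropy}}
\;-\; \underbrace{\Big(-\textstyle\sum_i p_i \log p_i\Big)}_{\text{diagonal entropy } H(p)}
\;=\; \sum_i p_i \log \frac{p_i}{q_i}.
```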
Obstetrical disseminated intravascular coagulation score.
Kobayashi, Takao
2014-06-01
Obstetrical disseminated intravascular coagulation (DIC) is usually a very acute, serious complication of pregnancy. The obstetrical DIC score helps with making a prompt diagnosis and starting treatment early. This DIC score, in which higher scores are given for clinical parameters rather than for laboratory parameters, has three components: (i) the underlying diseases; (ii) the clinical symptoms; and (iii) the laboratory findings (coagulation tests). It is justifiably appropriate to initiate therapy for DIC when the obstetrical DIC score reaches 8 points or more before obtaining the results of coagulation tests. Improvement of blood coagulation tests and clinical symptoms are essential to the efficacy evaluation for treatment after a diagnosis of obstetrical DIC. Therefore, the efficacy evaluation criteria for obstetrical DIC are also defined to enable follow-up of the clinical efficacy of DIC therapy.
… 2 being the best score: Appearance (skin color), Pulse (heart rate), Grimace response (reflexes) …
From Rasch scores to regression
Christensen, Karl Bang
2006-01-01
Rasch models provide a framework for measurement and modelling of latent variables. Having measured a latent variable in a population, a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale properties… This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables, and latent regression models based on the distribution of the score.
Commercial Building Energy Asset Score
2017-05-26
This software (Asset Scoring Tool) is designed to help building owners and managers gain insight into the as-built efficiency of their buildings. It is a web tool where users enter their building information and obtain an asset score report. The asset score report consists of modeled building energy use (by end use and by fuel type), evaluations of the building systems (envelope, lighting, heating, cooling, service hot water), and recommended energy efficiency measures. The intended users are building owners and operators who have limited knowledge of building energy efficiency. The scoring tool collects minimal building data (~20 data entries) from users and builds a full-scale energy model using the inference functionality of the Facility Energy Decision System (FEDS). The scoring tool runs real-time building energy simulation using EnergyPlus and performs life-cycle cost analysis using FEDS. An API is also under development to allow third-party applications to exchange data with the web service of the scoring tool.
Rhee CK
2015-08-01
more symptomatic. We aimed to identify the ideal CAT score that exhibits minimal discrepancy with the mMRC score. Methods: A receiver operating characteristic (ROC) curve of the CAT score was generated for mMRC scores of 1 and 2. A concordance analysis was applied to quantify the association between the frequencies of patients categorized into GOLD groups A-D using symptom cutoff points, and a κ-coefficient was calculated. Results: For an mMRC score of 2, a CAT score of 15 showed the maximum value of Youden's index, with a sensitivity and specificity of 0.70 and 0.66, respectively (area under the ROC curve [AUC] 0.74; 95% confidence interval [CI], 0.70–0.77). For an mMRC score of 1, a CAT score of 10 showed the maximum value of Youden's index, with a sensitivity and specificity of 0.77 and 0.65, respectively (AUC 0.77; 95% CI, 0.72–0.83). The κ value for concordance was highest between an mMRC score of 1 and a CAT score of 10 (0.66), followed by an mMRC score of 2 and a CAT score of 15 (0.56), an mMRC score of 2 and a CAT score of 10 (0.47), and an mMRC score of 1 and a CAT score of 15 (0.43). Conclusion: A CAT score of 10 was most concordant with an mMRC score of 1 when classifying patients with COPD into GOLD groups A-D. However, a discrepancy remains between the CAT and mMRC scoring systems. Keywords: COPD, CAT, mMRC, concordance, discrepancy
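Youden's index, used here to choose the CAT cutoffs, is simply sensitivity + specificity − 1 maximized over thresholds; a generic sketch with illustrative data (not the study's) follows.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)
mmrc_high = rng.binomial(1, 0.5, 300)   # 1 if mMRC >= 2 (illustrative labels)
cat = np.where(mmrc_high,
               rng.normal(18, 6, 300),  # hypothetical CAT scores, more symptomatic group
               rng.normal(11, 6, 300)).clip(0, 40)

fpr, tpr, thresholds = roc_curve(mmrc_high, cat)
j = tpr - fpr                            # Youden's J at each candidate threshold
best = np.argmax(j)
print(f"best CAT cutoff ~ {thresholds[best]:.0f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```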
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), while the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation, as well as to the determination of several parameters of interest in quantum optics.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally incorporates the requirement of maximum entropy, the characteristics of the system, and the constraint conditions. It can be applied to the statistical description of both closed and open systems. Examples are considered in which MENT is used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
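As a small illustration of the general recipe (maximize entropy subject to constraints), the sketch below finds the maximum entropy distribution on {0, …, 9} with a prescribed mean; the Gibbs form p_i ∝ exp(λ x_i) follows from the Lagrangian, and λ is solved numerically. The example is ours, not from the paper.

```python
import numpy as np
from scipy.optimize import brentq

x = np.arange(10)        # support of the distribution
target_mean = 6.0        # moment constraint E[x] = 6

def mean_given_lam(lam):
    w = np.exp(lam * x)  # Gibbs weights from the entropy Lagrangian
    p = w / w.sum()
    return p @ x

# solve for the Lagrange multiplier that matches the constraint
lam = brentq(lambda l: mean_given_lam(l) - target_mean, -10, 10)
p = np.exp(lam * x)
p /= p.sum()
print(p.round(4), (p @ x).round(3))  # MaxEnt distribution and its mean
```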
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Full Text Available Sexual identification from the skeletal parts has medico legal and anthropological importance. Present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. Study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female from skeletal collections of Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of femur was considered as maximum vertical distance between upper end of head of femur and the lowest point on femoral condyle, measured with the osteometric board. Mean Values obtained were, 451.81 and 417.48 for right male and female, and 453.35 and 420.44 for left male and female respectively. Higher value in male was statistically highly significant (P< 0.001 on both sides. Demarking point (D.P. analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; while for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2.000: 67-70
Skin scoring in systemic sclerosis
Zachariae, Hugh; Bjerring, Peter; Halkier-Sørensen, Lars
1994-01-01
Forty-one patients with systemic sclerosis were investigated with a new and simple skin score method measuring the degree of thickening and pliability in seven regions together with area involvement in each region. The highest values were, as expected, found in diffuse cutaneous systemic sclerosis (type III SS) and the lowest in limited cutaneous systemic sclerosis (type I SS), with no lesions extending above the wrists and ankles. A positive correlation was found to the aminoterminal propeptide of type III procollagen, a serological marker for synthesis of type III collagen. The skin score…
Developing Scoring Algorithms (Earlier Methods)
We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.
Antonio Oliveira-Neto
2012-01-01
Objective. To evaluate the performance of the Sequential Organ Failure Assessment (SOFA) score in cases of severe maternal morbidity (SMM). Design. Retrospective study of diagnostic validation. Setting. An obstetric intensive care unit (ICU) in Brazil. Population. 673 women with SMM. Main Outcome Measures. Mortality and SOFA score. Methods. Organ failure was evaluated according to the maximum score for each of its six components. The total maximum SOFA score was calculated using the poorest result of each component, reflecting the maximum degree of alteration in systemic organ function. Results. The highest total maximum SOFA score was associated with mortality: 12.06 ± 5.47 for women who died and 1.87 ± 2.56 for survivors. There was also a significant correlation between the number of failing organs and maternal mortality, ranging from 0.2% (no failure) to 85.7% (≥3 organs). Analysis of the area under the receiver operating characteristic (ROC) curve (AUC) confirmed the excellent performance of the total maximum SOFA score for cases of SMM (AUC = 0.958). Conclusions. The total maximum SOFA score proved to be an effective tool for evaluating severity and estimating prognosis in cases of SMM. The maximum SOFA score may be used to conceptually define and stratify the degree of severity in cases of SMM.
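A minimal sketch of how a "total maximum SOFA" is assembled from serial component scores (hypothetical data layout; the column names and two-component example are ours):

```python
import pandas as pd

# hypothetical long-format data: one row per patient, assessment, and organ component
df = pd.DataFrame({
    "patient":   [1, 1, 1, 1, 2, 2],
    "component": ["resp", "coag", "resp", "coag", "resp", "coag"],
    "score":     [2, 1, 3, 2, 0, 1],
})

# worst (maximum) score per component over the stay, then sum across components
max_per_component = df.groupby(["patient", "component"])["score"].max()
total_max_sofa = max_per_component.groupby("patient").sum()
print(total_max_sofa)  # patient 1 -> 3 + 2 = 5, patient 2 -> 0 + 1 = 1
```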
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets (rooted binary trees on three leaves) or quartets (unrooted binary trees on four leaves). We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
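The stated bound (maximum seismic moment = injected volume × modulus of rigidity) converts directly into a maximum moment magnitude. The sketch below uses a Hanks-Kanamori-type moment-magnitude relation and a typical crustal rigidity of about 3×10^10 Pa; both parameter choices are ours, for illustration.

```python
import numpy as np

def max_magnitude(injected_volume_m3, rigidity_pa=3e10):
    """Upper-bound moment magnitude for injection-induced seismicity,
    following the bound M0_max = G * dV discussed above."""
    m0_max = rigidity_pa * injected_volume_m3        # maximum seismic moment [N*m]
    return (2.0 / 3.0) * (np.log10(m0_max) - 9.05)   # moment magnitude from moment

for v in [1e4, 1e5, 1e6]:                            # injected volumes in cubic meters
    print(f"dV = {v:.0e} m^3  ->  Mw_max ~ {max_magnitude(v):.1f}")
# the largest volume (~1e6 m^3, wastewater-disposal scale) gives Mw_max ~ 5,
# consistent with the largest observed induced magnitudes noted above
```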
Gasselseder, Hans-Peter
2014-01-01
This study explores immersive presence as well as emotional valence and arousal in the context of dynamic and non-dynamic music scores in the 3rd person action-adventure video game genre, while also considering relevant personality traits of the player. 60 subjects answered self-report questionnaires… that a compatible integration of global and local goals in the ludonarrative contributes to a motivational-emotional reinforcement that can be gained through musical feedback. Shedding light on the implications of music dramaturgy within a semantic ecology paradigm, the perception of varying relational attributes…
Estimating Decision Indices Based on Composite Scores
Knupp, Tawnya Lee
2009-01-01
The purpose of this study was to develop an IRT model that would enable the estimation of decision indices based on composite scores. The composite scores, defined as a combination of unidimensional test scores, were either a total raw score or an average scale score. Additionally, estimation methods for the normal and compound multinomial models…
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that of networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced…
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions…
Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM
Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman
2012-01-01
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…
CytoMCS: A Multiple Maximum Common Subgraph Detection Tool for Cytoscape
Larsen, Simon; Baumbach, Jan
2017-01-01
such analyses we have developed CytoMCS, a Cytoscape app for computing inexact solutions to the maximum common edge subgraph problem for two or more graphs. Our algorithm uses an iterative local search heuristic for computing conserved subgraphs, optimizing a squared edge conservation score that is able...
Carla Franchi-Pinto
1999-03-01
Intraclass correlation coefficients for one- and five-min Apgar scores of 604 twin pairs born at a southeastern Brazilian hospital were calculated, after adjusting these scores for gestational age and sex. The data support a genetic hypothesis only for the 1-min Apgar score, probably because it is less affected by the environment than the score 4 min later, after the newborns have been under the care of a neonatology team. First-born twins exhibited, on average, better clinical condition than second-born twins: they showed a significantly lower proportion of Apgar scores under seven, both at 1 min (17.5% vs. 29.8%) and at 5 min (7.2% vs. 11.9%). The proportion of children born with "good" Apgar scores was significantly smaller among twins than among 1,522 singletons born at the same hospital; among the latter, 1- and 5-min Apgar scores under seven were exhibited by 9.2% and 3.4% of newborns, respectively.
Marginal Maximum Likelihood Estimation of Item Response Models in R
Matthew S. Johnson
2007-02-01
Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
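As a toy illustration of marginal maximum likelihood in IRT (the estimation method this paper implements in R), the sketch below fits the item difficulties of a simple Rasch model by integrating out the ability distribution with Gauss-Hermite quadrature. This is our own minimal analogue, not the paper's code, and it uses Python rather than R for consistency with the other sketches in this section.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_persons, n_items = 500, 5
true_b = np.linspace(-1.5, 1.5, n_items)        # item difficulties
theta = rng.standard_normal(n_persons)          # abilities ~ N(0, 1)
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - true_b)))
X = (rng.random((n_persons, n_items)) < prob).astype(float)  # simulated 0/1 responses

# Gauss-Hermite nodes rescaled to integrate against the N(0, 1) ability density
nodes, weights = np.polynomial.hermite.hermgauss(21)
nodes, weights = nodes * np.sqrt(2.0), weights / np.sqrt(np.pi)

def neg_marginal_loglik(b):
    p = 1.0 / (1.0 + np.exp(-(nodes[:, None] - b[None, :])))   # (quad, items)
    # log-likelihood of each response pattern at each quadrature node
    ll = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T           # (persons, quad)
    return -np.sum(np.log(np.exp(ll) @ weights))               # marginal over ability

fit = minimize(neg_marginal_loglik, np.zeros(n_items), method="BFGS")
print(np.round(fit.x, 2), "vs true", true_b)
```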
Composite MRI scores improve correlation with EDSS in multiple sclerosis.
Poonawalla, A H; Datta, S; Juneja, V; Nelson, F; Wolinsky, J S; Cutter, G; Narayana, P A
2010-09-01
Quantitative measures derived from magnetic resonance imaging (MRI) have been widely investigated as non-invasive biomarkers in multiple sclerosis (MS). However, the correlation of single measures with the Expanded Disability Status Scale (EDSS) is poor, especially for studies with large population samples. Our objective was to explore the correlation of MRI-derived measures with EDSS through composite MRI scores. Magnetic resonance images of 126 patients with relapsing-remitting MS were segmented into white and gray matter, cerebrospinal fluid, T2-hyperintense lesions, gadolinium contrast-enhancing lesions, and T1-hypointense lesions ('black holes': BH). The volumes and average T2 values for each of these tissues and lesions were calculated and converted to z-scores (in units of standard deviations from the mean). These z-scores were combined to construct composite z-scores, which were evaluated against individual z-scores for correlation with EDSS. Composite scores including relaxation times of different tissues and/or volumetric measures generally correlated more strongly with EDSS than individual measures. The maximum observed correlation of a composite with EDSS was r = 0.344.
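A minimal sketch of the z-score compositing described here, on synthetic data (the standardize-then-combine construction is generic; nothing below reproduces the study's measures):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
edss = rng.uniform(0, 7, 126)
# three synthetic MRI measures, each only weakly related to disability
measures = np.column_stack([edss * w + rng.normal(0, 3, 126) for w in (0.5, 0.3, 0.4)])

z = (measures - measures.mean(axis=0)) / measures.std(axis=0)  # per-measure z-scores
composite = z.mean(axis=1)                                     # simple composite z-score

for j in range(3):
    print(f"measure {j}: rho = {spearmanr(z[:, j], edss)[0]:.2f}")
print(f"composite : rho = {spearmanr(composite, edss)[0]:.2f}")  # typically the strongest
```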
Maximum Likelihood Learning of Conditional MTE Distributions
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2009-01-01
We describe a procedure for inducing conditional densities within the mixtures of truncated exponentials (MTE) framework. We analyse possible conditional MTE specifications and propose a model selection scheme, based on the BIC score, for partitioning the domain of the conditioning variables. Finally, experimental results demonstrate the applicability of the learning procedure as well as the expressive power of the conditional MTE distribution.
The Tipping Point: F-Score as a Function of the Number of Retrieved Items
Guns, Raf; Lioma, Christina; Larsen, Birger
2012-01-01
One of the best known measures of information retrieval (IR) performance is the F-score, the harmonic mean of precision and recall. In this article we show that the curve of the F-score as a function of the number of retrieved items is always of the same shape: a fast concave increase to a maximum, followed by a slow decrease. In other words, there exists a single maximum, referred to as the tipping point, where the retrieval situation is 'ideal' in terms of the F-score. The tipping point thus indicates the optimal number of items to be retrieved, with more or fewer items resulting in a lower F-score.
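The shape is easy to reproduce: given a ranked list of binary relevance judgments, the sketch below computes precision, recall, and F-score at each cutoff k and locates the tipping point. The toy data are ours.

```python
import numpy as np

rng = np.random.default_rng(2)
# toy ranked list: relevant documents are concentrated near the top
rel = (rng.random(200) < np.linspace(0.9, 0.05, 200)).astype(float)
n_relevant = rel.sum()

hits = np.cumsum(rel)                    # relevant items among the top k
k = np.arange(1, 201)
precision, recall = hits / k, hits / n_relevant
f1 = np.where(hits > 0, 2 * precision * recall / (precision + recall), 0.0)

tip = np.argmax(f1) + 1                  # tipping point: the k maximizing the F-score
print(f"tipping point at k = {tip}, F = {f1[tip - 1]:.3f}")
```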
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered, and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling outside the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of the various topologies for MPPT is given, and selection of the converter topology for a given load is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of the various converter systems is then verified through simulations. The proposed theory and analysis are validated through experimental investigations.
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^3$ universe at the final stage, $S_{rad}$, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_h$, and show that it becomes maximum around $v_h = \mathcal{O}(300\,\mathrm{GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_h \sim T_{BBN}^2/(M_{pl} y_e^5)$, where $y_e$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five maximum phonation time trials per subject. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times than healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939; when using five trials, the reliability increased to 0.987. The reliability over five trials was 0.836 for a single day, 0.911 for 2 days, and 0.935 for 3 days. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment, and a single rater is sufficient to provide highly reliable measurements.
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
Interpreting force concept inventory scores: Normalized gain and SAT scores
Vincent P. Coletta
2007-05-01
Preinstruction SAT scores and normalized gains (G) on the force concept inventory (FCI) were examined for individual students in interactive engagement (IE) courses in introductory mechanics at one high school (N = 335) and one university (N = 292), and strong, positive correlations were found for both populations (r = 0.57 and r = 0.46, respectively). These correlations are likely due to the importance of cognitive skills and abstract reasoning in learning physics. The larger correlation coefficient for the high school population may be a result of the much shorter time interval between taking the SAT and studying mechanics, because the SAT may provide a more current measure of abilities when high school students begin the study of mechanics than it does for college students, who begin mechanics years after the test is taken. In prior research a strong correlation between FCI G and scores on Lawson's Classroom Test of Scientific Reasoning for students from the same two schools was observed. Our results suggest that, when interpreting class average normalized FCI gains and comparing different classes, it is important to take into account the variation of students' cognitive skills, as measured either by the SAT or by Lawson's test. While Lawson's test is not commonly given to students in most introductory mechanics courses, SAT scores provide a readily available alternative means of taking account of students' reasoning abilities. Knowing the students' cognitive level before instruction also allows one to alter instruction or to use an intervention designed to improve students' cognitive level.
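For readers unfamiliar with the statistic, the normalized gain G used here is conventionally defined (following Hake) as the actual gain on the FCI divided by the maximum possible gain, with pre- and post-test scores expressed as percentages:

```latex
G \;=\; \frac{\%\langle \mathrm{post} \rangle - \%\langle \mathrm{pre} \rangle}{100 - \%\langle \mathrm{pre} \rangle}
```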
Bias Adjusted Precipitation Threat Scores
F. Mesinger
2008-04-01
Among the wide variety of performance measures available for the assessment of skill of deterministic precipitation forecasts, the equitable threat score (ETS) might well be the one used most frequently. It is typically used in conjunction with the bias score. However, apart from its mathematical definition the meaning of the ETS is not clear. It has been pointed out (Mason, 1989; Hamill, 1999) that forecasts with a larger bias tend to have a higher ETS. Even so, the present author has not seen this having been accounted for in any of numerous papers that in recent years have used the ETS along with bias "as a measure of forecast accuracy".
A method to adjust the threat score (TS) or the ETS so as to arrive at their values that correspond to unit bias, in order to show the model's or forecaster's accuracy in placing precipitation, was proposed earlier by the present author (Mesinger and Brill), the so-called dH/dF method. A serious deficiency has since been noted with the dH/dF method, however, in that the hypothetical function that it arrives at to interpolate or extrapolate the observed value of hits to unit bias can have values of hits greater than forecast when the forecast area tends to zero. Another method is proposed here based on the assumption that the increase in hits per unit increase in false alarms is proportional to the yet unhit area. This new method removes the deficiency of the dH/dF method. Examples of its performance for 12 months of forecasts by three NCEP operational models are given.
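The stated assumption, that hits H grow with false alarms F at a rate proportional to the not-yet-hit observed area O - H, gives dH/dF = k(O - H) and hence H(F) = O(1 - exp(-kF)) when H(0) = 0. A sketch of how a threat score could then be moved to unit bias; this is our reading of the assumption, not necessarily the paper's exact procedure, and the counts are hypothetical:

```python
import numpy as np
from scipy.optimize import brentq

def bias_adjusted_ts(hits, misses, false_alarms):
    """Adjust the threat score to unit bias, assuming hits grow with
    false alarms in proportion to the yet-unhit observed area:
    dH/dF = k (O - H), so H(F) = O (1 - exp(-k F))."""
    O = hits + misses                       # observed area
    # Fit k from the observed (false_alarms, hits) point.
    k = -np.log(1.0 - hits / O) / false_alarms
    # Unit bias: forecast area H1 + F1 must equal the observed area O.
    f1 = brentq(lambda F: O * (1.0 - np.exp(-k * F)) + F - O, 1e-12, O)
    h1 = O - f1
    return h1 / (O + f1)    # TS = hits / (hits + misses + false alarms)

# Hypothetical contingency-table counts with bias (40+50)/70 > 1:
print(bias_adjusted_ts(hits=40.0, misses=30.0, false_alarms=50.0))
```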
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
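For the Gaussian case, this l1-penalized maximum likelihood problem is what is now widely known as the graphical lasso. A minimal sketch using scikit-learn's off-the-shelf solver rather than the authors' block coordinate descent or Nesterov-based algorithms:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Sample from a Gaussian whose precision matrix is sparse (tridiagonal).
precision = np.eye(10) + np.diag([0.4] * 9, 1) + np.diag([0.4] * 9, -1)
X = rng.multivariate_normal(np.zeros(10), np.linalg.inv(precision), size=500)

# alpha is the weight of the l_1 penalty on the precision matrix.
model = GraphicalLasso(alpha=0.1).fit(X)

# Off-diagonal zeros in the estimated precision matrix correspond to
# missing edges in the learned undirected graphical model.
print(np.round(model.precision_, 2))
```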
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall within this class of maximum-entropy distributions when the constraints are purely kinematic.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. For the off-line variant, we analyze the algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),...,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
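The index is easy to evaluate numerically from the adjacency spectrum. A small sketch; the bicyclic example graph (two triangles sharing a vertex) is ours, not from the paper:

```python
import numpy as np

def estrada_index(adjacency: np.ndarray) -> float:
    """EE(G) = sum_i exp(lambda_i) over the adjacency eigenvalues."""
    eigenvalues = np.linalg.eigvalsh(adjacency)  # symmetric matrix
    return float(np.exp(eigenvalues).sum())

# Bicyclic example: two triangles sharing vertex 2 (5 vertices, 6 edges).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 1],
              [0, 0, 1, 0, 1],
              [0, 0, 1, 1, 0]])
print(estrada_index(A))
```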
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
The HEART score for chest pain patients
Backus, B.E.
2012-01-01
The HEART score was developed to improve risk stratification in chest pain patients in the emergency department (ED). This thesis describes a series of validation studies of the HEART score and substudies of individual elements of the score. The predictive value of the HEART score for the occurrence
Scoring and Standard Setting with Standardized Patients.
Norcini, John J.; And Others
1993-01-01
The continuous method of scoring a performance test composed of standardized patients was compared with a derivative method that assigned each of the 131 examinees (medical residents) a dichotomous score, and use of Angoff's method with these scoring methods was studied. Both methods produce reasonable means and distributions of scores. (SLD)
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (YX/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: Xmax - X0 = (0.59 ± 0.02)·YX/P·C.
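The closing prediction equation can be applied directly once Y_X/P and the MIC are measured. A tiny sketch; the numerical inputs are hypothetical, and units follow whatever the fitted data used:

```python
def predicted_max_biomass(x0: float, yield_per_lactate: float,
                          mic_lactate: float, k: float = 0.59) -> float:
    """Prediction equation from the abstract:
    Xmax - X0 = (0.59 +/- 0.02) * Y_X/P * C."""
    return x0 + k * yield_per_lactate * mic_lactate

# Hypothetical inputs purely for illustration (g/L scale assumed):
print(predicted_max_biomass(x0=0.1, yield_per_lactate=0.15,
                            mic_lactate=180.0))
```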
Score lists in multipartite hypertournaments
Pirzada, Shariefuddin; Iványi, Antal
2010-01-01
Given non-negative integers $n_{i}$ and $\alpha_{i}$ with $0 \leq \alpha_{i} \leq n_i$ $(i=1,2,...,k)$, an $[\alpha_{1},\alpha_{2},...,\alpha_{k}]$-$k$-partite hypertournament on $\sum_{1}^{k}n_{i}$ vertices is a $(k+1)$-tuple $(U_{1},U_{2},...,U_{k},E)$, where the $U_{i}$ are $k$ vertex sets with $|U_{i}|=n_{i}$, and $E$ is a set of $\sum_{1}^{k}\alpha_{i}$-tuples of vertices, called arcs, with exactly $\alpha_{i}$ vertices from $U_{i}$, such that for any $\sum_{1}^{k}\alpha_{i}$-subset $\cup_{1}^{k}U_{i}^{\prime}$ of $\cup_{1}^{k}U_{i}$, $E$ contains exactly one of the $(\sum_{1}^{k} \alpha_{i})!$ $\sum_{1}^{k}\alpha_{i}$-tuples whose entries belong to $\cup_{1}^{k}U_{i}^{\prime}$. We obtain necessary and sufficient conditions for $k$ lists of non-negative integers in non-decreasing order to be the losing score lists and to be the score lists of some $k$-partite hypertournament.
Disclosure Risk from Factor Scores
Drechsler Jörg
2014-03-01
Remote access can be a powerful tool for providing data access for external researchers. Since the microdata never leave the secure environment of the data-providing agency, alterations of the microdata can be kept to a minimum. Nevertheless, remote access is not free from risk. Many statistical analyses that do not seem to provide disclosive information at first sight can be used by sophisticated intruders to reveal sensitive information. For this reason the list of allowed queries is usually restricted in a remote setting. However, it is not always easy to identify problematic queries. We therefore strongly support the argument that has been made by other authors: that all queries should be monitored carefully and that any microlevel information should always be withheld. As an illustrative example, we use factor score analysis, for which the output of interest - the factor loadings of the variables - seems to be unproblematic. However, as we show in the article, the individual factor scores that are usually returned as part of the output can be used to reveal sensitive information. Our empirical evaluations based on a German establishment survey emphasize that this risk is far from a purely theoretical problem.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
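The core trick, approximating the concave entropy objective by linear segments so the whole problem becomes a linear program, can be shown on a toy maximum entropy problem. A sketch under stated assumptions: we use scipy's linprog instead of the paper's bounded-variable simplex, and we recover a constrained probability mass function rather than restore a signal:

```python
import numpy as np
from scipy.optimize import linprog

n, m = 5, 60                        # 5 probabilities, 60 breakpoints each
u = np.linspace(1e-9, 1.0, m)       # breakpoints for each p_i
f = -u * np.log(u)                  # entropy segment values -u ln u

# Variables lambda[i, j] >= 0 with p_i = sum_j lambda[i, j] * u[j].
# Since -u ln u is concave and we maximize, the LP automatically stays
# on the piecewise-linear upper envelope (no integer variables needed).
c = -np.tile(f, n)                  # linprog minimizes, so negate

A_eq, b_eq = [], []
for i in range(n):                  # convexity row per probability
    row = np.zeros(n * m)
    row[i * m:(i + 1) * m] = 1.0
    A_eq.append(row); b_eq.append(1.0)
A_eq.append(np.tile(u, n)); b_eq.append(1.0)            # sum_i p_i = 1
A_eq.append(np.concatenate([i * u for i in range(n)]))  # mean constraint
b_eq.append(1.5)                                        # E[i] = 1.5

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
p = res.x.reshape(n, m) @ u
print(np.round(p, 3))  # close to the exact exponential-family solution
```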
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of ship's draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers conducted tests on models and ships to find a mathematical formula that can define squat. Various forms of calculating squat can be found in the literature. Among those most commonly used are those of Barrass, Millward, Eryuzlu, and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
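Under this model, each channel contributes an independent least-squares fit of a harmonic signal at the shared fundamental, and with unknown per-channel noise variances the joint log-likelihood reduces to a sum of N log(residual power) terms. A hedged sketch of such an estimator, our simplification rather than the paper's exact method:

```python
import numpy as np

def multichannel_pitch(channels, fs, f0_grid, num_harmonics=5):
    """Grid-search ML pitch estimate: channels share f0 but have their
    own amplitudes, phases, and Gaussian noise variances."""
    best_f0, best_cost = None, np.inf
    for f0 in f0_grid:
        cost = 0.0
        for y in channels:
            t = np.arange(len(y)) / fs
            # Harmonic model matrix: cos/sin at multiples of f0.
            Z = np.column_stack(
                [np.cos(2 * np.pi * f0 * (h + 1) * t) for h in range(num_harmonics)]
                + [np.sin(2 * np.pi * f0 * (h + 1) * t) for h in range(num_harmonics)])
            a, *_ = np.linalg.lstsq(Z, y, rcond=None)
            resid = y - Z @ a
            # Unknown noise variance per channel -> N log(residual power).
            cost += len(y) * np.log(np.mean(resid ** 2))
        if cost < best_cost:
            best_f0, best_cost = f0, cost
    return best_f0

# Two synthetic channels, same 220 Hz pitch, different phases and SNRs.
fs = 8000
t = np.arange(1024) / fs
rng = np.random.default_rng(1)
ch1 = (np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
       + 0.3 * rng.standard_normal(t.size))
ch2 = np.cos(2 * np.pi * 220 * t) + 1.0 * rng.standard_normal(t.size)
print(multichannel_pitch([ch1, ch2], fs, np.arange(100.0, 400.0, 1.0)))
```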
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
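The essential point, using Poisson rather than Gaussian statistics when counts per bin are low, can be sketched as a direct Poisson likelihood fit of a line-plus-background model. The model, data, and parameter names below are hypothetical illustrations, not CORA's internals:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = np.linspace(-5, 5, 200)          # wavelength bins

def expected_counts(params, x):
    flux, mu, sigma, bg = params     # line flux, centre, width, background
    line = flux * np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return line / (sigma * np.sqrt(2 * np.pi)) + bg

counts = rng.poisson(expected_counts([30.0, 0.5, 0.8, 0.2], x))

def neg_log_likelihood(params):
    if params[2] <= 0:               # line width must stay positive
        return np.inf
    lam = expected_counts(params, x)
    if np.any(lam <= 0):
        return np.inf
    # Poisson log-likelihood, dropping the params-independent k! term.
    return -np.sum(counts * np.log(lam) - lam)

res = minimize(neg_log_likelihood, x0=[10.0, 0.0, 1.0, 0.5],
               method="Nelder-Mead")
print(np.round(res.x, 2))   # recovered flux, centre, width, background
```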
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is a part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to $T^{-1}(I - A)$, where $I$ is the first ionization potential, $A$ is the electron affinity, and $T$ is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Bangalore, Harish; Gaies, Michael; Ocampo, Elena C; Heinle, Jeffrey S; Guffey, Danielle; Minard, Charles G; Checchia, Paul; Shekerdemian, Lara S
2017-08-01
The aim of the present study was to explore the association between a new vasoactive score - the Total Inotrope Exposure Score - and outcome, and to compare it with the established Vasoactive Inotrope Score, in children undergoing cardiac surgery with cardiopulmonary bypass. DESIGN: The present study was a single-centre, retrospective study. The study was carried out at a 21-bed cardiovascular ICU in a Tertiary Children's Hospital between September, 2010 and May, 2011. METHODS: The Total Inotrope Exposure Score is a new vasoactive score that brings together cumulative vasoactive drug exposure and incorporates dose adjustments over time. The performance of these scores - average and maximum Vasoactive Inotrope Score at 24 and 48 hours, and the Total Inotrope Exposure Score - in predicting primary clinical outcomes - either death, cardiopulmonary resuscitation, or extra-corporeal membrane oxygenation before hospital discharge - and secondary outcomes - length of invasive mechanical ventilation, length of ICU stay, and hospital stay - was calculated. MAIN RESULTS: The study cohort included 167 children under 18 years of age, with 37 (22.2%) neonates and 65 (41.3%) infants aged between 1 month and 1 year. The Total Inotrope Exposure Score best predicted the primary outcome (six of 167 cases) with an unadjusted odds ratio for a poor outcome of 42 (4.8, 369.6). Although the area under the curve was higher than for the other scores, this difference did not reach statistical significance. The Total Inotrope Exposure Score best predicted prolonged invasive mechanical ventilation, length of ICU stay, and hospital stay as compared with the other scores. The Total Inotrope Exposure Score appears to have a good association with poor postoperative outcomes and warrants prospective validation across larger numbers of patients across institutions.
Cardiovascular risk scores for coronary atherosclerosis.
Yalcin, Murat; Kardesoglu, Ejder; Aparci, Mustafa; Isilak, Zafer; Uz, Omer; Yiginer, Omer; Ozmen, Namik; Cingozbay, Bekir Yilmaz; Uzun, Mehmet; Cebeci, Bekir Sitki
2012-10-01
The objective of this study was to compare frequently used cardiovascular risk scores in predicting the presence of coronary artery disease (CAD) and 3-vessel disease. In 350 consecutive patients (218 men and 132 women) who underwent coronary angiography, the cardiovascular risk level was determined using the Framingham Risk Score (FRS), the Modified Framingham Risk Score (MFRS), the Prospective Cardiovascular Münster (PROCAM) score, and the Systematic Coronary Risk Evaluation (SCORE). Receiver operating characteristic analysis showed that the FRS had more predictive value than the other scores for CAD (area under the curve, 0.76). The risk scores (FRS, MFRS, PROCAM, and SCORE) may predict the presence and severity of coronary atherosclerosis, and the FRS had better predictive value than the other scores.
Maximum entropy principle and texture formation
Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state - market equilibrium - is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
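For reference, the elegant linear-time solution the abstract alludes to is usually credited to Kadane. A minimal imperative sketch, in Python rather than the paper's monadic functional setting:

```python
def max_segment_sum(xs):
    """Kadane's algorithm: linear-time maximum segment sum
    (the empty segment, with sum 0, is allowed)."""
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)  # best segment ending here
        best = max(best, ending_here)
    return best

print(max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # 187
```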
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion $(B_t)_{t\ge 0}$ and the equation of motion $dX_t = v_t\,dt + \sqrt{2}\,dB_t$, we set $S_t = \max_{0\le s\le t} X_s$ and consider the optimal control problem $\sup_v E(S_\tau - c\tau)$, where $c>0$ and the supremum is taken over all admissible controls $v$ satisfying $v_t \in [\mu_0, \mu_1]$ for all $t$ up to $\tau = \inf\{t>0 \mid X_t \notin (\ell_0, \ell_1)\}$. The optimal control switches between $\mu_0$ and $\mu_1$ according to whether $X_t$ lies below or above $g_*(S_t)$, where $s \mapsto g_*(s)$ is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250-370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, the researches on the synchronization phenomenon are key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates xμ and the energy-momentum pμ in quantum theory to construct a momentum space quantum gravity geometry with a metric sμν and a curvature tensor Pλ μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches only focus on discriminating moving objects by background subtraction, although objects of interest may be either moving or stationary. In this paper, we propose layers segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the training models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmenting precision.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in an investigation of the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated: ratios of 30.3%, 31.3% and 41.3% were obtained for grip spans of 50 mm, 65 mm, and 80 mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower cutting-force-to-maximum ratios in the cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
An ultrasound score for knee osteoarthritis
Riecke, B F; Christensen, R.; Torp-Pedersen, S
2014-01-01
OBJECTIVE: To develop standardized musculoskeletal ultrasound (MUS) procedures and scoring for detecting knee osteoarthritis (OA) and test the MUS score's ability to discern various degrees of knee OA, in comparison with plain radiography and the 'Knee injury and Osteoarthritis Outcome Score' (KO...
Breaking of scored tablets : a review
van Santen, E; Barends, D M; Frijlink, H W
2002-01-01
The literature was reviewed regarding advantages, problems and performance indicators of score lines. Scored tablets provide dose flexibility, ease of swallowing and may reduce the costs of medication. However, many patients are confronted with scored tablets that are broken unequally and with diffi
Developing Score Reports for Cognitive Diagnostic Assessments
Roberts, Mary Roduta; Gierl, Mark J.
2010-01-01
This paper presents a framework to provide a structured approach for developing score reports for cognitive diagnostic assessments ("CDAs"). Guidelines for reporting and presenting diagnostic scores are based on a review of current educational test score reporting practices and literature from the area of information design. A sample diagnostic…
Credit Scores, Race, and Residential Sorting
Nelson, Ashlyn Aiko
2010-01-01
Credit scores have a profound impact on home purchasing power and mortgage pricing, yet little is known about how credit scores influence households' residential location decisions. This study estimates the effects of credit scores on residential sorting behavior using a novel mortgage industry data set combining household demographic, credit, and…
Semiparametric score level fusion: Gaussian copula approach
Susyanyo, N.; Klaassen, C.A.J.; Veldhuis, R.N.J.; Spreeuwers, L.J.
2015-01-01
Score level fusion is an appealing method for combining multi-algorithms, multi-representations, and multi-modality biometrics due to its simplicity. Often, scores are assumed to be independent, but even for dependent scores, according to the Neyman-Pearson lemma, the likelihood ratio is the opti
An objective fluctuation score for Parkinson's disease.
Malcolm K Horne
Establishing the presence and severity of fluctuations is important in managing Parkinson's disease, yet there is no reliable, objective means of doing this. In this study we have evaluated a Fluctuation Score derived from variations in dyskinesia and bradykinesia scores produced by an accelerometry-based system. The Fluctuation Score was produced by summing the interquartile range of bradykinesia scores and dyskinesia scores produced every 2 minutes between 0900-1800 for at least 6 days by the accelerometry-based system and expressing it as an algorithm. This score could distinguish between fluctuating and non-fluctuating patients with high sensitivity and selectivity and was significantly lower following activation of deep brain stimulators. The scores following deep brain stimulation lay in a band just above the score separating fluctuators from non-fluctuators, suggesting a range representing adequate motor control. When compared with control subjects, the scores of newly diagnosed patients show a loss of fluctuation with onset of PD. The score was calculated in subjects whose duration of disease was known, and this showed that newly diagnosed patients soon develop higher scores which either fall under or within the range representing adequate motor control or instead go on to develop more severe fluctuations. The Fluctuation Score described here promises to be a useful tool for identifying patients whose fluctuations are progressing and may require therapeutic changes. It also shows promise as a useful research tool. Further studies are required to more accurately identify therapeutic targets and ranges.
Empirical evaluation of scoring functions for Bayesian network model selection.
Liu, Zhifa; Malone, Brandon; Yuan, Changhe
2012-01-01
In this work, we empirically evaluate the capability of various scoring functions of Bayesian networks for recovering true underlying structures. Similar investigations have been carried out before, but they typically relied on approximate learning algorithms to learn the network structures. The suboptimal structures found by the approximation methods have unknown quality and may affect the reliability of their conclusions. Our study uses an optimal algorithm to learn Bayesian network structures from datasets generated from a set of gold standard Bayesian networks. Because all optimal algorithms always learn equivalent networks, this ensures that only the choice of scoring function affects the learned networks. Another shortcoming of the previous studies stems from their use of random synthetic networks as test cases. There is no guarantee that these networks reflect real-world data. We use real-world data to generate our gold-standard structures, so our experimental design more closely approximates real-world situations. A major finding of our study suggests that, in contrast to results reported by several prior works, the Minimum Description Length (MDL) (or equivalently, Bayesian information criterion (BIC)) consistently outperforms other scoring functions such as Akaike's information criterion (AIC), Bayesian Dirichlet equivalence score (BDeu), and factorized normalized maximum likelihood (fNML) in recovering the underlying Bayesian network structures. We believe this finding is a result of using both datasets generated from real-world applications rather than from random processes used in previous studies and learning algorithms to select high-scoring structures rather than selecting random models. Other findings of our study support existing work, e.g., large sample sizes result in learning structures closer to the true underlying structure; the BDeu score is sensitive to the parameter settings; and the fNML performs pretty well on small datasets. We also
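For a decomposable score such as BIC/MDL, the comparison above reduces to summing per-family terms: the multinomial log-likelihood of each child given each observed parent configuration, minus (log N)/2 per free parameter. A hedged pandas/numpy sketch under that standard formulation; the network and data below are synthetic, not from the study:

    import numpy as np
    import pandas as pd

    def bic_score(df, structure):
        """BIC/MDL score of a discrete Bayesian network (higher is better).

        `structure` maps each variable to a tuple of its parents. For
        simplicity, parent configurations are counted from the data.
        """
        N, score = len(df), 0.0
        for child, parents in structure.items():
            r = df[child].nunique()                       # child cardinality
            grouped = (df.groupby(list(parents))[child] if parents
                       else df.groupby(lambda _: 0)[child])
            for _, col in grouped:
                counts = col.value_counts().to_numpy().astype(float)
                score += (counts * np.log(counts / counts.sum())).sum()
            q = df.groupby(list(parents)).ngroups if parents else 1
            score -= 0.5 * np.log(N) * q * (r - 1)        # complexity penalty
        return score

    # Hypothetical 3-node network A -> B <- C on synthetic binary data.
    rng = np.random.default_rng(1)
    df = pd.DataFrame(rng.integers(0, 2, size=(500, 3)), columns=list("ABC"))
    print(bic_score(df, {"A": (), "C": (), "B": ("A", "C")}))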
Committee Opinion No. 644: The Apgar Score.
2015-10-01
The Apgar score provides an accepted and convenient method for reporting the status of the newborn infant immediately after birth and the response to resuscitation if needed. The Apgar score alone cannot be considered to be evidence of or a consequence of asphyxia, does not predict individual neonatal mortality or neurologic outcome, and should not be used for that purpose. An Apgar score assigned during a resuscitation is not equivalent to a score assigned to a spontaneously breathing infant. The American Academy of Pediatrics and the American College of Obstetricians and Gynecologists encourage use of an expanded Apgar score reporting form that accounts for concurrent resuscitative interventions.
WMAXC: a weighted maximum clique method for identifying condition-specific sub-network.
Amgalan, Bayarbaatar; Lee, Hyunju
2014-01-01
Sub-networks can expose complex patterns in an entire bio-molecular network by extracting interactions that depend on temporal or condition-specific contexts. When genes interact with each other during cellular processes, they may form differential co-expression patterns with other genes across different cell states. The identification of condition-specific sub-networks is of great importance in investigating how a living cell adapts to environmental changes. In this work, we propose the weighted MAXimum clique (WMAXC) method to identify a condition-specific sub-network. WMAXC first proposes scoring functions that jointly measure condition-specific changes to both individual genes and gene-gene co-expressions. It then employs a weaker formula of a general maximum clique problem and relates the maximum scored clique of a weighted graph to the optimization of a quadratic objective function under sparsity constraints. We combine a continuous genetic algorithm and a projection procedure to obtain a single optimal sub-network that maximizes the objective function (scoring function) over the standard simplex (sparsity constraints). We applied the WMAXC method to both simulated data and real data sets of ovarian and prostate cancer. Compared with previous methods, WMAXC selected a large fraction of cancer-related genes, which were enriched in cancer-related pathways. The results demonstrated that our method efficiently captured a subset of genes relevant under the investigated condition.
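WMAXC itself couples a continuous genetic algorithm with a projection step; as a simpler stand-in for the same core idea, maximizing a quadratic score x'Wx over the probability simplex (a Motzkin-Straus-style relaxation of maximum clique) can be done with replicator dynamics. A toy sketch, not the authors' algorithm:

    import numpy as np

    def simplex_quadratic_max(W, iters=500):
        """Maximize x'Wx over the probability simplex by replicator dynamics.

        Valid for elementwise non-negative symmetric W; the support of the
        fixed point indicates a heavy (high-scoring) subgraph.
        """
        x = np.full(W.shape[0], 1.0 / W.shape[0])
        for _ in range(iters):
            Wx = W @ x
            x = x * Wx / (x @ Wx)    # multiplicative update stays on the simplex
        return x

    # Toy weighted graph: nodes 0-2 form a heavy triangle, 3-4 are weakly tied in.
    W = np.array([[0, 5, 5, 1, 0],
                  [5, 0, 5, 0, 1],
                  [5, 5, 0, 0, 0],
                  [1, 0, 0, 0, 1],
                  [0, 1, 0, 1, 0]], float)
    print(np.round(simplex_quadratic_max(W), 3))  # mass concentrates on {0, 1, 2}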
Suh, Youngjoo; Kim, Hoirin
2014-12-01
In this paper, a new discriminative likelihood score weighting technique is proposed for speaker identification. The proposed method employs a discriminative weighting of frame-level log-likelihood scores with acoustic-phonetic classification in the Gaussian mixture model (GMM)-based speaker identification. Experiments performed on the Aurora noise-corrupted TIMIT database showed that the proposed approach provides meaningful performance improvement with an overall relative error reduction of 15.8% over the maximum likelihood-based baseline GMM approach.
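Stripped of the discriminative training, the underlying scheme is a weighted sum of frame-level GMM log-likelihoods per enrolled speaker. A hedged scikit-learn sketch with synthetic features and made-up frame weights; a real system would derive the weights from the acoustic-phonetic classifier the abstract mentions:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Toy enrollment: one GMM per speaker, trained on that speaker's features.
    speakers = {}
    for name, shift in [("spk_a", 0.0), ("spk_b", 1.5)]:
        feats = rng.normal(shift, 1.0, size=(500, 13))    # stand-in for MFCCs
        speakers[name] = GaussianMixture(n_components=4, random_state=0).fit(feats)

    test = rng.normal(1.5, 1.0, size=(200, 13))           # utterance from speaker B

    # Hypothetical per-frame weights, e.g. down-weighting unreliable frames.
    w = rng.uniform(0.5, 1.0, size=len(test))

    # Identify the speaker by the weighted sum of frame log-likelihoods.
    scores = {name: float(np.sum(w * gmm.score_samples(test)))
              for name, gmm in speakers.items()}
    print(max(scores, key=scores.get))                    # expected: spk_b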
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Maximum creditable compensation. 211.14... CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. Maximum creditable compensation... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Molé, C; Simon, E
2015-06-01
The management of cleft lip, alveolar and palate sequelae remains problematic today. To optimize it, we tried to establish a new clinical index for diagnostic and prognostic purposes. Seven tissue indicators, that we consider to be important in the management of alveolar sequelae, are listed by assigning them individual scores. The final score, obtained by adding together the individual scores, can take a low, high or maximum value. We propose a new classification (ACS: Alveolar Cleft Score) that guides the therapeutic team to a prognosis approach, in terms of the recommended surgical and prosthetic reconstruction, the type of medical care required, and the preventive and supportive therapy to establish. Current studies are often only based on a standard radiological evaluation of the alveolar bone height at the cleft site. However, the gingival, the osseous and the cellular areas bordering the alveolar cleft sequelae induce many clinical parameters, which should be reflected in the morphological diagnosis, to better direct the surgical indications and the future prosthetic requirements, and to best maintain successful long term aesthetic and functional results. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be moderated, Pu and power producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following range: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error, with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
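The abstract does not spell out MALCOM's internals, so as an illustrative stand-in the same anomaly-scoring workflow is sketched below with a plain first-order Markov model over categorical sequences: train on typical histories, then flag sequences with low per-step likelihood. This is explicitly not the MALCOM algorithm:

    from collections import Counter, defaultdict
    import math

    def train_markov(sequences, alpha=1.0):
        """First-order Markov model with add-alpha smoothing."""
        trans, vocab = defaultdict(Counter), set()
        for seq in sequences:
            vocab.update(seq)
            for a, b in zip(seq, seq[1:]):
                trans[a][b] += 1
        return trans, sorted(vocab), alpha

    def log_likelihood_per_step(seq, model):
        trans, vocab, alpha = model
        V, ll = len(vocab), 0.0
        for a, b in zip(seq, seq[1:]):
            row = trans.get(a, Counter())
            ll += math.log((row[b] + alpha) / (sum(row.values()) + alpha * V))
        return ll / max(len(seq) - 1, 1)

    # Toy "medical histories": typical procedure sequences vs. an anomalous one.
    typical = [["visit", "xray", "cast"], ["visit", "xray", "cast"],
               ["visit", "labs", "rx"]] * 30
    model = train_markov(typical)
    print(log_likelihood_per_step(["visit", "xray", "cast"], model))  # high
    print(log_likelihood_per_step(["cast", "cast", "cast"], model))   # low -> flag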
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
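The core of such a fit is the Poisson log-likelihood of a parametric line-plus-background model evaluated against the measured counts. A minimal scipy sketch of the same idea, using a generic optimizer rather than CORA's fixed-point iteration, with all numbers synthetic:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)

    # Simulated low-count spectrum: flat background plus a Gaussian line.
    wl = np.linspace(13.0, 14.0, 200)
    def model(amp, center, sigma, bg):
        return amp * np.exp(-0.5 * ((wl - center) / sigma) ** 2) + bg
    counts = rng.poisson(model(8.0, 13.5, 0.02, 0.5))

    def neg_log_like(theta):
        """Poisson negative log-likelihood (constant log(k!) term dropped)."""
        mu = model(*theta)
        if np.any(mu <= 0):
            return np.inf
        return float(np.sum(mu - counts * np.log(mu)))

    fit = minimize(neg_log_like, x0=[5.0, 13.45, 0.05, 1.0], method="Nelder-Mead")
    print(np.round(fit.x, 3))   # recovered amplitude, center, width, background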
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first one is given as an upper bound for the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 10^4 nodes down to less than 10^2 nodes, and the second set of constraints reduced it further by 97.8%.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
Conditional Reliability Coefficients for Test Scores.
Nicewander, W Alan
2017-04-06
The most widely used, general index of measurement precision for psychological and educational test scores is the reliability coefficient: a ratio of true variance for a test score to the true-plus-error variance of the score. In item response theory (IRT) models for test scores, the information function is the central, conditional index of measurement precision. In this inquiry, conditional reliability coefficients for a variety of score types are derived as simple transformations of information functions. It is shown, for example, that the conditional reliability coefficient for an ordinary, number-correct score, X, is equal to ρ(X,X′|θ) = I(X,θ)/[I(X,θ)+1], where θ is a latent variable measured by an observed test score X; ρ(X,X′|θ) is the conditional reliability of X at a fixed value of θ; and I(X,θ) is the score information function. This is a surprisingly simple relationship between the two basic indices of measurement precision from IRT and classical test theory (CTT). This relationship holds for item scores as well as test scores based on sums of item scores, and it holds for dichotomous as well as polytomous items, or a mix of both item types. Also, conditional reliabilities are derived for computerized adaptive test scores, and for θ-estimates used as alternatives to number-correct scores. These conditional reliabilities are all related to information in a manner similar or identical to the one given above for the number-correct (NC) score. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
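The identity is simple enough to apply directly; for instance, wherever the score information is 4, the conditional reliability is 4/5. A one-function illustration:

    def conditional_reliability(information):
        """rho(X, X' | theta) = I(X, theta) / (I(X, theta) + 1)."""
        return information / (information + 1.0)

    # Reliability tracks information across the theta scale:
    for info in (4.0, 2.0, 1.0):
        print(info, "->", round(conditional_reliability(info), 3))  # 0.8, 0.667, 0.5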
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Forecasting the value of credit scoring
Saad, Shakila; Ahmad, Noryati; Jaffar, Maheran Mohd
2017-08-01
Nowadays, credit scoring systems play an important role in the banking sector. The process is important in assessing the creditworthiness of customers requesting credit from banks or other financial institutions. Usually, credit scoring is applied when customers submit applications for credit facilities. Based on the score from credit scoring, the bank is able to segregate the "good" clients from the "bad" clients. However, in most cases the score is useful only at that specific time and cannot be used to forecast the creditworthiness of the same applicant afterwards. Hence, the bank cannot know whether "good" clients will remain good all the time, or whether "bad" clients may become "good" clients after a certain time. To fill this gap, this study proposes an equation to forecast the credit scores of potential borrowers at a future time by using their historical scores. The Mean Absolute Percentage Error (MAPE) is used to measure the accuracy of the forecast scores. Results show that the forecast scores are highly accurate compared with the actual credit scores.
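The abstract gives the accuracy criterion but not the forecasting equation, so only the MAPE check is sketched here, on hypothetical actual-versus-forecast scores:

    import numpy as np

    def mape(actual, forecast):
        """Mean Absolute Percentage Error, in percent."""
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    # Hypothetical credit scores: observed at review time vs. forecast earlier.
    actual   = [620, 705, 580, 660, 710]
    forecast = [640, 698, 565, 671, 730]
    print(round(mape(actual, forecast), 2), "%")   # small MAPE -> accurate forecast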
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
Curtis, Alexander E; Smith, Tanya A; Ziganshin, Bulat A; Elefteriades, John A
2016-08-01
Reliable methods for measuring the thoracic aorta are critical for determining treatment strategies in aneurysmal disease. Z-scores are a pragmatic alternative to raw diameter sizes commonly used in adult medicine. They are particularly valuable in the pediatric population, who undergo rapid changes in physical development. The advantage of the Z-score is its inclusion of body surface area (BSA) in determining whether an aorta is within normal size limits. Therefore, Z-scores allow us to determine whether true pathology exists, which can be challenging in growing children. In addition, Z-scores allow for thoughtful interpretation of aortic size in different genders, ethnicities, and geographical regions. Despite the advantages of using Z-scores, there are limitations. These include intra- and inter-observer bias, measurement error, and variations between alternative Z-score nomograms and BSA equations. Furthermore, it is unclear how Z-scores change in the normal population over time, which is essential when interpreting serial values. Guidelines for measuring aortic parameters have been developed by the American Society of Echocardiography Pediatric and Congenital Heart Disease Council, which may reduce measurement bias when calculating Z-scores for the aortic root. In addition, web-based Z-score calculators have been developed to aid in efficient Z-score calculations. Despite these advances, clinicians must be mindful of the limitations of Z-scores, especially when used to demonstrate beneficial treatment effect. This review looks to unravel the mystery of the Z-score, with a focus on the thoracic aorta. Here, we will discuss how Z-scores are calculated and the limitations of their use.
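The underlying computation is a standardized residual against a BSA-based nomogram: Z = (measured − predicted(BSA))/SD. In the sketch below the nomogram coefficients are hypothetical placeholders, not published reference values; real calculators differ by source, which is exactly the limitation the review discusses:

    import math

    def aortic_root_z(measured_cm, bsa_m2, a=1.02, b=0.98, sd=0.18):
        """Z = (measured - predicted(BSA)) / SD.

        Predicted diameter is modeled as a + b*sqrt(BSA); a, b, and sd
        are HYPOTHETICAL placeholders, not a published nomogram.
        """
        predicted = a + b * math.sqrt(bsa_m2)
        return (measured_cm - predicted) / sd

    # A 2.6 cm aortic root in a child with BSA 0.9 m^2:
    print(round(aortic_root_z(2.6, 0.9), 2))   # Z > 2 would suggest dilation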
王萍
2014-01-01
Objectives: To investigate the effect of the self-care ability in daily life of elderly residents of old-age care institutions on their quality of life, and to explore methods of improving self-care ability so as to improve quality of life. Methods: The activities of daily living scale (ADL) and the WHO quality of life scale (WHOQOL-BREF) questionnaires were administered to 74 elderly residents of a senior apartment in Linyi City, and the factors affecting the ADL of the elderly were examined by correlation analysis. Results: Age, chronic disease, and participation in exercise or entertainment were negatively correlated with ADL scores, with significant differences; scores in all domains of the quality of life scale were negatively correlated with ADL scores, with significant to highly significant differences; quality of life differed significantly between the high and low ADL score groups (except for Q1 and the psychological domain). Conclusions: Residents of old-age care institutions are of advanced age, have a high prevalence of chronic disease and poor self-care ability, and have a generally lower quality of life; self-care ability and quality of life are closely related.
The relationship between second-year medical students' OSCE scores and USMLE Step 1 scores.
Simon, Steven R; Volkan, Kevin; Hamann, Claus; Duffey, Carol; Fletcher, Suzanne W
2002-09-01
The relationship between objective structured clinical examinations (OSCEs) and standardized tests is not well known. We linked second-year medical students' physical diagnosis OSCE scores from 1998, 1999 and 2000 (n = 355) with demographic information, Medical College Admission Test (MCAT) scores, and United States Medical Licensing Examination (USMLE) Step 1 scores. The correlation coefficient for the total OSCE score with the USMLE Step 1 score was 0.41 (p < .001) … USMLE Step 1 score. OSCE station scores accounted for approximately 22% of the variability in USMLE Step 1 scores. A second-year OSCE in physical diagnosis is correlated with scores on the USMLE Step 1 exam, with skills that foreshadow the clinical clerkships most predictive of USMLE scores. This correlation suggests predictive validity of this OSCE and supports the use of OSCEs early in medical school.
Random Walk Picture of Basketball Scoring
Gabel, Alan
2011-01-01
We present evidence, based on play-by-play data from all 6087 games from the 2006/07--2009/10 seasons of the National Basketball Association (NBA), that basketball scoring is well described by a weakly-biased continuous-time random walk. The time between successive scoring events follows an exponential distribution, with little memory between different scoring intervals. Using this random-walk picture that is augmented by features idiosyncratic to basketball, we account for a wide variety of statistical properties of scoring, such as the distribution of the score difference between opponents and the fraction of game time that one team is in the lead. By further including the heterogeneity of team strengths, we build a computational model that accounts for essentially all statistical features of game scoring data and season win/loss records of each team.
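The basic picture is easy to reproduce: scoring events arrive as a Poisson process (exponential waiting times), each awarded to one team with near-even probability. A hedged simulation with illustrative rates, not the paper's fitted parameters:

    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_game(rate_per_min=1.4, p_home=0.5, minutes=48.0):
        """Scoring as a continuous-time random walk: exponential waiting
        times between events, each event won by one team or the other."""
        t, diff, lead_time = 0.0, 0, 0.0
        while True:
            dt = rng.exponential(1.0 / rate_per_min)
            if t + dt > minutes:
                lead_time += (minutes - t) if diff > 0 else 0.0
                break
            if diff > 0:
                lead_time += dt
            t += dt
            diff += 2 if rng.random() < p_home else -2   # 2-point scoring events
        return diff, lead_time / minutes

    games = [simulate_game() for _ in range(5000)]
    margins = np.array([g[0] for g in games])
    lead_frac = np.array([g[1] for g in games])
    print("mean |final margin|:", round(float(np.abs(margins).mean()), 1))
    print("mean fraction of game the home team leads:", round(float(lead_frac.mean()), 2))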
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
无
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
Scoring functions for AutoDock.
Hill, Anthony D; Reilly, Peter J
2015-01-01
Automated docking allows rapid screening of protein-ligand interactions. A scoring function composed of a force field and linear weights can be used to compute a binding energy from a docked atom configuration. For different force fields or types of molecules, it may be necessary to train a custom scoring function. This chapter describes the data and methods one must consider in developing a custom scoring function for use with AutoDock.
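Training such a custom scoring function amounts to regressing measured binding energies on per-complex sums of force-field terms. A hedged numpy sketch with synthetic term sums and made-up weights; AutoDock's actual terms and coefficients differ and are documented with the software:

    import numpy as np

    rng = np.random.default_rng(4)

    # Columns: per-complex sums of force-field terms (e.g. vdW, H-bond,
    # electrostatic, desolvation, torsional) for a hypothetical training set.
    terms = rng.normal(size=(60, 5))
    true_w = np.array([0.17, 0.12, 0.15, 0.11, 0.30])          # made-up weights
    measured_dG = terms @ true_w + rng.normal(0.0, 0.05, 60)   # kcal/mol + noise

    # Fit the linear weights of the scoring function by least squares.
    w, *_ = np.linalg.lstsq(terms, measured_dG, rcond=None)
    print(np.round(w, 3))   # recovered weights; new poses then score as terms @ w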
Pneumonia severity scores in resource poor settings
Jamie Rylance
2014-06-01
Clinical prognostic scores are increasingly used to streamline care in well-resourced settings. The potential benefits of identifying patients at risk of clinical deterioration and poor outcome, delivering appropriate higher level clinical care, and increasing efficiency are clear. In this focused review, we examine the use and applicability of severity scores applied to patients with community acquired pneumonia in resource poor settings. We challenge clinical researchers working in such systems to consider the generalisability of existing severity scores in their populations, and where performance of scores is suboptimal, to promote efforts to develop and validate new tools for the benefit of patients and healthcare systems.
Security Risk Scoring Incorporating Computers' Environment
Eli Weintraub
2016-04-01
A framework of a Continuous Monitoring System (CMS) is presented, having new improved capabilities. The system uses the actual real-time configuration of the system and environment, characterized by a Configuration Management Data Base (CMDB) which includes detailed information on organizational database contents and security and privacy specifications. The Common Vulnerability Scoring System's (CVSS) algorithm produces risk scores incorporating information from the CMDB. By using the real, updated environmental characteristics, the system achieves more accurate scores than existing practices. The framework presentation includes the system's design and an illustration of scoring computations.
Coronary artery calcium score: current status
Neves, Priscilla Ornellas; Andrade, Joalbo; Monção, Henry
2017-01-01
The coronary artery calcium score plays an important role in cardiovascular risk stratification, showing a significant association with the medium- or long-term occurrence of major cardiovascular events. Here, we discuss the following: protocols for the acquisition and quantification of the coronary artery calcium score by multidetector computed tomography; the role of the coronary artery calcium score in coronary risk stratification and its comparison with other clinical scores; its indications, interpretation, and prognosis in asymptomatic patients; and its use in patients who are symptomatic or have diabetes. PMID:28670030
[The cardiovascular surgeon and the Syntax score].
Gómez-Sánchez, Mario; Soulé-Egea, Mauricio; Herrera-Alarcón, Valentín; Barragán-García, Rodolfo
2015-01-01
The Syntax score has been established as a tool to determine the complexity of coronary artery disease and as a guide for decision-making among coronary artery bypass surgery and percutaneous coronary intervention. The purpose of this review is to systematically examine what the Syntax score is, and how the surgeon should integrate the information in the selection and treatment of patients. We reviewed the results of the SYNTAX Trial, the clinical practice guidelines, as well as the benefits and limitations of the score. Finally we discuss the future directions of the Syntax score.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Widening clinical applications of the SYNTAX Score.
Farooq, Vasim; Head, Stuart J; Kappetein, Arie Pieter; Serruys, Patrick W
2014-02-01
The SYNTAX Score (http://www.syntaxscore.com) has established itself as an anatomical based tool for objectively determining the complexity of coronary artery disease and guiding decision-making between coronary artery bypass graft (CABG) surgery and percutaneous coronary intervention (PCI). Since the landmark SYNTAX (Synergy between PCI with Taxus and Cardiac Surgery) Trial comparing CABG with PCI in patients with complex coronary artery disease (unprotected left main or de novo three vessel disease), numerous validation studies have confirmed the clinical validity of the SYNTAX Score for identifying higher-risk subjects and aiding decision-making between CABG and PCI in a broad range of patient types. The SYNTAX Score is now advocated in both the European and US revascularisation guidelines for decision-making between CABG and PCI as part of a SYNTAX-pioneered heart team approach. Since establishment of the SYNTAX Score, widening clinical applications of this clinical tool have emerged. The purpose of this review is to systematically examine the widening applications of tools based on the SYNTAX Score: (1) by improving the diagnostic accuracy of the SYNTAX Score by adding a functional assessment of lesions; (2) through amalgamation of the anatomical SYNTAX Score with clinical variables to enhance decision-making between CABG and PCI, culminating in the development and validation of the SYNTAX Score II, in which objective and tailored decisions can be made for the individual patient; (3) through assessment of completeness of revascularisation using the residual and post-CABG SYNTAX Scores for PCI and CABG patients, respectively. Finally, the future direction of the SYNTAX Score is covered through discussion of the ongoing development of a non-invasive, functional SYNTAX Score and review of current and planned clinical trials.
European conformation and fat scores have no relationship with eating quality.
Bonny, S P F; Pethick, D W; Legrand, I; Wierzbicki, J; Allen, P; Farmer, L J; Polkinghorne, R J; Hocquette, J-F; Gardner, G E
2016-06-01
European conformation and fat grades are a major factor determining carcass value throughout Europe. The relationships between these scores and sensory scores were investigated. A total of 3786 French, Polish and Irish consumers evaluated steaks, grilled to a medium doneness, according to protocols of the 'Meat Standards Australia' system, from 18 muscles representing 455 local, commercial cattle from commercial abattoirs. A mixed linear effects model was used for the analysis. There was a negative relationship between juiciness and European conformation score. For the other sensory scores, a maximum of three muscles out of a possible 18 demonstrated negative effects of conformation score on sensory scores. There was a positive effect of European fat score on three individual muscles. However, this was accounted for by marbling score. Thus, while the European carcass classification system may indicate yield, it has no consistent relationship with sensory scores at a carcass level that is suitable for use in a commercial system. The industry should consider using an additional system related to eating quality to aid in the determination of the monetary value of carcasses, rewarding eating quality in addition to yield.
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polyno
Wyse, Adam E.; Babcock, Ben
2016-01-01
A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…
On k-hypertournament losing scores
Pirzada, Shariefuddin
2010-01-01
We give a new and short proof of a theorem on k-hypertournament losing scores due to Zhou et al. [G. Zhou, T. Yao, K. Zhang, On score sequences of k-tournaments, European J. Comb., 21, 8 (2000) 993-1000].
ON HOW CULTURAL KNOWLEDGE AFFECTS TOEFL SCORES
2000-01-01
This paper presents a study of the effect of cultural background on TOEFL scores. It proceeds from the relation between culture and language, then illustrates with actual questions from various sections of TOEFL tests how American cultural background exerts a remarkable influence on TOEFL scores, and concludes with revelations with regard to English teaching in this country.
Causal Moderation Analysis Using Propensity Score Methods
Dong, Nianbo
2012-01-01
This paper is based on previous studies in applying propensity score methods to study multiple treatment variables to examine the causal moderator effect. The propensity score methods will be demonstrated in a case study to examine the causal moderator effect, where the moderators are categorical and continuous variables. Moderation analysis is an…
Comparability of IQ scores over time
Must, O.; te Nijenhuis, J.; Must, A.; van Vianen, A.E.M.
2009-01-01
This study investigates the comparability of IQ scores. Three cohorts (1933/36, 1997/98, 2006) of Estonian students (N = 2173) are compared using the Estonian National Intelligence Test. After 72 years the secular rise of the IQ test scores is .79 SD. The mean .16 SD increase in the last 8 years
Bayesian Model Averaging for Propensity Score Analysis
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
Diagnosis. Severity scoring system for paediatric FMF.
Livneh, Avi
2012-04-17
Severity scoring systems for adult familial Mediterranean fever (FMF) are established and used as important clinical and analytical tools in disease management and research. A recent paper highlights the need for a paediatric FMF severity measure. How should such a score be built and what challenges might be faced?
Clinical scoring scales in thyroidology: A compendium
Sanjay Kalra
2011-01-01
This compendium brings together traditional as well as contemporary scoring and grading systems used for the screening and diagnosis of various thyroid diseases, dysfunctions, and complications. The article discusses scores used to help diagnose hypo- and hyperthyroidism, to grade and manage goiter and ophthalmopathy, and to assess the risk of thyroid malignancy.
Starreveld scoring method in diagnosing childhood constipation
Kokke, F.T.; Sittig, J.S.; de Bruijn, A.; Wiersma, T.; van Rijn, R.R.; Limpen, J.L.; Houwen, R.H.; Fischer, K.; Benninga, M.A.
2010-01-01
Four scoring methods exist to assess severity of fecal loading on plain abdominal radiographs in constipated patients (Barr-, Starreveld-, Blethyn- and Leech). So far, the Starreveld score was used only in adult patients. To determine accuracy and intra- and inter-observer agreement of the Starrevel
What do educational test scores really measure?
McIntosh, James; D. Munk, Martin
measure of pure cognitive ability. We find that variables which are not closely associated with traditional notions of intelligence explain a significant proportion of the variation in test scores. This adds to the complexity of interpreting test scores and suggests that school culture, attitudes...
Propensity score weighting with multilevel data.
Li, Fan; Zaslavsky, Alan M; Landrum, Mary Beth
2013-08-30
Propensity score methods are being increasingly used as a less parametric alternative to traditional regression to balance observed differences across groups in both descriptive and causal comparisons. Data collected in many disciplines often have analytically relevant multilevel or clustered structure. The propensity score, however, was developed and has been used primarily with unstructured data. We present and compare several propensity-score-weighted estimators for clustered data, including marginal, cluster-weighted, and doubly robust estimators. Using both analytical derivations and Monte Carlo simulations, we illustrate bias arising when the usual assumptions of propensity score analysis do not hold for multilevel data. We show that exploiting the multilevel structure, either parametrically or nonparametrically, in at least one stage of the propensity score analysis can greatly reduce these biases. We applied these methods to a study of racial disparities in breast cancer screening among beneficiaries of Medicare health plans.
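The simplest estimator in this family ignores the clustering: fit a logistic-regression propensity model, weight by inverse propensities, and compare weighted outcome means. A sketch on synthetic single-level data; the paper's cluster-weighted and doubly robust variants build on this marginal baseline:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)

    # Synthetic observational data: confounder x drives both treatment and outcome.
    n = 5000
    x = rng.normal(size=(n, 1))
    treat = (rng.random(n) < 1 / (1 + np.exp(-1.2 * x[:, 0]))).astype(int)
    y = 1.0 * treat + 2.0 * x[:, 0] + rng.normal(size=n)   # true effect = 1.0

    # Propensity scores, then inverse-probability weights.
    ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
    w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))

    # Weighted difference in means estimates the average treatment effect.
    ate = (np.average(y[treat == 1], weights=w[treat == 1])
           - np.average(y[treat == 0], weights=w[treat == 0]))
    print(round(ate, 2))   # close to 1.0, unlike the naive difference in means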
A Bayesian Approach to Learning Scoring Systems.
Ertekin, Şeyda; Rudin, Cynthia
2015-12-01
We present a Bayesian method for building scoring systems, which are linear models with coefficients that have very few significant digits. Usually the construction of scoring systems involves manual effort: humans invent the full scoring system without using data, or they choose how logistic regression coefficients should be scaled and rounded to produce a scoring system. These kinds of heuristics lead to suboptimal solutions. Our approach is different in that humans need only specify the prior over what the coefficients should look like, and the scoring system is learned from data. For this approach, we provide a Metropolis-Hastings sampler that tends to pull the coefficient values toward their "natural scale." Empirically, the proposed method achieves a high degree of interpretability of the models while maintaining competitive generalization performance.
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov-Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov-Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10-100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine...
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on the same set of taxa, the maximum agreement subtree problem (MAST), respectively the maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics, such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees of Life. We provide two linear time algorithms to check the isomorphism, respectively the compatibility, of a set of trees, or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
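To illustrate the isomorphism-checking subroutine in its simplest form (this is a straightforward quadratic canonical-form test, not the paper's linear-time algorithm), two rooted leaf-labelled trees can be compared by recursively sorting subtree encodings:

```python
def canonical(tree):
    """Canonical form of a rooted, leaf-labelled tree: a leaf is its label
    string; an internal node is the sorted sequence of its children's forms.
    Two such trees are isomorphic iff their canonical forms are equal."""
    if isinstance(tree, str):
        return tree
    return "(" + ",".join(sorted(canonical(child) for child in tree)) + ")"

t1 = (("a", "b"), ("c", ("d", "e")))
t2 = ((("e", "d"), "c"), ("b", "a"))   # same tree, children reordered
print(canonical(t1) == canonical(t2))  # True
```

Sorting the child encodings makes the representation independent of child order, which is exactly the invariance isomorphism of unordered trees requires.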
Singla, Anand; Singla, Satpaul; Singh, Mohinder; Singla, Deeksha
2016-12-01
Acute appendicitis is a common but elusive surgical condition and remains a diagnostic dilemma. It has many clinical mimickers, and diagnosis is primarily made on clinical grounds, leading to the evolution of clinical scoring systems for pinpointing the right diagnosis. The modified Alvarado and RIPASA scoring systems are two important scoring systems for the diagnosis of acute appendicitis. We prospectively compared the two scoring systems for diagnosing acute appendicitis in 50 patients presenting with right iliac fossa pain. The RIPASA score correctly classified 88% of patients with histologically confirmed acute appendicitis, compared with 48.0% for the modified Alvarado score, indicating that the RIPASA score is superior to the modified Alvarado score in our clinical setting.
THE EFFICIENCY OF TENNIS DOUBLES SCORING SYSTEMS
Geoff Pollard
2010-09-01
In this paper a family of scoring systems for tennis doubles is established for testing the hypothesis that pair A is better than pair B against the alternative hypothesis that pair B is better than pair A. This family of scoring systems can be used as a benchmark against which the efficiency of any doubles scoring system can be assessed. Thus, a formula for the efficiency of any doubles scoring system is derived. As in tennis singles, one scoring system based on the play-the-loser structure is shown to be more efficient than the benchmark systems. An expression for the relative efficiency of two doubles scoring systems is derived, so the relative efficiency of the various scoring systems presently used in doubles can be assessed. The methods of this paper can be extended to a match between two teams of 2, 4, 8, … doubles pairs, so that it is possible to establish a measure of the relative efficiency of the various systems used for tennis contests between teams of players.
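The trade-off the paper formalizes, discriminating power per point played, can be felt with a Monte Carlo toy (this is our illustration, not the paper's family of systems; the win probability 0.53 and the "first to N points" format are invented):

```python
import random

def play(p, target):
    """First pair to `target` points wins; pair A wins each point w.p. p."""
    a = b = n = 0
    while a < target and b < target:
        n += 1
        if random.random() < p:
            a += 1
        else:
            b += 1
    return a > b, n

def evaluate(p, target, trials=20000):
    """Estimate P(better pair wins) and the expected match length."""
    wins = length = 0
    for _ in range(trials):
        won, n = play(p, target)
        wins += won
        length += n
    return wins / trials, length / trials

random.seed(1)
for target in (8, 16, 32):
    power, exp_len = evaluate(0.53, target)
    print(f"first to {target}: P(correct decision) ~ {power:.3f}, E[length] ~ {exp_len:.1f}")
```

Longer formats decide more reliably but cost more points; an efficient scoring system is one that buys more reliability per expected point, which is what play-the-loser structures achieve.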
Do MCAT scores predict USMLE scores? An analysis on 5 years of medical student data
Jacqueline L. Gauer
2016-09-01
Introduction: The purpose of this study was to determine the associations and predictive values of Medical College Admission Test (MCAT) component and composite scores prior to 2015 with U.S. Medical Licensure Exam (USMLE) Step 1 and Step 2 Clinical Knowledge (CK) scores, with a focus on whether students scoring low on the MCAT were particularly likely to continue to score low on the USMLE exams. Method: Multiple linear regression, correlation, and chi-square analyses were performed to determine the relationship between MCAT component and composite scores and USMLE Step 1 and Step 2 CK scores from five graduating classes (2011–2015) at the University of Minnesota Medical School (N=1,065). Results: The multiple linear regression analyses were both significant (p<0.001). The three MCAT component scores together explained 17.7% of the variance in Step 1 scores (p<0.001) and 12.0% of the variance in Step 2 CK scores (p<0.001). In the chi-square analyses, significant, albeit weak, associations were observed between almost all MCAT component scores and USMLE scores (Cramer's V ranged from 0.05 to 0.24). Discussion: Each of the MCAT component scores was significantly associated with USMLE Step 1 and Step 2 CK scores, although the effect size was small. Being in the top or bottom scoring range of the MCAT exam was predictive of being in the top or bottom scoring range of the USMLE exams, although the strengths of the associations were weak to moderate. These results indicate that MCAT scores are predictive of student performance on the USMLE exams but, given the small effect sizes, should be considered as part of a holistic view of the student.
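For readers who want to see how a "variance explained" figure like the 17.7% above is computed, here is a minimal sketch on synthetic stand-in data (the variables, sample, and effect sizes are invented, not the study's):

```python
import numpy as np

def r_squared(X, y):
    """Proportion of variance in y explained by a linear model with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(42)
components = rng.normal(size=(1065, 3))              # three admission-test components
outcome = components @ [0.3, 0.2, 0.15] + rng.normal(size=1065)
print(f"R^2 = {r_squared(components, outcome):.3f}")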
Akai, Takanori; Taniguchi, Daigo; Oda, Ryo; Asada, Maki; Toyama, Shogo; Tokunaga, Daisaku; Seno, Takahiro; Kawahito, Yutaka; Fujii, Yosuke; Ito, Hirotoshi; Fujiwara, Hiroyoshi; Kubo, Toshikazu
2016-04-01
Contrast-enhanced magnetic resonance imaging with maximum intensity projection (MRI-MIP) is an easy, useful imaging method for evaluating synovitis in rheumatoid hands. However, the prognosis of synovitis-positive joints on MRI-MIP has not been clarified. The aim of this study was to evaluate the relationship between synovitis visualized by MRI-MIP and joint destruction on X-rays in rheumatoid hands. The wrists, metacarpophalangeal (MP) joints, and proximal interphalangeal (PIP) joints of both hands (500 joints in total) were evaluated in 25 rheumatoid arthritis (RA) patients. Synovitis was scored from grade 0 to 2 on the MRI-MIP images. The Sharp/van der Heijde score and Larsen grade were used for radiographic evaluation. The relationships between the MIP score and the progression of radiographic scores, and between the MIP score and bone marrow edema on MRI, were analyzed using the trend test. As the MIP score increased, the Sharp/van der Heijde score and Larsen grade progressed more severely. The rate of bone marrow edema-positive joints also increased with higher MIP scores. MRI-MIP imaging of RA hands is a clinically useful method that allows easy, semi-quantitative evaluation of synovitis and can be used to predict joint destruction.
Kernel score statistic for dependent data.
Malzahn, Dörthe; Friedrichs, Stefanie; Rosenberger, Albert; Bickeböller, Heike
2014-01-01
The kernel score statistic is a global covariance component test over a set of genetic markers. It provides a flexible modeling framework and does not collapse marker information. We generalize the kernel score statistic to allow for familial dependencies and to adjust for random confounder effects. With this extension, we adjust our analysis of real and simulated baseline systolic blood pressure for polygenic familial background. We find that the kernel score test gains appreciably in power through the use of sequencing compared to tag single-nucleotide polymorphisms for very rare single nucleotide polymorphisms with <1% minor allele frequency.
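A stripped-down version of such a covariance-component score test can be sketched as follows (our illustration: a linear kernel, an intercept-only null model, and a permutation p-value; permuting subjects assumes independence, so the familial-dependence adjustment that is this paper's contribution is deliberately omitted):

```python
import numpy as np

def kernel_score_test(G, y, n_perm=2000, seed=0):
    """Score statistic Q = r' K r with linear kernel K = G G',
    and a permutation p-value under exchangeable subjects."""
    rng = np.random.default_rng(seed)
    r = y - y.mean()              # residuals of an intercept-only model
    K = G @ G.T
    q_obs = r @ K @ r
    exceed = 0
    for _ in range(n_perm):
        rp = rng.permutation(r)
        exceed += (rp @ K @ rp) >= q_obs
    return q_obs, exceed / n_perm

rng = np.random.default_rng(1)
G = rng.binomial(2, 0.05, size=(300, 20)).astype(float)  # rare-variant genotypes
y = 0.8 * G[:, 0] + rng.normal(size=300)                 # one causal marker
print(kernel_score_test(G, y))
```

Because Q aggregates marker effects through the kernel rather than summing allele counts, no marker information is collapsed, which is the property the abstract emphasizes.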
Powers, Donald; Schedl, Mary; Papageorgiou, Spiros
2017-01-01
The aim of this study was to develop, for the benefit of both test takers and test score users, enhanced "TOEFL ITP"® test score reports that go beyond the simple numerical scores that are currently reported. To do so, we applied traditional scale anchoring (proficiency scaling) to item difficulty data in order to develop performance…
李仁杰; 路紫
2011-01-01
Virtual representation of theme parks using virtual reality (VR) technology has achieved not only a high degree of realism in the appearance and texture of every landscape element, but can also show landscape construction, landscape evolution, and the man-land relationship, and describe the geo-spatial pattern; the latter has received less attention from researchers. Designing a landscape model based on semantic features in Virtual Geographic Environments (VGE) is a good way to achieve this two-level modeling, and a water and soil conservation technology park located in Yanqing County, Beijing, China, is selected as a case study to demonstrate the idea. The water and soil conservation technology park is a special form of theme park designed for experiments in water and soil conservation technology, popular science education on protection of the ecological environment, and leisure and recreation activities. However, the park's functions are greatly restricted by its area, location, and ecological capacity, so it cannot satisfy the multifunctional needs of education on ecological environment protection, technology demonstration, and ecotourism development. The authors design a classification system of themes and virtual objects in the water and soil conservation technology park, and build a level-of-detail (LOD) model for describing the theme park in the computer virtual environment based on the semantic context of the ecological landscape. The LOD model can show the features and landscapes of the theme park at different view scales, such as the whole view, a middle-scale view, special partial views, and even a single-feature view in the virtual environment. The LOD model can also construct the virtual environment around different themes and functions, or design a special sight-seeing route, by combining LOD models at different scales with other landscape features. This case study was carried out in ArcGIS 9.2 and the Skyline
A new simple score (ABS) for assessing behavioral and psychological symptoms of dementia.
Abe, K; Yamashita, T; Hishikawa, N; Ohta, Y; Deguchi, K; Sato, K; Matsuzono, K; Nakano, Y; Ikeda, Y; Wakutani, Y; Takao, Y
2015-03-15
In addition to cognitive impairment, behavioral and psychological symptoms of dementia (BPSD) are another important aspect of most dementia patients. This study was designed to provide a new simple assessment of BPSD. We first conducted a clinical survey of the local community, sending an inquiry letter to all members (n=129) of a dementia caregivers' society, and then created a new BPSD score for dementia with 10 BPSD items. This new simple BPSD score was compared with a standard detailed BPSD score, the neuropsychiatric inventory (NPI), for possible correlation (n=792) and time to complete (n=136). Inter-rater reliability was examined by comparing scores between main and second caregivers (n=70) for AD. Based on the clinical survey of local caregivers, a new BPSD score for dementia (ABS, Abe's BPSD score) was created, in which each BPSD item was allotted an already-weighted score (maximum 1-9) based on frequency and severity, finalized by taking temporal occurrence into account. ABS, filled in by the main caregiver with a full score of 44, was well correlated with the NPI (r=0.716), with good agreement between the second and main caregivers. ABS provides a new simple and quick test for BPSD assessment, with a good correlation to the NPI but a shorter time, and with high inter-rater reliability. Thus ABS is useful for evaluating BPSD in mild to moderate dementia patients.
Present and Last Glacial Maximum climates as states of maximum entropy production
Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere
2011-01-01
The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...
GMAT Scores of Undergraduate Economics Majors
Nelson, Paul A.; Monson, Terry D.
2008-01-01
The average score of economics majors on the Graduate Management Admission Test (GMAT) exceeds those of nearly all humanities and arts, social sciences, and business undergraduate majors but not those of most science, engineering, and mathematics majors. (Contains 1 table.)
Surgical Apgar Score Predicts Postoperative Complications in ...
neurotrauma patients by using an effective scoring system can reduce ... complications was 7.04 while for patients with complications was ... their SAS for purposes of risk stratification; high risk (0-4), medium ...
Multifactor Screener in OPEN: Scoring Procedures & Results
Scoring procedures were developed to convert a respondent's screener responses to estimates of individual dietary intake for percentage energy from fat, grams of fiber, and servings of fruits and vegetables.
Film scoring today - Theory, practice and analysis
Flach, Paula Sophie
2012-01-01
This thesis considers film scoring by taking a closer look at the theoretical discourse throughout the last decades, examining current production practice of film music and showcasing a musical analysis of the film Inception (2010).
Knee Injury and Osteoarthritis Outcome Score (KOOS)
Collins, N J; Prinsen, C A C; Christensen, R
2016-01-01
OBJECTIVE: To conduct a systematic review and meta-analysis to synthesize evidence regarding measurement properties of the Knee injury and Osteoarthritis Outcome Score (KOOS). DESIGN: A comprehensive literature search identified 37 eligible papers evaluating KOOS measurement properties in partici...
Use score card to boost quality.
2002-10-01
Keeping a score card can identify problem areas and track improvements. When specific goals are reached, staff are given rewards such as thank-you letters, tokens, or pizza parties. Staff are kept informed about the results of the score card through bulletin board postings, staff meetings, and the hospital Intranet. Data are collected with manual entry by nursing staff, chart review by performance improvement, and a computerized program.
Desensitizing efficacy of Colgate Sensitive Maximum Strength and Fresh Mint Sensodyne dentifrices.
Sowinski, J A; Bonta, Y; Battista, G W; Petrone, D; DeVizio, W; Petrone, M; Proskin, H M
2000-06-01
To investigate the relative effectiveness of a new dentifrice containing 5.0% potassium nitrate and 0.454% stannous fluoride in a silica base (Colgate Sensitive Maximum Strength dentifrice) in reducing dentin hypersensitivity over an 8-wk period, as compared to that provided by a commercially available antihypersensitivity dentifrice containing 5.0% potassium nitrate and 0.76% sodium monofluorophosphate in a dicalcium phosphate base (Fresh Mint Sensodyne dentifrice). To qualify for participation in this examiner-blind clinical study, male and female adults from the central New Jersey area were required to present with tactile and air blast dentin hypersensitivity in at least two non-molar teeth at two examinations spaced 1 wk apart. Qualifying subjects were randomized into two treatment groups, which were balanced for gender, age, and baseline sensitivity scores. Subjects were provided with a soft-bristled toothbrush. Examinations for tactile and air blast sensitivity were repeated after 4 wks' use of the study dentifrices, and again after 8 wks' use. 97 subjects complied with the protocol and completed the entire study. After 4 wks, subjects assigned to the Colgate Sensitive Maximum Strength dentifrice group exhibited a statistically significant improvement over the Sensodyne dentifrice group with respect to tactile sensitivity scores, and a statistically significant improvement over the Sensodyne dentifrice group with respect to air blast sensitivity scores. Corresponding significant improvements were present after 8 wks. Thus, the results of this examiner-blind clinical study support the conclusion that the Colgate Sensitive Maximum Strength dentifrice containing 5.0% potassium nitrate and 0.454% stannous fluoride in a silica base provided superior control of tactile and air blast sensitivity compared to the clinically tested, commercially available anti-hypersensitivity Sensodyne dentifrice containing 5.0% potassium nitrate and 0
Self-reported maximum walking distance in persons with MS may affect the EDSS.
Berger, Warren; Payne, Michael W C; Morrow, Sarah A
2017-08-15
In persons with MS (PwMS), the Expanded Disability Status Scale (EDSS) is used to monitor disability progression. Scores between 4.0 and 7.0 are determined by maximum walking distance. Self-estimation of this value is often employed in clinic settings. Our aim was to examine the accuracy with which PwMS estimate their walking distance, and to observe subsequent changes to the EDSS. This prospective cohort study recruited PwMS with a previously recorded EDSS of 3.5-7.0. Participants estimated their maximum walking distance and then walked as far as they could along a pre-specified course. Each distance was converted to an EDSS score, the "estimated EDSS" and the "actual EDSS". Chi-square analysis was used to compare EDSS scores. Logistic regression was used to determine predictors of inaccurate estimation. Of the 66 PwMS in this study, 43.9% had a difference between the actual EDSS and the estimated EDSS. Median estimated EDSS was 4.75 (range 3.0-7.0); after the walking assessment, median actual EDSS was 5.0 (range 3.0-7.0), which represented a significant difference [χ² (df 64, N=66) = 206.9]. EDSS decreased in 9 PwMS (13.6%) and increased in 20 PwMS (30.3%). Logistic regression did not find any demographic/disease characteristic to be predictive of this discrepancy. Some PwMS do not accurately estimate maximum walking distance; only 56.1% of PwMS accurately estimated their actual EDSS. Copyright © 2017 Elsevier B.V. All rights reserved.
Pharmacophore-based similarity scoring for DOCK.
Jiang, Lingling; Rizzo, Robert C
2015-01-22
Pharmacophore modeling incorporates geometric and chemical features of known inhibitors and/or targeted binding sites to rationally identify and design new drug leads. In this study, we have encoded a three-dimensional pharmacophore matching similarity (FMS) scoring function into the structure-based design program DOCK. Validation and characterization of the method are presented through pose reproduction, crossdocking, and enrichment studies. When used alone, FMS scoring dramatically improves pose reproduction success to 93.5% (a ∼20% increase) and reduces sampling failures to 3.7% (a ∼6% drop) compared to the standard energy score (SGE) across 1043 protein-ligand complexes. The combined FMS+SGE function further improves success to 98.3%. Crossdocking experiments using FMS and FMS+SGE scoring for six diverse protein families similarly showed improvements in success, provided proper pharmacophore references are employed. For enrichment, incorporating pharmacophores during sampling and scoring also, in most cases, yields improved outcomes when docking and rank-ordering libraries of known actives and decoys for 15 systems. Retrospective analyses of virtual screenings to three clinical drug targets (EGFR, IGF-1R, and HIVgp41), using X-ray structures of known inhibitors as pharmacophore references, are also reported, including a customized FMS scoring protocol to bias on selected regions in the reference. Overall, the results and fundamental insights gained from this study should benefit the docking community in general, particularly researchers using the new FMS method to guide computational drug discovery with DOCK.
Lemaine, Valerie; Hoskin, Tanya L; Farley, David R; Grant, Clive S; Boughey, Judy C; Torstenson, Tiffany A; Jacobson, Steven R; Jakub, James W; Degnim, Amy C
2015-09-01
With increasing use of immediate breast reconstruction (IBR), mastectomy skin flap necrosis (MSFN) is a clinical problem that deserves further study. We propose a validated scoring system to discriminate MSFN severity and standardize its assessment. Women who underwent skin-sparing (SSM) or nipple-sparing mastectomy (NSM) and IBR from November 2009 to October 2010 were studied retrospectively. A workgroup of breast and plastic surgeons scored postoperative photographs using the skin ischemia necrosis (SKIN) score to assess depth and surface area of MSFN. We evaluated correlation of the SKIN score with reoperation for MSFN and its reproducibility in an external sample of surgeons. We identified 106 subjects (175 operated breasts: 103 SSM, 72 NSM) who had ≥1 postoperative photograph within 60 days. SKIN scores correlated strongly with need for reoperation for MSFN, with an AUC of 0.96 for SSM and 0.89 for NSM. External scores agreed well with the gold standard scores for the breast mound photographs with weighted kappa values of 0.82 (depth), 0.56 (surface area), and 0.79 (composite score). The agreement was similar for the nipple-areolar complex photographs: 0.75 (depth), 0.63 (surface area), and 0.79 (composite score). A simple scoring system to assess the severity of MSFN is proposed, incorporating both depth and surface area of MSFN. The SKIN score correlates strongly with the need for reoperation to manage MSFN and is reproducible among breast and plastic surgeons.
GalaxyDock BP2 score: a hybrid scoring function for accurate protein-ligand docking
Baek, Minkyung; Shin, Woong-Hee; Chung, Hwan Won; Seok, Chaok
2017-07-01
Protein-ligand docking is a useful tool for providing atomic-level understanding of protein functions in nature and design principles for artificial ligands or proteins with desired properties. The ability to identify the true binding pose of a ligand to a target protein among numerous possible candidate poses is an essential requirement for successful protein-ligand docking. Many previously developed docking scoring functions were trained to reproduce experimental binding affinities and were also used for scoring binding poses. However, in this study, we developed a new docking scoring function, called GalaxyDock BP2 Score, by directly training the scoring power of binding poses. This function is a hybrid of physics-based, empirical, and knowledge-based score terms that are balanced to strengthen the advantages of each component. The performance of the new scoring function exhibits significant improvement over existing scoring functions in decoy pose discrimination tests. In addition, when the score is used with the GalaxyDock2 protein-ligand docking program, it outperformed other state-of-the-art docking programs in docking tests on the Astex diverse set, the Cross2009 benchmark set, and the Astex non-native set. GalaxyDock BP2 Score and GalaxyDock2 with this score are freely available at http://galaxy.seoklab.org/softwares/galaxydock.html.
Mazza, Gina L; Enders, Craig K; Ruehlman, Linda S
2015-01-01
Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002; Graham, 2009; Enders, 2010). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program.
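One consequence of proration, attenuation of the scale's correlation with an external criterion, is easy to demonstrate by simulation (our toy one-factor model, not the paper's simulation design; the FIML machinery itself is beyond a short sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 5000, 6

trait = rng.normal(size=n)                               # latent construct
items = 0.7 * trait[:, None] + rng.normal(size=(n, k))   # one-factor items
criterion = 0.5 * trait + rng.normal(size=n)             # external outcome

full_scale = items.mean(axis=1)

# 50% item-level MCAR missingness, then proration (average of available items);
# rows with every item missing produce NaN and are dropped below
missing = rng.random((n, k)) < 0.5
prorated = np.nanmean(np.where(missing, np.nan, items), axis=1)
ok = ~np.isnan(prorated)

r_full = np.corrcoef(full_scale[ok], criterion[ok])[0, 1]
r_pro = np.corrcoef(prorated[ok], criterion[ok])[0, 1]
print(f"complete-data r = {r_full:.3f}, prorated r = {r_pro:.3f}")
```

Averaging fewer items raises the error variance of the scale score, which attenuates its correlations; treating the scale score as missing and letting the items inform estimation, as the paper proposes, avoids this loss.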
A MAXIMUM ENTROPY CHUNKING MODEL WITH N-FOLD TEMPLATE CORRECTION
Anonymous
2007-01-01
This letter presents a new chunking method based on a Maximum Entropy (ME) model with an N-fold template correction model. First, two types of machine learning models are described. Based on an analysis of the two models, a chunking model that combines the benefits of the conditional probability model and the rule-based model is proposed. The selection of features and rule templates in the chunking model is discussed. Experimental results on the CoNLL-2000 corpus show that this approach achieves impressive accuracy in terms of F-score: 92.93%. Compared with the ME model and the ME Markov model, the new chunking model achieves better performance.
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of the k-limited maximum base was specialized into two special cases: the subset D of the k-limited maximum base problem was taken to be an independent set and a circuit of the matroid, respectively. It was proved that under these circumstances the collections of k-limited bases satisfy the base axioms, so a new matroid is determined and the k-limited maximum base problem is transformed into the maximum base problem of this new matroid. For each of the two special cases, an algorithm, in essence a greedy algorithm on the former matroid, is presented. The algorithms are proved to be correct and more efficient, in terms of algorithmic complexity, than the algorithm presented by Ma Zhongfan.
Diet Quality Scores of Australian Adults Who Have Completed the Healthy Eating Quiz.
Williams, Rebecca L; Rollo, Megan E; Schumacher, Tracy; Collins, Clare E
2017-08-15
Higher scores obtained using diet quality and variety indices are indicators of more optimal food and nutrient intakes and lower chronic disease risk. The aim of this paper is to describe the overall diet quality and variety in a sample of Australian adults who completed an online diet quality self-assessment tool, the Healthy Eating Quiz. The Healthy Eating Quiz takes approximately five minutes to complete online and converts user responses into a total diet quality score (out of a maximum of 73 points), which is then categorized into feedback groups ranging from 'needs work' upward. Healthy Eating Quiz scores were higher in those aged 45-75 years compared to those aged 16-44 years. While Healthy Eating Quiz data indicate that individuals receiving feedback on how to improve their score can improve their diet quality, there is a need for further nutrition promotion interventions in Australian adults.
An Interval Maximum Entropy Method for Quadratic Programming Problem
RUI Wen-juan; CAO De-xin; SONG Xie-wu
2005-01-01
Using the ideas of the maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide region deletion test rules, and design an interval maximum entropy algorithm for the quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
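The core smoothing step can be illustrated in isolation (a minimal sketch: the log-sum-exp "maximum entropy function" as a penalty on a toy QP; the interval extension and region-deletion rules that make the full algorithm rigorous are not shown, and the names, penalty weight mu, and smoothing parameter p are our choices):

```python
import numpy as np
from scipy.optimize import minimize

def aggregate_max(g, p=50.0):
    """Log-sum-exp smooth approximation of max(0, g_1, ..., g_m).
    As p -> infinity it converges to the exact max; including the constant
    0 term makes the penalty (nearly) vanish when all constraints hold."""
    vals = np.concatenate([[0.0], g])
    m = vals.max()                          # shift for numerical stability
    return m + np.log(np.exp((vals - m) * p).sum()) / p

# toy QP: minimize 0.5*||x||^2 - x1  subject to  x1 + x2 <= 1 and x2 >= 0
def objective(x, mu=100.0, p=50.0):
    f = 0.5 * x @ x - x[0]
    g = np.array([x[0] + x[1] - 1.0, -x[1]])
    return f + mu * aggregate_max(g, p)

res = minimize(objective, x0=np.zeros(2), method="BFGS")
print(res.x)   # close to the constrained optimum (1, 0)
```

Because the penalized objective is smooth and differentiable, standard unconstrained solvers apply, which is exactly the transformation the abstract describes.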
Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)
2009-02-19
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic
Integer Programming Model for Maximum Clique in Graph
YUAN Xi-bo; YANG You; ZENG Xin-hai
2005-01-01
The maximum clique or maximum independent set of a graph is a classical problem in graph theory. Combining Boolean algebra and integer programming, two integer programming models for the maximum clique problem, which improve on earlier results, are designed in this paper. A programming model for the maximum independent set then follows as a corollary of the main results. These two models can easily be applied in computer algorithms and software, and are suitable for graphs of any scale. Finally, the models are presented as Lingo algorithms, and verified and compared on several examples.
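A standard integer-programming formulation (not necessarily the paper's improved one) maximizes the number of selected vertices subject to x_i + x_j <= 1 for every non-adjacent pair, with binary x. For small graphs the same optimum can be found by exhaustive search, which makes a convenient reference implementation:

```python
from itertools import combinations

def max_clique(vertices, edges):
    """Exhaustive maximum-clique search: try subset sizes from largest to
    smallest and return the first subset whose pairs are all edges.
    Fine for small graphs; integer programming scales much further."""
    for size in range(len(vertices), 0, -1):
        for cand in combinations(sorted(vertices), size):
            if all(frozenset(p) in edges for p in combinations(cand, 2)):
                return set(cand)
    return set()

V = {1, 2, 3, 4, 5}
E = {frozenset(p) for p in [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]}
print(max_clique(V, E))   # {1, 2, 3}
```

The complement-graph trick connects the two problems in the abstract: a maximum independent set of G is a maximum clique of G's complement.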
Counterexamples to convergence theorem of maximum-entropy clustering algorithm
于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生
2003-01-01
In this paper, we survey the development of the maximum-entropy clustering algorithm, point out that the maximum-entropy clustering algorithm is not new in essence, and construct two examples to show that the iterative sequence given by the maximum-entropy clustering algorithm may not converge to a local minimum of its objective function, but to a saddle point. Based on these results, our paper shows that the convergence theorem of the maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general.
Heart valve surgery: EuroSCORE vs. EuroSCORE II vs. Society of Thoracic Surgeons score
Muhammad Sharoz Rabbani
2014-12-01
Background: This is a validation study comparing the European System for Cardiac Operative Risk Evaluation (EuroSCORE II) with the previous additive (AES) and logistic EuroSCORE (LES) and with the Society of Thoracic Surgeons' (STS) risk prediction algorithm, for patients undergoing valve replacement with or without bypass in Pakistan. Patients and Methods: Clinical data of 576 patients undergoing valve replacement surgery between 2006 and 2013 were retrospectively collected and individual expected risks of death were calculated by all four risk prediction algorithms. Performance of these risk algorithms was evaluated in terms of discrimination and calibration. Results: There were 28 deaths (4.8%) among 576 patients, which was lower than the predicted mortality of 5.16%, 6.96% and 4.94% by AES, LES and EuroSCORE II, but higher than the 2.13% predicted by the STS scoring system. For single and double valve replacement procedures, EuroSCORE II was the best predictor of mortality, with the highest Hosmer-Lemeshow (H-L) test p values (0.346 to 0.689) and areas under the receiver operating characteristic (ROC) curve (0.637 to 0.898). For valve plus concomitant coronary artery bypass grafting (CABG) patients, actual mortality was 1.88%. The STS calculator turned out to be the best predictor of mortality for this subgroup, with H-L p values of 0.480 to 0.884 and ROC areas of 0.657 to 0.775. Conclusions: For the Pakistani population, EuroSCORE II is an accurate predictor of individual operative risk in patients undergoing isolated valve surgery, whereas STS performs better in the valve plus CABG group.
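The two performance notions used throughout this entry, discrimination (ROC area / C-statistic) and calibration (Hosmer-Lemeshow), can both be computed in a few lines; this is a generic sketch on simulated risks, not the study's data or software:

```python
import numpy as np
from scipy.stats import chi2

def c_statistic(risk, died):
    """Probability a random death was assigned higher risk than a random survivor."""
    r1, r0 = risk[died == 1], risk[died == 0]
    diff = r1[:, None] - r0[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def hosmer_lemeshow(risk, died, groups=10):
    """H-L chi-square over risk deciles; a large p value suggests good calibration."""
    order = np.argsort(risk)
    stat = 0.0
    for idx in np.array_split(order, groups):
        n, e, o = len(idx), risk[idx].sum(), died[idx].sum()
        pbar = e / n
        stat += (o - e) ** 2 / (n * pbar * (1 - pbar))
    return stat, chi2.sf(stat, groups - 2)

rng = np.random.default_rng(3)
risk = rng.beta(1, 15, size=576)                 # simulated predicted mortality
died = (rng.random(576) < risk).astype(int)      # outcomes consistent with the risks
print(c_statistic(risk, died), hosmer_lemeshow(risk, died))
```

A model can discriminate well yet calibrate poorly (or vice versa), which is why validation studies such as this one report both.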
RISK FACTOR DIAGNOSTIC SCORE IN DIABETIC FOOT
Mohamed Shameem P. M
2016-09-01
INTRODUCTION: Diabetic foot ulcers vary in their clinical presentation and severity and therefore pose a challenging problem to the treating surgeon in predicting the clinical course and the end result of treatment. Clinical studies have shown that there are certain risk factors for the progression of foot ulcers in diabetics, and it may therefore be possible to predict the course of an ulcerated foot at presentation itself, thus instituting proper therapy without delay. In other words, if clinical scoring indicates that a particular ulcer has the highest chance of amputation, one may be able to take an early decision for amputation and avoid septic complications, inconvenience to the patient, long hospital stay and the cost of treatment. AIM OF THE STUDY: To evaluate the above-mentioned scoring system in predicting the course of diabetic foot ulcers. MATERIALS AND METHODS: 50 patients with diabetic foot attending the OPD of the Department of Surgery of the Government Hospital attached to Calicut Medical College were included in the present study. After thorough history taking and clinical examination, six risk factors (age, pedal vessels, renal function, neuropathy, radiological findings and ulcers) were assessed in the patients by assigning scoring points to each of them. The total number of points scored by the patients at the time of admission or OPD treatment was correlated with the final outcome, whether amputation or conservative management. All data were analysed using standard statistical methods. OBSERVATIONS AND RESULTS: There were 12 females and 38 males, a female-to-male ratio of 1:3.1. All were aged above 30 years. Twenty-four (48%) of them were between 30-60 years and twenty-six (52%) were above 60 years. 10 patients were treated conservatively, with risk scores ranging from 10 to 35. Six had single toe loss, with risk scores of 25 to 35. Six had multiple toe loss
A scoring framework for predicting protein structures
Zou, Xiaoqin
2013-03-01
We have developed a statistical mechanics-based iterative method to extract statistical atomic interaction potentials from known, non-redundant protein structures. Our method circumvents the long-standing reference state problem in deriving traditional knowledge-based scoring functions, by using rapid iterations through a physical, global convergence function. The rapid convergence of this physics-based method, unlike other parameter optimization methods, warrants the feasibility of deriving distance-dependent, all-atom statistical potentials to keep the scoring accuracy. The derived potentials, referred to as ITScore/Pro, have been validated using three diverse benchmarks: the high-resolution decoy set, the AMBER benchmark decoy set, and the CASP8 decoy set. Significant improvement in performance has been achieved. Finally, comparisons between the potentials of our model and potentials of a knowledge-based scoring function with a randomized reference state have revealed the reason for the better performance of our scoring function, which could provide useful insight into the development of other physical scoring functions. The potentials developed in the present study are generally applicable for structural selection in protein structure prediction.
SCORE SETS IN ORIENTED 3-PARTITE GRAPHS
Anonymous
2007-01-01
Let D(U, V, W) be an oriented 3-partite graph with |U| = p, |V| = q and |W| = r. For any vertex x in D(U, V, W), let d⁺(x) and d⁻(x) be the outdegree and indegree of x respectively. Define a_{u_i} (or simply a_i) = q + r + d⁺(u_i) − d⁻(u_i), b_{v_j} (or simply b_j) = p + r + d⁺(v_j) − d⁻(v_j) and c_{w_k} (or simply c_k) = p + q + d⁺(w_k) − d⁻(w_k) as the scores of u_i in U, v_j in V and w_k in W respectively. The set A of distinct scores of the vertices of D(U, V, W) is called its score set. In this paper, we prove that if a_1 is a non-negative integer, a_i (2 ≤ i ≤ n−1) are even positive integers and a_n is any positive integer, then for n ≥ 3 there exists an oriented 3-partite graph with score set A = {a_1, a_1 + a_2, …, a_1 + a_2 + ⋯ + a_n}, except when A = {0, 2, 3}. Some more results for score sets in oriented 3-partite graphs are obtained.
Disease severity scoring systems in dermatology
Cemal Bilaç
2016-06-01
Scoring systems have been developed to interpret disease severity objectively by evaluating the parameters of the disease. Body surface area, the visual analogue scale, and physician global assessment are the most frequently used scoring systems for evaluating the clinical severity of dermatological diseases. Apart from these, many disease-specific scoring systems have been developed, including for acne (acne vulgaris, acne scars), alopecia (androgenetic alopecia, traction alopecia), bullous diseases (autoimmune bullous diseases, toxic epidermal necrolysis), dermatitis (atopic dermatitis, contact dermatitis, dyshidrotic eczema), hidradenitis suppurativa, hirsutism, connective tissue diseases (dermatomyositis, skin involvement of systemic lupus erythematosus (LE), discoid LE, scleroderma), lichen planopilaris, mastocytosis, melanocytic lesions, melasma, onychomycosis, oral lichen planus, pityriasis rosea, psoriasis (psoriasis vulgaris, psoriatic arthritis, nail psoriasis), sarcoidosis, urticaria, and vitiligo. Disease severity scoring methods are ever more extensively used in clinical dermatology practice: to form an opinion about prognosis by determining disease severity, to decide on the most suitable treatment modality for the patient, to evaluate the efficacy of the applied medication, and to compare the efficacy of different treatment methods in clinical studies.
Gambling scores for earthquake predictions and forecasts
Zhuang, Jiancang
2010-04-01
This paper presents a new method, the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecast scheme and treat each earthquake equally regardless of magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster gains if he succeeds, according to a fair rule, and takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and the reference model is the Poisson model.
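The discrete bookkeeping implied by the fair rule is simple to write down (a minimal sketch of that rule only; the function name, stakes, and reference probabilities below are invented for illustration, and the continuous point-process extension is not shown):

```python
def gambling_score(bets, outcomes, ref_probs, start=100.0):
    """Reputation-point bookkeeping under a fair rule: a forecaster who bets
    r points on an event that the reference model assigns probability p0
    gains r * (1 - p0) / p0 if the event occurs and loses r otherwise,
    so the expected change is zero whenever the reference model is right."""
    points = start
    for r, hit, p0 in zip(bets, outcomes, ref_probs):
        points += r * (1 - p0) / p0 if hit else -r
    return points

# a forecaster stakes 5 points on each of three alarms; the reference model
# deems each alarm a 10%-probability event
print(gambling_score(bets=[5, 5, 5], outcomes=[True, False, True],
                     ref_probs=[0.1, 0.1, 0.1]))   # 185.0
```

Because the payout odds come from the reference model, a forecaster can only accumulate points in expectation by genuinely outperforming that model, which is the risk compensation the abstract describes.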
[Overview of regulatory aspects guiding tablet scoring].
Teixeira, Maíra Teles; Sá-Barreto, Lívia Cristina Lira; Silva, Dayde Lane Mendonça; Cunha-Filho, Marcílio Sergio Soares
2016-06-01
Tablet scoring is a controversial but common practice used to adjust doses, facilitate drug intake, or lower the cost of drug treatment, especially in children and the elderly. The risks of tablet scoring are mainly related to inaccuracies in the resulting dose and stability problems. The aim of this article is to provide an overview of worldwide guidelines regarding tablet scoring. We found that regulatory health agencies in Mercosur countries as well as other South American countries do not have published standards addressing tablet splitting. Among the surveyed health agencies, the Food and Drug Administration (FDA) in the United States is the only one to present standards, ranging from splitting instructions to regulation of the manufacturing process. The concept of functional scoring implemented by the FDA has introduced some level of guarantee as to the ability of tablets to be split. In conclusion, technical and scientific bases are still insufficient to guide health rules on this subject, making the decision on scoring, in certain situations, random and highly risky to public health. The need for more detailed regulation is vital to ensure the safety of tablet medications.
Doebeli, A; Michel, E; Bettschart, R; Hartnack, S; Reichler, I M
2013-11-01
The effects of alfaxalone and propofol on neonatal vitality were studied in 22 bitches and 81 puppies after their use as anesthetic induction agents for emergency cesarean section. After assessment that surgery was indicated, bitches were randomly allocated to receive alfaxalone 1 to 2 mg/kg body weight or propofol 2 to 6 mg/kg body weight for anesthetic induction. Both drugs were administered intravenously to effect to allow endotracheal intubation, and anesthesia was maintained with isoflurane in oxygen. Neonatal vitality was assessed using a modified Apgar score that took into account heart rate, respiratory effort, reflex irritability, motility, and mucous membrane color (maximum score = 10); scores were assigned at 5, 15, and 60 minutes after delivery. Neither the number of puppies delivered nor the proportion of surviving puppies up to 3 months after delivery differed between groups. Anesthetic induction drug and time of scoring were associated with the Apgar score, but delivery time was not. Apgar scores in the alfaxalone group were greater than those in the propofol group at 5, 15, and 60 minutes after delivery; the overall estimated score difference between the groups was 3.3 (confidence interval 95%: 1.6-4.9; P < 0.001). In conclusion, both alfaxalone and propofol can be safely used for induction of anesthesia in bitches undergoing emergency cesarean section. Although puppy survival was similar after the use of these drugs, alfaxalone was associated with better neonatal vitality during the first 60 minutes after delivery.
Sparks, T. H.; Huber, K.; Croxton, P. J.
2006-05-01
In 1944, John Willis produced a summary of his meticulous record keeping of weather and plants over the 30 years 1913–1942. This publication contains fixed-date, fixed-subject photography taken on the 1st of each month from January to May, using as subjects snowdrop Galanthus nivalis, daffodil Narcissus pseudo-narcissus, horse chestnut Aesculus hippocastanum and beech Fagus sylvatica. We asked 38 colleagues to rapidly assess the plant development in each of these photographs according to a supplied five-point score. The mean scores from this exercise were assessed in relation to mean monthly weather variables preceding the date of the photograph, and the consistency of scoring was examined according to the experience of the recorders. Plant development was more strongly correlated with mean temperature than with minimum or maximum temperatures or sunshine. No significant correlations with rainfall were detected. Whilst mean scores were very similar, botanists were more consistent in their scoring of developmental stages than non-botanists. However, there was no overall pattern for senior staff to be more consistent in scoring than junior staff. These results suggest that scoring of plant development stages on fixed dates could be a viable method of assessing the progress of the season. We discuss whether such recording could be more efficient than traditional phenology, especially at those sites that are not visited regularly and hence are less amenable to frequent or continuous observation to assess when a plant reaches a particular growth stage.
Defining Emergency Department Necessary Policies Based on Clinical Governance Accreditation Scores
Mehrdad Esmailian
2015-05-01
Introduction: The role of accreditation schemes in quality improvement of emergency departments (ED) has not been thoroughly evaluated. This study was therefore designed to appraise the effects of policies defined based on clinical governance accreditation scores on improvement of procedures in the ED. Methods: The present cohort study was carried out in the ED of Alzahra University Hospital, Isfahan, Iran in 2012-2013. In 2012 the deficiencies in the ED of this hospital were determined based on clinical governance indicators. The deficiencies were then classified by importance and changes were made in the ED. Finally, the effects of the changes were evaluated in August 2013. Results: The evaluation made in 2012 showed that 23 clinical and non-clinical procedures were carried out with deficiencies. Over the mentioned period, 6 (26.1%) procedures were not done at all, while 17 (73.9%) were done without a policy and irregularly. The overall score for clinical and non-clinical procedures in the ED before carrying out the accreditation scheme was 43/230 (18.7% of the maximum possible score). The score rose to 222, equal to 96.5% of the maximum possible score, after carrying out the scheme. This increase was statistically significant (p < 0.001). Conclusion: The findings of the present study showed that defining policies for improving ED procedures based on an accreditation scheme leads to improvement of medical services in the ED.
Prognostic Value of TIMI Score versus GRACE Score in ST-segment Elevation Myocardial Infarction
Luis C. L. Correia
2014-08-01
Background: The TIMI score for ST-segment elevation myocardial infarction (STEMI) was created and validated specifically for this clinical scenario, while the GRACE score is generic to any type of acute coronary syndrome. Objective: To identify which of the TIMI and GRACE scores has better prognostic performance in patients with STEMI. Methods: We included 152 individuals consecutively admitted for STEMI. The TIMI and GRACE scores were tested for their discriminatory ability (C-statistic) and calibration (Hosmer-Lemeshow) in relation to hospital death. Results: The TIMI score showed an equal distribution of patients across the ranges of low, intermediate and high risk (39%, 27% and 34%, respectively), as opposed to the GRACE score, which showed a predominant distribution at low risk (80%, 13% and 7%, respectively). Case-fatality was 11%. The C-statistic of the TIMI score was 0.87 (95% CI = 0.76 to 0.98), similar to GRACE (0.87, 95% CI = 0.75 to 0.99; p = 0.71). The TIMI score showed satisfactory calibration, represented by χ² = 1.4 (p = 0.92), well above the calibration of the GRACE score, which showed χ² = 14 (p = 0.08). This calibration is reflected in the expected incidence ranges for low, intermediate and high risk according to the TIMI score (0%, 4.9% and 25%, respectively), in contrast to GRACE (2.4%, 25% and 73%), which characterized the middle range inappropriately. Conclusion: Although the scores show similar discriminatory capacity for hospital death, the TIMI score had better calibration than GRACE. These findings need to be validated in populations of different risk profiles.
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...
49 CFR 174.86 - Maximum allowable operating speed.
2010-10-01
... 49 Transportation 2 2010-10-01 2010-10-01 false Maximum allowable operating speed. 174.86 Section... operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15...
Parametric optimization of thermoelectric elements footprint for maximum power generation
Rezania, A.; Rosendahl, Lasse; Yin, Hao
2014-01-01
Development studies of thermoelectric generator (TEG) systems have mostly been disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-perform...
30 CFR 56.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 56.19066 Section 56.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 56.19066 Maximum riders in a conveyance. In shafts inclined over 45...
30 CFR 57.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 57.19066 Section 57.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 57.19066 Maximum riders in a conveyance. In shafts inclined over 45...
Maximum Atmospheric Entry Angle for Specified Retrofire Impulse
T. N. Srivastava
1969-07-01
Maximum atmospheric entry angles for vehicles initially moving in elliptic orbits are investigated, and it is shown that a tangential retrofire impulse at the apogee results in the maximum entry angle. The equivalence of maximizing the entry angle and minimizing the retrofire impulse is also established.
5 CFR 838.711 - Maximum former spouse survivor annuity.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the...
46 CFR 151.45-6 - Maximum amount of cargo.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Maximum amount of cargo. 151.45-6 Section 151.45-6 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES BARGES CARRYING BULK LIQUID HAZARDOUS MATERIAL CARGOES Operations § 151.45-6 Maximum amount of cargo. (a)...
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
... rate effective on the date the supplemental annuity begins, before any reduction for a private pension... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52...
49 CFR 195.406 - Maximum operating pressure.
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a...
Maximum-entropy clustering algorithm and its global convergence analysis
Anonymous
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
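One fixed-temperature iteration of this scheme is easy to sketch (our minimal version: soft memberships proportional to exp(-beta * squared distance), centroids as membership-weighted means; the annealing schedule over beta and the convergence analysis are omitted):

```python
import numpy as np

def max_entropy_clustering(X, k, beta, n_iter=100, seed=0):
    """Maximum-entropy clustering at fixed beta. beta -> infinity recovers
    hard C-means; smaller beta gives softer memberships."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        # subtract the row minimum before exponentiating for numerical stability;
        # the shift cancels in the normalization
        w = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))
        w /= w.sum(axis=1, keepdims=True)
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return centers, w

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (0, 3)])
print(max_entropy_clustering(X, k=2, beta=4.0)[0])
```

Note that fixed points of this iteration are stationary points of the objective, which is compatible with the following article's counterexamples: stationarity alone does not guarantee a local minimum rather than a saddle point.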
Distribution of maximum loss of fractional Brownian motion with drift
Çağlar, Mine; Vardar-Acar, Ceren
2013-01-01
In this paper, we find bounds on the distribution of the maximum loss of fractional Brownian motion with H ≥ 1/2 and derive estimates of its tail probability. Asymptotically, the tail of the distribution of the maximum loss over [0, t] behaves like the tail of the marginal distribution at time t.
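The quantity studied here can be estimated numerically by exact simulation of fBm via Cholesky factorization of its covariance (a sketch under assumptions: helper names are ours, drift is omitted, though a drift term mu*t could simply be added to the path; O(n³) cost limits this to short paths):

```python
import numpy as np

def fbm_path(n, H, T=1.0, seed=0):
    """Exact fBm sample on a grid via Cholesky of the covariance
    Cov(B_s, B_t) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # jitter for stability
    return np.concatenate([[0.0], L @ np.random.default_rng(seed).normal(size=n)])

def maximum_loss(path):
    """sup over s <= t of (B_s - B_t): the largest drawdown of the path."""
    return (np.maximum.accumulate(path) - path).max()

print(maximum_loss(fbm_path(500, H=0.7)))
```

Running this over many seeds gives an empirical distribution of the maximum loss against which tail bounds of the kind derived in the paper can be eyeballed.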
48 CFR 436.575 - Maximum workweek-construction schedule.
2010-10-01
...-construction schedule. 436.575 Section 436.575 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE... Maximum workweek-construction schedule. The contracting officer shall insert the clause at 452.236-75, Maximum Workweek-Construction Schedule, if the clause at FAR 52.236-15 is used and the contractor's...
30 CFR 57.5039 - Maximum permissible concentration.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum permissible concentration. 57.5039... Maximum permissible concentration. Except as provided by standard § 57.5005, persons shall not be exposed to air containing concentrations of radon daughters exceeding 1.0 WL in active workings. ...
5 CFR 550.105 - Biweekly maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Biweekly maximum earnings limitation. 550.105 Section 550.105 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.105 Biweekly...
5 CFR 550.106 - Annual maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Annual maximum earnings limitation. 550.106 Section 550.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.106 Annual...
32 CFR 842.35 - Depreciation and maximum allowances.
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide”...
Evaluation of the "medication fall risk score".
Yazdani, Cyrus; Hall, Scott
2017-01-01
Results of a study evaluating the predictive validity of a fall screening tool in hospitalized patients are reported. Administrative claims data from two hospitals were analyzed to determine the discriminatory ability of the "medication fall risk score" (RxFS), a medication review fall-risk screening tool that is designed for use in conjunction with nurse-administered tools such as the Morse Fall Scale (MFS). Through analysis of data on administered medications and documented falls in a population of adults who underwent fall-risk screening at hospital admission over a 15-month period (n = 33,058), the predictive value of admission MFS scores, alone or in combination with retrospectively calculated RxFS-based risk scores, was assessed. Receiver operating characteristic (ROC) curve analysis and net reclassification improvement (NRI) analysis were used to evaluate improvements in risk prediction with the addition of RxFS data to the prediction model. The area under the ROC curve for the predictive model for falls comprising both MFS and RxFS scores was computed as 0.8014, which was greater than the area under the ROC curve associated with use of the MFS alone (0.7823, p = 0.0030). Screening based on MFS scores alone had 81.25% sensitivity and 61.37% specificity. Combined use of RxFS and MFS scores resulted in 82.42% sensitivity and 66.65% specificity (NRI = 0.0587, p = 0.0003). Reclassification of fall risk based on coadministration of the MFS and the RxFS tools resulted in a modest improvement in specificity without compromising sensitivity. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
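The evaluation above boils down to comparing two risk models by ROC area plus a two-category net reclassification improvement. The sketch below shows that arithmetic on hypothetical data; the logistic combination of MFS and RxFS and the 20% risk threshold are assumptions for illustration, not the study's actual model.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
mfs = rng.integers(0, 125, n).astype(float)   # hypothetical Morse Fall Scale scores
rxfs = rng.integers(0, 10, n).astype(float)   # hypothetical medication fall risk scores
fell = rng.random(n) < 1 / (1 + np.exp(-(0.03 * mfs + 0.3 * rxfs - 5)))

base = LogisticRegression(max_iter=1000).fit(mfs.reshape(-1, 1), fell)
both = LogisticRegression(max_iter=1000).fit(np.column_stack([mfs, rxfs]), fell)
p0 = base.predict_proba(mfs.reshape(-1, 1))[:, 1]
p1 = both.predict_proba(np.column_stack([mfs, rxfs]))[:, 1]
print("AUC, MFS alone :", roc_auc_score(fell, p0))
print("AUC, MFS + RxFS:", roc_auc_score(fell, p1))

# Two-category net reclassification improvement at an arbitrary 20% threshold.
up = (p1 >= 0.2) & (p0 < 0.2)
down = (p1 < 0.2) & (p0 >= 0.2)
nri = (up[fell].mean() - down[fell].mean()) + (down[~fell].mean() - up[~fell].mean())
print("NRI:", nri)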
NCACO-score: An effective main-chain dependent scoring function for structure modeling
Dong Xiaoxi
2011-05-01
Background: Development of effective scoring functions is a critical component of the success of protein structure modeling. Previously, many efforts have been dedicated to the development of scoring functions. Despite these efforts, development of an effective scoring function that can achieve both good accuracy and fast speed still presents a grand challenge. Results: Based on a coarse-grained representation of a protein structure using only four main-chain atoms (N, Cα, C and O), we develop a knowledge-based scoring function, called NCACO-score, that integrates different structural information to rapidly model protein structure from sequence. In testing on the Decoys'R'Us sets, we found that NCACO-score can effectively recognize native conformers from their decoys. Furthermore, we demonstrate that NCACO-score can effectively guide fragment assembly for protein structure prediction, achieving good performance in building the structure models for hard targets from CASP8 in terms of both accuracy and speed. Conclusions: Although NCACO-score is developed based on a coarse-grained model, it is able to discriminate native conformers from decoy conformers with high accuracy. NCACO-score is a very effective scoring function for structure modeling.
What Do Test Scores Really Mean? A Latent Class Analysis of Danish Test Score Performance
Munk, Martin D.; McIntosh, James
2014-01-01
Latent class Poisson count models are used to analyze a sample of Danish test score results from a cohort of individuals born in 1954-55, tested in 1968, and followed until 2011. The procedure takes account of unobservable effects as well as excessive zeros in the data. We show that the test scores ... of intelligence explain a significant proportion of the variation in test scores. This adds to the complexity of interpreting test scores and suggests that school culture and possible incentive problems make it more difficult to understand what the tests measure.
Vinardo: A Scoring Function Based on Autodock Vina Improves Scoring, Docking, and Virtual Screening.
Quiroga, Rodrigo; Villarreal, Marcos A
2016-01-01
Autodock Vina is a very popular, and highly cited, open source docking program. Here we present a scoring function which we call Vinardo (Vina RaDii Optimized). Vinardo is based on Vina, and was trained through a novel approach, on state of the art datasets. We show that the traditional approach to train empirical scoring functions, using linear regression to optimize the correlation of predicted and experimental binding affinities, does not result in a function with optimal docking capabilities. On the other hand, a combination of scoring, minimization, and re-docking on carefully curated training datasets allowed us to develop a simplified scoring function with optimum docking performance. This article provides an overview of the development of the Vinardo scoring function, highlights its differences with Vina, and compares the performance of the two scoring functions in scoring, docking and virtual screening applications. Vinardo outperforms Vina in all tests performed, for all datasets analyzed. The Vinardo scoring function is available as an option within Smina, a fork of Vina, which is freely available under the GNU Public License v2.0 from http://smina.sf.net. Precompiled binaries, source code, documentation and a tutorial for using Smina to run the Vinardo scoring function are available at the same address.
Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
Track score processing of multiple dissimilar sensors
Patsikas, Dimitrios
2007-01-01
This thesis studies a data fusion problem in which a number of different types of sensors are deployed in the vicinity of a ballistic missile launch. An objective of the thesis is to calculate a scoring function for each sensor track; the track file with the best (optimum) track score can then be used for guiding an interceptor to the threat within the boost phase. Seven active ground-based radars, two space-based passive infrared sensors and two active light detection and rangin...
Assigning Numerical Scores to Linguistic Expressions
María Jesús Campión
2017-07-01
In this paper, we study different methods of scoring linguistic expressions defined on a finite set, in the search for a linear order that ranks all those possible expressions. Among them, particular attention is paid to the canonical extension, and its representability through distances in a graph plus some suitable penalization of imprecision. The relationship between this setting and the classical problems of numerical representability of orderings, as well as extension of orderings from a set to a superset, is also explored. Finally, aggregation procedures of qualitative rankings and scorings are also analyzed.
A lumbar disc surgery predictive score card.
Finneson, B E
1978-06-01
A lumbar disc surgery predictive score card or questionnaire has been developed to assess potential candidates for excision of a herniated lumbar disc who have not previously undergone lumbar spine surgery. It is not designed to encompass patients who are being considered for other types of lumbar spine surgery, such as decompressive laminectomy or fusion. In an effort to make the "score card" usable by almost all physicians who are involved in lumbar disc surgery, only studies which have broad acceptance and are generally employed are included. Studies which have less widespread use such as electromyogram, discogram, venogram, special psychologic studies (MMPI, pain drawings) have been purposely excluded.
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x' (or Δ_t u_x) = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), x ∈ ℤ. We prove weak and strong maximum and minimum principles for corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
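To see the time-step dependence concretely, here is a minimal sketch of the discrete-time scheme above on a ring, with the bistable Nagumo nonlinearity f(u) = lam*u*(1-u)*(u-a); the lattice size, parameter values, and the claim that iterates stay in [0, 1] for this particular dt are illustrative assumptions, not the paper's stated bounds.

import numpy as np

def step(u, k, dt, lam=1.0, a=0.3):
    f = lam * u * (1 - u) * (u - a)
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # discrete Laplacian on a ring
    return u + dt * (k * lap + f)

u = np.where(np.arange(100) < 50, 1.0, 0.0)        # bistable front initial data
k, dt = 1.0, 0.2
for _ in range(200):
    u = step(u, k, dt)
# With 0 <= u0 <= 1 and a small enough dt, iterates should stay in [0, 1];
# a large dt breaks this invariance, mirroring the paper's point about time steps.
print("min", u.min(), "max", u.max())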
Experimental study on prediction model for maximum rebound ratio
LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong
2007-01-01
The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie exactly below the estimated possible maximum values, as expected, while the fourth lies close to and slightly above the estimated maximum possible PPV. The comparison shows that the predicted PPVs from the proposed prediction model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating PPV in a rock mass with a set of joints due to application of a two-dimensional compressional wave at the boundary of a tunnel or a borehole.
40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.
2010-07-01
... as specified in 40 CFR 1065.610. This is the maximum in-use engine speed used for calculating the NOX... procedures of 40 CFR part 1065, based on the manufacturer's design and production specifications for the..., power density, and maximum in-use engine speed. 1042.140 Section 1042.140 Protection of...
[Intraoperative crisis and surgical Apgar score].
Oshiro, Masakatsu; Sugahara, Kazuhiro
2014-03-01
An intraoperative crisis is an inevitable event for anesthesiologists. A crisis requires effective and coordinated management once it happens, but it is difficult to manage crises properly under extremely stressful conditions. Recently, it was reported that the use of surgical crisis checklists is associated with significant improvement in the management of operating-room crises in a high-fidelity simulation study. Careful preoperative evaluation, proper intraoperative management, and the use of intraoperative crisis checklists will be needed for safer perioperative care in the future. Postoperative complication is a serious public health problem: it reduces the quality of life of patients and raises medical costs. Careful management of surgical patients according to their postoperative condition is required to prevent postoperative complications. The 10-point surgical Apgar score, calculated from intraoperative estimated blood loss, lowest mean arterial pressure, and lowest heart rate, is a simple and readily available scoring system for predicting postoperative complications. It undoubtedly predicts higher-than-average risk of postoperative complications and death within 30 days of surgery. The surgical Apgar score is a bridge between proper intraoperative and postoperative care. Anesthesiologists should make an effort to reduce postoperative complications, and this score is a tool for doing so.
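As a concrete illustration of the three-variable score described above, the sketch below computes a 0-10 surgical Apgar score; the cut-points follow the commonly cited published table, but they are reproduced from memory here and should be checked against the original source before any real use.

def surgical_apgar(ebl_ml, lowest_map_mmhg, lowest_hr_bpm):
    # Estimated blood loss (mL): more blood loss, fewer points.
    ebl = 0 if ebl_ml > 1000 else (1 if ebl_ml > 600 else (2 if ebl_ml > 100 else 3))
    # Lowest mean arterial pressure (mmHg): lower MAP, fewer points.
    map_pts = 0 if lowest_map_mmhg < 40 else (1 if lowest_map_mmhg < 55 else (2 if lowest_map_mmhg < 70 else 3))
    # Lowest heart rate (bpm): tachycardia, fewer points.
    hr = 0 if lowest_hr_bpm > 85 else (1 if lowest_hr_bpm > 75 else (2 if lowest_hr_bpm > 65 else (3 if lowest_hr_bpm > 55 else 4)))
    return ebl + map_pts + hr   # 0 (worst) .. 10 (best)

print(surgical_apgar(ebl_ml=250, lowest_map_mmhg=62, lowest_hr_bpm=72))  # -> 6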
Local Observed-Score Kernel Equating
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
Progress scored in forest pest studies
[No author listed]
2007-01-01
Teaming up with co-workers from the State Forestry Administration (SFA), researchers of the CAS Institute of Zoology (IOZ) have scored encouraging progress in their studies of pheromone-based technology against the red turpentine beetle (Dendroctonus valens LeConte).
Stability of WISC-IV process scores.
Ryan, Joseph J; Umfleet, Laura Glass; Kane, Alexa
2013-01-01
Forty-three students were administered the complete Wechsler Intelligence Scale for Children-Fourth Edition on two occasions approximately 11 months apart, including the seven process components of Block Design No Time Bonus, Digit Span Forward (DSF), Digit Span Backward (DSB), Cancellation Random (CAR), Cancellation Structured (CAS), Longest Digit Span Forward (LDSF), and Longest Digit Span Backward (LDSB). Mean ages at first and second testing were 7.77 years (SD = 1.91) and 8.74 years (SD = 1.93), respectively. Mean Full-Scale IQ at initial testing was 111.63 (SD = 10.71). Process score stability coefficients ranged from .75 on DSF to .32 on CAS. Discrepancy score stabilities ranged from .45 on DSF minus DSB to .05 on CAS minus CAR. Approximately 21% of participants increased their LDSF on retest, and 16.3% showed a gain on LDSB. Caution must be exercised when interpreting process scores, and interpretation of discrepancy scores should probably be avoided.
What do educational test scores really measure?
McIntosh, James; D. Munk, Martin
Latent class Poisson count models are used to analyze a sample of Danish test score results from a cohort of individuals born in 1954-55 and tested in 1968. The procedure takes account of unobservable effects as well as excessive zeros in the data. The bulk of unobservable effects are uncorrelate...
The FAt Spondyloarthritis Spine Score (FASSS)
Pedersen, Susanne Juhl; Zhao, Zheng; Lambert, Robert Gw
2013-01-01
Studies have shown that fat lesions follow resolution of inflammation in the spine of patients with axial spondyloarthritis (SpA). Fat lesions at vertebral corners have also been shown to predict development of new syndesmophytes. Therefore, scoring of fat lesions in the spine may constitute both...
Critical Thinking: More than Test Scores
Smith, Vernon G.; Szymanski, Antonia
2013-01-01
This article is for practicing or aspiring school administrators. The demand for excellence in public education has led to an emphasis on standardized test scores. This article explores the development of a professional enhancement program designed to prepare teachers to teach higher order thinking skills. Higher order thinking is the primary…
Writing Plan Quality: Relevance to Writing Scores
Chai, Constance
2006-01-01
If writing matters, how can we improve it? This study investigated the nature of writing plan quality and its relationship to the ensuing writing scores. Data were drawn from the 1998 Provincial Learning Assessment Programme (PLAP) in Writing, which was administered to pupils in Grades 4, 7, and 10 across British Columbia, Canada. Common features…
Farneti, D; Fattori, B; Nacci, A; Mancini, V; Simonelli, M; Ruoppolo, G; Genovese, E
2014-04-01
This study evaluated the intra- and inter-rater reliability of the Pooling score (P-score) in clinical endoscopic evaluation of the severity of swallowing disorders, considering excess residue in the pharynx and larynx. The score (minimum 4, maximum 11) is obtained by summing the scores given to the site of the bolus, the amount, and the ability to control residue/bolus pooling, the latter assessed on the basis of cough, raclage, and the number of dry voluntary or reflex swallowing acts ( 5). Four judges evaluated 30 short films of pharyngeal transit of 10 solid (1/4 of a cracker), 11 creamy (1 tablespoon of jam) and 9 liquid (1 tablespoon of 5 cc of water coloured with methylene blue, 1 ml in 100 ml) boluses in 23 subjects (10 M/13 F, ages 31 to 76 years, mean age 58.56±11.76 years) with different pathologies. The films were randomly distributed on two CDs, which differed in the sequence of the films, and were given to the judges (after an explanatory session) at time 0, 24 hours later (time 1) and after 7 days (time 2). The inter- and intra-rater reliability of the P-score was calculated using the intra-class correlation coefficient (ICC; 3,k). The possibility that the consistency of boluses could affect the scoring of the films was considered. The ICC for site, amount, management and the P-score total was found to be, respectively, 0.999, 0.997, 1.00 and 0.999. Clinical evaluation of a criterion of severity of a swallowing disorder remains a crucial point in the management of patients with pathologies that predispose to complications. The P-score, derived from static and dynamic parameters, yielded a very high correlation among the scores attributed by the four judges during observations carried out at different times. Bolus consistencies did not affect the outcome of the test: the analysis of variance, performed to verify whether the scores attributed by the four judges to the selected parameters might be influenced by the different consistencies of the boluses, was not
Yao, Lihua
2012-01-01
Multidimensional computer adaptive testing (MCAT) can provide higher precision and reliability or reduce test length when compared with unidimensional CAT or with the paper-and-pencil test. This study compared five item selection procedures in the MCAT framework for both domain scores and overall scores through simulation by varying the structure…
Ossai, Peter Agbadobi Uloku
2016-01-01
This study examined the relationship between students' scores on Research Methods and Statistics, and the undergraduate project at the final year. The purpose was to find out whether students matched knowledge of research with project-writing skill. The study adopted an ex post facto correlational design. Scores on Research Methods and Statistics for…
Analysis of WAIS-IV Index Score Scatter Using Significant Deviation from the Mean Index Score
Gregoire, Jacques; Coalson, Diane L.; Zhu, Jianjun
2011-01-01
The Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) does not include verbal IQ and performance IQ scores, as provided in previous editions of the scale; rather, this edition provides comparisons among four index scores, allowing analysis of an individual's WAIS-IV performance in more discrete domains of cognitive ability. To supplement…
Multidimensional Linking for Domain Scores and Overall Scores for Nonequivalent Groups
Yao, Lihua
2011-01-01
The No Child Left Behind Act requires state assessments to report not only overall scores but also domain scores. To see the information on students' overall achievement, progress, and detailed strengths and weaknesses, and thereby identify areas for improvement in educational quality, students' performances across years or across forms need to be…
Strictly Proper Scoring Rules, Prediction, and Estimation
2005-11-01
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize
Lower bounds to the reliabilities of factor score estimators
Hessen, D.J.
2017-01-01
Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone’s factor score estimators, Bartlett’s factor score
Optimal cutting scores using a linear loss function
Linden, van der Wim J.; Mellenbergh, Gideon J.
1977-01-01
The situation is considered in which a total score on a test is used for classifying examinees into two categories: "accepted" (with scores above a cutting score on the test) and "not accepted" (with scores below the cutting score). A value on the latent variable is fixed in advance; examinees above
Effects of using a scoring guide on essay scores: generalizability theory.
Kan, Adnan
2007-12-01
This study was conducted to test the effect of task level and item consistency when two conditions, with and without the assistance of a scoring guide, were used to score essays. The use of generalizability theory was proposed as a framework for examining the effect of task variability and use of the scoring guide on achievement measures. Participants were 21 students in Grade 9 enrolled in regular Turkish language and literature classes; of these students, 11 were men and 10 were women. Ten teachers from the city served as raters. In the past, raters of essays have given varied judgements of writing quality. Utilizing decision and generalizability theories, variation in scores was evaluated using a three-way (person × rater × task) analysis of variance design. The scoring guide was beneficial in reducing variability when evaluating grammar and reading comprehension but not as helpful when assessing knowledge of concepts.
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
[No author listed]
2002-01-01
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least square methods. A simulation example shows that the corrector of maximum likelihood estimation is of higher approximating precision to the true parameters than the least square methods.
Maximum frequency of the decametric radiation from Jupiter
Barrow, C. H.; Alexander, J. K.
1980-01-01
The upper frequency limits of Jupiter's decametric radio emission are found to be essentially the same when observed from the earth or, with considerably higher sensitivity, from the Voyager spacecraft close to Jupiter. This suggests that the maximum frequency is a real cut-off corresponding to a maximum gyrofrequency of about 38-40 MHz at Jupiter. It no longer appears to be necessary to specify different cut-off frequencies for the Io and non-Io emission as the maximum frequencies are roughly the same in each case.
Rivero-Martín, M J; Prieto-Martínez, S; García-Solano, M; Montilla-Pérez, M; Tena-Martín, E; Ballesteros-García, M M
2016-06-01
The aims of this study were to introduce a paediatric early warning score (PEWS) into our daily clinical practice, to evaluate its ability to detect clinical deterioration in admitted children, and to train nursing staff to communicate the information and response effectively. An analysis was performed on the implementation of PEWS in the electronic health records of children (0-15 years) in our paediatric ward from February 2014 to September 2014. The maximum score was 6. Nursing staff reviewed scores >2; scores >3 were reviewed by both medical and nursing staff. Monitoring indicators: % of admissions with scoring; % of complete data capture; % of scores >3; % of scores >3 reviewed by medical staff; % of changes in treatment due to the warning system; and number of patients who needed Paediatric Intensive Care Unit (PICU) admission, or died, without an increased warning score. The data were collected from all patients (931) admitted. The scale was measured 7,917 times, 78.8% of them with complete data capture. Very few (1.9%) showed scores >3, and 14% of these led to changes in clinical management (intensifying treatment or new diagnostic tests). One patient (scored 2) required PICU admission. There were no deaths. Parent or nursing staff concern was registered in 80% of cases. PEWS is useful to provide a standardised assessment of clinical status in the inpatient setting, using a unique scale and implementing data capture. Because of the lack of severe complications requiring PICU admission and deaths, we will have to use other data to evaluate these scales. Copyright © 2016 SECA. Published by Elsevier España. All rights reserved.
Prehospital score for acute disease: a community-based observational study in Japan
Fujiwara Hidekazu
2007-10-01
Background: Ambulance usage in Japan has increased consistently because it is free under the national health insurance system. The introduction of refusal of ambulance transfer is being debated nationally. The purpose of the present study was to investigate the relationship between prehospital data and hospitalization outcome for acute disease patients, and to develop a simple prehospital evaluation tool using prehospital data for Japan's emergency medical service system. Methods: The subjects were 9,160 consecutive acute disease patients aged ≥ 15 years who were transferred to hospital by Kishiwada City Fire Department ambulance between July 2004 and March 2006. The relationship between prehospital data (age, systolic blood pressure, pulse rate, respiration rate, level of consciousness, SpO2 level and ability to walk) and outcome (hospitalization or non-hospitalization) was analyzed using logistic regression models. The prehospital score component of each item of prehospital data was determined by beta coefficients. Eligible patients were scored retrospectively and the distribution of outcomes was examined. For patients transported to the two main hospitals, outcome after hospitalization was also confirmed. Results: A total of 8,330 (91%) patients were retrospectively evaluated using a prehospital score with a maximum value of 14. The percentage of patients requiring hospitalization rose from 9% with score = 0 to 100% with score = 14. With a cut-off point of score ≥ 2, the sensitivity, specificity, positive predictive value and negative predictive value were 97%, 16%, 39% and 89%, respectively. Among the 6,498 patients transported to the two main hospitals, there were no deaths at scores ≤ 1 and the proportion of non-hospitalization was over 90%. The proportion of deaths increased rapidly at scores ≥ 11. Conclusion: The prehospital score could be a useful tool for deciding the refusal of ambulance transfer in Japan's emergency medical
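The component weights of the score are not given in the abstract, so the following sketch only reproduces the screening arithmetic reported above: given each patient's total score (0-14) and hospitalization outcome, evaluate a cut-off such as score ≥ 2. The data are hypothetical.

import numpy as np

def screening_metrics(scores, hospitalized, cutoff=2):
    scores = np.asarray(scores)
    y = np.asarray(hospitalized, dtype=bool)
    pred = scores >= cutoff
    tp, fp = (pred & y).sum(), (pred & ~y).sum()
    fn, tn = (~pred & y).sum(), (~pred & ~y).sum()
    return dict(sensitivity=tp / (tp + fn), specificity=tn / (tn + fp),
                ppv=tp / (tp + fp), npv=tn / (tn + fn))

# Hypothetical data: hospitalization becomes more likely as the score rises.
rng = np.random.default_rng(1)
s = rng.integers(0, 15, 5000)
h = rng.random(5000) < s / 14
print(screening_metrics(s, h, cutoff=2))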
Doppler ultrasound scoring to predict chemotherapeutic response in advanced breast cancer
Kumar, Anand; Singh, Seema; Pradhan, Satyajit; Shukla, Ram C; Ansari, Mumtaz A; Singh, Tej B; Shyam, Rohit; Gupta, Saroj
2007-01-01
Background Doppler ultrasonography (US) is increasingly being utilized as an imaging modality in breast cancer. It is used to study the vascular characteristics of the tumor. Neoadjuvant chemotherapy is the standard modality of treatment in locally advanced breast cancer. Histological examination remains the gold standard to assess the chemotherapy response. However, based on the color Doppler findings, a new scoring system that could predict histological response following chemotherapy is proposed. Methods Fifty cases of locally advanced infiltrating duct carcinoma of the breast were studied. The mean age of the patients was 44.5 years. All patients underwent clinical, Doppler and histopathological assessment followed by three cycles of CAF (Cyclophosphamide, Adriamycin and 5-Fluorouracil) chemotherapy, repeat clinical and Doppler examination and surgery. The resected specimens were examined histopathologically and histological response was correlated with Doppler findings. The Doppler characteristics of the tumor were graded as 1–4 for 50% and complete disappearance of flow signals respectively. A cumulative score was calculated and compared with histopathological response. Results were analyzed using Chi square test, sensitivity, specificity, positive and negative predictive values. Results The maximum Doppler score according to the proposed scoring system was twelve and minimum three. Higher scores corresponded with a more favorable histopathological response. Twenty four patients had complete response to chemotherapy. Sixteen of these 24 patients (66.7%) had a cumulative Doppler score more than nine. The sensitivity of cumulative score >5 was 91.7% and specificity was 38.5%. The area under the ROC curve of the cumulative score >9 was 0.72. Conclusion Doppler scoring can be accurately used to objectively predict the response to chemotherapy in patients with locally advanced breast cancer and it correlates well with histopathological response. PMID:17725837
Gianfrancesco, M A; Balzer, L; Taylor, K E; Trupin, L; Nititham, J; Seldin, M F; Singer, A W; Criswell, L A; Barcellos, L F
2016-09-01
Systemic lupus erythematosus (SLE) is a chronic autoimmune disease associated with genetic and environmental risk factors. However, the extent to which genetic risk is causally associated with disease activity is unknown. We utilized longitudinal targeted maximum likelihood estimation to estimate the causal association between a genetic risk score (GRS) comprising 41 established SLE variants and clinically important disease activity as measured by the validated Systemic Lupus Activity Questionnaire (SLAQ) in a multiethnic cohort of 942 individuals with SLE. We did not find evidence of a clinically important SLAQ score difference (>4.0) for individuals with a high GRS compared with those with a low GRS across nine time points after controlling for sex, ancestry, renal status, dialysis, disease duration, treatment, depression, smoking and education, as well as time-dependent confounding of missing visits. Individual single-nucleotide polymorphism (SNP) analyses revealed that 12 of the 41 variants were significantly associated with clinically relevant changes in SLAQ scores across time points eight and nine after controlling for multiple testing. Results based on sophisticated causal modeling of longitudinal data in a large patient cohort suggest that individual SLE risk variants may influence disease activity over time. Our findings also emphasize a role for other biological or environmental factors.
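For readers unfamiliar with the 41-variant GRS used above: a weighted genetic risk score is typically just a weighted sum of risk-allele counts. The sketch below shows that computation with hypothetical genotypes and effect sizes; the study's actual variants and weights are not reproduced here.

import numpy as np

def genetic_risk_score(genotypes, log_odds):
    """genotypes: (n_people, n_snps) risk-allele counts in {0, 1, 2};
    log_odds: per-SNP effect sizes taken from published associations."""
    return np.asarray(genotypes) @ np.asarray(log_odds)

g = np.array([[0, 1, 2], [2, 2, 1]])      # two hypothetical individuals, three SNPs
w = np.array([0.10, 0.25, 0.18])          # hypothetical per-variant log odds ratios
print(genetic_risk_score(g, w))           # one score per individual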
Distribution of Errors Reported by LOD2 LODStats Project
Hoekstra, R.J.; Groth, P.T.
Description: Results of discussion groups at the Linked Science Workshop 2013, held at the International Semantic Web Conference (http://linkedscience.org/events/lisc2013/). Participants were asked to develop matrices about how semantic web/linked data solutions can help address reproducibility/re* pro
The Application of Maximum Principle in Supply Chain Cost Optimization
Zhou Ling; Wang Jun
2013-01-01
In this paper, using the maximum principle to analyze dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for a high-tech perishable-products supply chain...
Maximum Principle for Nonlinear Cooperative Elliptic Systems on ℝ^N
LEADI Liamidi; MARCOS Aboubacar
2011-01-01
We investigate in this work necessary and sufficient conditions for having a maximum principle for a cooperative elliptic system on the whole ℝ^N. Moreover, we prove the existence of solutions by an approximation method for the considered system.
Maximum Likelihood Factor Structure of the Family Environment Scale.
Fowler, Patrick C.
1981-01-01
Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion v conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)
Multiresolution Maximum Intensity Volume Rendering by Morphological Adjunction Pyramids
Roerdink, Jos B.T.M.
2001-01-01
We describe a multiresolution extension to maximum intensity projection (MIP) volume rendering, allowing progressive refinement and perfect reconstruction. The method makes use of morphological adjunction pyramids. The pyramidal analysis and synthesis operators are composed of morphological 3-D
Changes in context and perception of maximum reaching height.
Wagman, Jeffrey B; Day, Brian M
2014-01-01
Successfully performing a given behavior requires flexibility in both perception and behavior. In particular, doing so requires perceiving whether that behavior is possible across the variety of contexts in which it might be performed. Three experiments investigated how (changes in) context (i.e., point of observation and intended reaching task) influenced perception of maximum reaching height. The results of experiment 1 showed that perceived maximum reaching height more closely reflected actual reaching ability when perceivers occupied a point of observation that was compatible with that required for the reaching task. The results of experiments 2 and 3 showed that practice perceiving maximum reaching height from a given point of observation improved perception of maximum reaching height from a different point of observation, regardless of whether such practice occurred at a compatible or incompatible point of observation. In general, such findings show bounded flexibility in perception of affordances and are thus consistent with a description of perceptual systems as smart perceptual devices.
Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)
U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy ... in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
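In the molecular-simulation setting mentioned above, a common concrete form of the idea is to reweight simulation frames as little as possible (in relative-entropy terms) so that an ensemble average matches an experimental value. The sketch below does this for a single observable by solving for one Lagrange multiplier; the single-constraint setup, the bracket for the root search, and the synthetic data are all assumptions for illustration.

import numpy as np
from scipy.optimize import brentq

def maxent_weights(obs, target):
    # Find the multiplier lam such that the reweighted average equals the
    # experimental target, with w_i proportional to exp(lam * obs_i).
    def gap(lam):
        z = lam * obs
        w = np.exp(z - z.max())          # shift for numerical stability
        w /= w.sum()
        return w @ obs - target
    lam = brentq(gap, -50.0, 50.0)       # assumes the target is attainable
    z = lam * obs
    w = np.exp(z - z.max())
    return w / w.sum()

obs = np.random.default_rng(0).normal(3.0, 1.0, 10000)   # per-frame observable
w = maxent_weights(obs, target=3.4)
print(w @ obs)    # ~3.4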
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
Maximum Photovoltaic Penetration Levels on Typical Distribution Feeders: Preprint
Hoke, A.; Butler, R.; Hambrick, J.; Kroposki, B.
2012-07-01
This paper presents simulation results for a taxonomy of typical distribution feeders with various levels of photovoltaic (PV) penetration. For each of the 16 feeders simulated, the maximum PV penetration that did not result in steady-state voltage or current violation is presented for several PV location scenarios: clustered near the feeder source, clustered near the midpoint of the feeder, clustered near the end of the feeder, randomly located, and evenly distributed. In addition, the maximum level of PV is presented for single, large PV systems at each location. Maximum PV penetration was determined by requiring that feeder voltages stay within ANSI Range A and that feeder currents stay within the ranges determined by overcurrent protection devices. Simulations were run in GridLAB-D using hourly time steps over a year with randomized load profiles based on utility data and typical meteorological year weather data. For 86% of the cases simulated, maximum PV penetration was at least 30% of peak load.
16 CFR 1505.8 - Maximum acceptable material temperatures.
2010-01-01
... Association, 155 East 44th Street, New York, NY 10017. Material Degrees C. Degrees F. Capacitors (1) (1) Class... capacitor has no marked temperature limit, the maximum acceptable temperature will be assumed to be 65...
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC GIS Inventory (aka Ramona) — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
PREDICTION OF MAXIMUM DRY DENSITY OF LOCAL GRANULAR ...
methods. A test on a soil of relatively high solid density revealed that the developed relation loses ... where, Pd max is the laboratory maximum dry ... Addis-Jinima Road Rehabilitation. ..... data sets that differ considerably in the magnitude.
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC Education | GIS Inventory — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
Solar Panel Maximum Power Point Tracker for Power Utilities
Sandeep Banik
2014-01-01
"Solar Panel Maximum Power Point Tracker for Power Utilities": as the name implies, this is a photovoltaic system that uses a photovoltaic array as the source of electrical power supply. Since every photovoltaic (PV) array has an optimum operating point, called the maximum power point, which varies depending on the insolation level and array voltage, a maximum power point tracker (MPPT) is needed to operate the PV array at its maximum power point. The objective of this thesis project is to build a photovoltaic (PV) array of 121.6 V DC (6 cells, each 20 V, 100 W) and to convert the DC voltage to single-phase 120 V, 50 Hz AC by switch-mode power converters and inverters.
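The abstract does not say which tracking algorithm is used; the most common choice is perturb-and-observe, sketched below against a toy I-V curve. The curve shape, step size, and starting voltage are illustrative assumptions; a real tracker would perturb a converter duty cycle and measure actual panel voltage and current.

def pv_power(v):
    # Toy I-V curve with open-circuit voltage near 21 V.
    i = max(0.0, 5.0 * (1 - (v / 21.0) ** 8))
    return v * i

def perturb_and_observe(v=12.0, dv=0.2, steps=200):
    p_prev = pv_power(v)
    for _ in range(steps):
        v_new = v + dv
        p_new = pv_power(v_new)
        if p_new < p_prev:      # power dropped: reverse the perturbation direction
            dv = -dv
        v, p_prev = v_new, p_new
    return v, p_prev

print(perturb_and_observe())    # settles into oscillation near the maximum power point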
A Family of Maximum SNR Filters for Noise Reduction
Huang, Gongping; Benesty, Jacob; Long, Tao
2014-01-01
This paper is devoted to the study and analysis of the maximum signal-to-noise ratio (SNR) filters for noise reduction, both in the time and short-time Fourier transform (STFT) domains, with one single microphone and with multiple microphones. In the time domain, we show that the maximum SNR filters can significantly increase the SNR but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR ... This demonstrates that the maximum SNR filters, particularly the multichannel ones, in the STFT domain may be of great practical value.
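For a time-domain flavor of the filters discussed above: the FIR filter maximizing the ratio of filtered signal power to filtered noise power, w'R_x w / w'R_v w, is the principal generalized eigenvector of the correlation-matrix pencil (R_x, R_v). The sketch below estimates both matrices from synthetic "speech" and noise; the signals, filter order, and the SNR-gain readout are illustrative assumptions, not the paper's experimental setup.

import numpy as np
from scipy.linalg import eigh, toeplitz

def corr_matrix(sig, order):
    # Toeplitz correlation matrix from the biased autocorrelation estimate.
    r = np.correlate(sig, sig, "full")[len(sig) - 1:len(sig) - 1 + order]
    return toeplitz(r / len(sig))

rng = np.random.default_rng(0)
n, order = 5000, 16
speech = np.convolve(rng.standard_normal(n), np.ones(8) / 8, "same")  # lowpass stand-in
noise = rng.standard_normal(n)
Rx, Rv = corr_matrix(speech, order), corr_matrix(noise, order)

vals, vecs = eigh(Rx, Rv)          # generalized eigenproblem Rx w = lambda Rv w
w = vecs[:, -1]                    # eigenvector with the largest SNR gain
print("output SNR gain:", vals[-1] / (np.trace(Rx) / np.trace(Rv)))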
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity in a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has greatly drawn statisticians' attention, mainly because maximum likelihood estimation is a powerful statistical method which provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
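Maximum likelihood estimates for a two-component normal mixture are usually obtained with the EM algorithm; a minimal sketch follows, with synthetic data standing in for the stock-price/rubber-price series. The initialization and iteration count are arbitrary choices.

import numpy as np
from scipy.stats import norm

def em_two_normals(x, iters=200):
    pi, mu, sd = 0.5, np.array([x.min(), x.max()]), np.array([x.std(), x.std()])
    for _ in range(iters):
        # E-step: responsibilities of component 1.
        p1 = pi * norm.pdf(x, mu[0], sd[0])
        p2 = (1 - pi) * norm.pdf(x, mu[1], sd[1])
        r = p1 / (p1 + p2)
        # M-step: weighted maximum likelihood updates.
        pi = r.mean()
        mu = np.array([(r * x).sum() / r.sum(),
                       ((1 - r) * x).sum() / (1 - r).sum()])
        sd = np.sqrt(np.array([(r * (x - mu[0]) ** 2).sum() / r.sum(),
                               ((1 - r) * (x - mu[1]) ** 2).sum() / (1 - r).sum()]))
    return pi, mu, sd

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1, 0.5, 400), rng.normal(2, 1.0, 600)])
print(em_two_normals(x))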
Lovell, D P
1999-06-01
Principal component analyses (PCA) were carried out on the tissue scores from Draize eye irritation tests on the 55 formulations and chemical ingredients included in the COLIPA Eye Irritation Validation Study. A PCA was carried out on the tissue scores 24, 48 and 72 hours after instillation of the substances. The first principal component (PC I) explained 77% of the total variation in the tissue scores and showed a high negative correlation (r = -0.971) with the scores used to derive the Modified Maximum Average Score (MMAS). The second component (PC II) explained 7% of the total variability and contrasted corneal and iris damage with conjunctival damage, as in a similar analysis carried out previously on the ECETOC databank. The third component (PC III), while only explaining about 3% of the variability, identified individuals treated with formulations that were observed to have low corneal opacity but large corneal area scores. This may represent some particular manner of scoring at the laboratory administering the Draize test or a specific effect of some formulations. A further PCA was carried out on tissue scores from observations at 1 hour to 21 days. PC I in this analysis explained 62% of the variability and there was a high negative correlation with the sum of all the tissue scores, while PC II explained 14% of the variability and contrasted damage up to 72 hours with damage after 72 hours. A number of formulations were identified with relatively low MMAS scores but tissue damage that persisted. PCA is thus shown to be a powerful method for exploring complex datasets and for the identification of outliers and subgroups. It has shown that the MMAS score captures most of the information on tissue scores in the first 72 hours following exposure, and there is unlikely to be any advantage in using individual tissue scores for comparisons with alternative tests. The relationship of the classification schemes used by three alternative methods in the COLIPA
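The analysis pattern above, PCA of a substances-by-tissue-scores matrix followed by correlating PC I with a summary score, is easy to reproduce. The sketch below uses randomly generated stand-in data, not the COLIPA scores, so the numbers it prints are not the study's results.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Rows: 55 substances; columns: cornea/iris/conjunctiva scores at 24/48/72 h.
severity = rng.random((55, 1))
tissue = severity * rng.random((55, 9)) * 4
pc = PCA(n_components=3).fit(tissue)
scores = pc.transform(tissue)
mmas_like = tissue.sum(axis=1)          # crude stand-in for an MMAS-style summary
print("explained variance ratios:", pc.explained_variance_ratio_)
print("corr(PC I, summary score):", np.corrcoef(scores[:, 0], mmas_like)[0, 1])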
On the maximum sufficient range of interstellar vessels
Cartin, Daniel
2011-01-01
This paper considers the likely maximum range of space vessels providing the basis of a mature interstellar transportation network. Using the principle of sufficiency, it is argued that this range will be less than three parsecs for the average interstellar vessel. This maximum range provides access from the Solar System to a large majority of nearby stellar systems, with total travel distances within the network not excessively greater than actual physical distance.
Efficiency at Maximum Power of Interacting Molecular Machines
Golubeva, Natalia; Imparato, Alberto
2012-01-01
We investigate the efficiency of systems of molecular motors operating at maximum power. We consider two models of kinesin motors on a microtubule: for both the simplified and the detailed model, we find that the many-body exclusion effect enhances the efficiency at maximum power of the many-motor system, with respect to the single motor case. Remarkably, we find that this effect occurs in a limited region of the system parameters, compatible with the biologically relevant range.
Filtering Additive Measurement Noise with Maximum Entropy in the Mean
Gzyl, Henryk
2007-01-01
The purpose of this note is to show how the method of maximum entropy in the mean (MEM) may be used to improve parametric estimation when the measurements are corrupted by a large level of noise. The method is developed in the context of a concrete example: estimation of the parameter of an exponential distribution. We compare the performance of our method with the Bayesian and maximum likelihood approaches.
The maximum entropy production principle: two basic questions.
Martyushev, Leonid M
2010-05-12
The overwhelming majority of maximum entropy production applications to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise available today.
A tropospheric ozone maximum over the equatorial Southern Indian Ocean
L. Zhang
2012-05-01
We examine the distribution of tropical tropospheric ozone (O3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O3 during 2005 to 2009 reveal a distinct, persistent O3 maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O3 observations from the Ozone Monitoring Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamic factors. The O3 maximum is dominated by O3 production driven by lightning nitrogen oxides (NOx) emissions, which accounts for 62% of the tropospheric column O3 in May 2006. We find the contributions from biomass burning, soil, anthropogenic and biogenic sources to the O3 maximum are rather small. O3 production in the lightning outflows from Central Africa and South America peaks in May in both regions and is directly responsible for the O3 maximum over the western ESIO, while the lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O3 maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones, followed by northward transport to the ESIO.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...
Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization
Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue
2010-01-01
This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...
Ligia J Dominguez
Strong evidence supports that dietary modifications may decrease incident type 2 diabetes mellitus (T2DM). Numerous diabetes risk models/scores have been developed, but most do not rely specifically on dietary variables or do not fully capture the overall dietary pattern. We prospectively assessed the association of a dietary-based diabetes-risk score (DDS), which integrates optimal food patterns, with the risk of developing T2DM in the SUN ("Seguimiento Universidad de Navarra") longitudinal study. We assessed 17,292 participants initially free of diabetes, followed up for a mean of 9.2 years. A validated 136-item FFQ was administered at baseline. Taking into account previous literature, the DDS positively weighted vegetables, fruit, whole cereals, nuts, coffee, low-fat dairy, fiber, PUFA, and alcohol in moderate amounts, while it negatively weighted red meat, processed meats and sugar-sweetened beverages. Energy-adjusted quintiles of each item (with the exception of moderate alcohol consumption, which received either 0 or 5 points) were used to build the DDS (maximum: 60 points). Incident T2DM was confirmed through additional detailed questionnaires and review of medical records of participants. We used Cox proportional hazards models adjusted for socio-demographic and anthropometric parameters, health-related habits, and clinical variables to estimate hazard ratios (HR) of T2DM. We observed 143 confirmed T2DM cases during follow-up. Better baseline conformity with the DDS was associated with lower incidence of T2DM (multivariable-adjusted HR for the intermediate (25-39 points) vs. low (11-24 points) category 0.43 [95% confidence interval (CI) 0.21, 0.89]; and for the high (40-60 points) vs. low category 0.32 [95% CI 0.14, 0.69]; p for linear trend: 0.019). The DDS, a simple score based exclusively on dietary components, showed a strong inverse association with incident T2DM. This score may be applicable in clinical practice to improve the dietary habits of subjects at high risk of T2DM
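The quintile-based construction described above is straightforward to implement. In the sketch below, positively weighted foods score 1-5 across energy-adjusted quintiles, negatively weighted foods score 5-1, and moderate alcohol adds 0 or 5 points; the food list is truncated and the data are random placeholders, so the maximum here differs from the paper's 60 points.

import numpy as np
import pandas as pd

POSITIVE = ["vegetables", "fruit", "whole_cereals", "nuts", "coffee"]
NEGATIVE = ["red_meat", "processed_meat", "sugar_drinks"]

def dds(df, moderate_alcohol):
    score = pd.Series(0, index=df.index)
    for col in POSITIVE:
        score += pd.qcut(df[col], 5, labels=False) + 1   # 1 (lowest) .. 5 (highest)
    for col in NEGATIVE:
        score += 5 - pd.qcut(df[col], 5, labels=False)   # 5 (lowest) .. 1 (highest)
    return score + np.where(moderate_alcohol, 5, 0)      # alcohol item: 0 or 5

rng = np.random.default_rng(0)
df = pd.DataFrame({c: rng.random(100) for c in POSITIVE + NEGATIVE})
print(dds(df, moderate_alcohol=rng.random(100) < 0.3).describe())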
The accuracy rate of Alvarado score, ultrasonography, and ...
2013-09-30
... the patients have atypical clinical and laboratory findings ... recorded on the study form for data collection. The Alvarado score was calculated as described in the literature.[5] The Alvarado score is a 10-point scoring system.
Male-female differences in Scoliosis Research Society-30 scores in adolescent idiopathic scoliosis.
Roberts, David W; Savage, Jason W; Schwartz, Daniel G; Carreon, Leah Y; Sucato, Daniel J; Sanders, James O; Richards, Benjamin Stephens; Lenke, Lawrence G; Emans, John B; Parent, Stefan; Sarwark, John F
2011-01-01
Longitudinal cohort study. To compare functional outcomes between male and female patients before and after surgery for adolescent idiopathic scoliosis (AIS). There is no clear consensus in the existing literature with respect to sex differences in functional outcomes in the surgical treatment of AIS. A prospective, consecutive, multicenter database of patients who underwent surgical correction for adolescent idiopathic scoliosis was analyzed retrospectively. All patients completed Scoliosis Research Society-30 (SRS-30) questionnaires before and 2 years after surgery. Patients with previous spine surgery were excluded. Data were collected for sex, age, Risser grade, previous bracing history, maximum preoperative Cobb angle, curve correction at 2 years, and SRS-30 domain scores. Paired sample t tests were used to compare preoperative and postoperative scores within each sex. Independent sample t tests were used to compare scores between sexes. A P value of less than 0.05 was considered statistically significant. Self-image/appearance had the greatest relative improvement. Males had better self-image/appearance scores preoperatively, better pain scores at 2 years, and better mental health and total scores both preoperatively and at 2 years. Both males and females were similarly satisfied with surgery. Males treated with surgery for AIS report better preoperative self-image, less postoperative pain, and better mental health than females. These differences may be clinically significant. For both males and females, the most beneficial effect of surgery is improved self-image/appearance. Overall, the benefits of surgery for AIS are similar for both sexes.
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus was constructed based on the maximum entropy principle by WANG Xiao-Long et al. They proved that the maximum-entropy solution of the model is exactly the frequency distribution of a population at Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg equilibrium when the genotype entropy of the population reaches its maximal possible value, and that the maximum-entropy frequency distribution is equivalent to the distribution given by the Hardy-Weinberg equilibrium law at one locus. They further assumed that the maximum-entropy frequency distribution is equivalent to all genetic equilibrium distributions. This is incorrect, however: the maximum-entropy frequency distribution is equivalent to the Hardy-Weinberg equilibrium distribution only with respect to one locus or several limited loci. The case of limited loci is proved in this paper. Finally, we discuss an example in which the maximum entropy principle is not equivalent to other genetic equilibria.
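A minimal numerical check of the one-locus claim, assuming the entropy is taken over ordered allele pairs (so the heterozygote carries its multiplicity of two): maximizing genotype entropy subject to a fixed allele frequency recovers the Hardy-Weinberg proportions p², 2pq, q².

```python
import numpy as np
from scipy.optimize import minimize

# One biallelic locus, ordered allele pairs (AA, Aa, aA, aa).
# Maximize genotype entropy subject to a fixed allele frequency p of A.
p = 0.3

def neg_entropy(y):
    y = np.clip(y, 1e-12, 1.0)
    return np.sum(y * np.log(y))

cons = (
    {"type": "eq", "fun": lambda y: y.sum() - 1.0},
    {"type": "eq", "fun": lambda y: y[0] + y[1] - p},  # maternal allele is A
    {"type": "eq", "fun": lambda y: y[0] + y[2] - p},  # paternal allele is A
)
res = minimize(neg_entropy, x0=np.full(4, 0.25), bounds=[(0, 1)] * 4,
               constraints=cons)
f_AA, f_Aa, f_aa = res.x[0], res.x[1] + res.x[2], res.x[3]
print([f_AA, f_Aa, f_aa])                      # ~ [p**2, 2*p*q, q**2]
print([p**2, 2*p*(1-p), (1-p)**2])             # Hardy-Weinberg proportions
```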
Application of decision trees in credit scoring
Ljiljanka Kvesić
2013-12-01
Full Text Available Banks are particularly exposed to credit risk due to the nature of their operations. Inadequate assessment of the borrower directly causes losses. The financial crisis the global economy is still going through has clearly shown what kind of problems can arise from an inadequate credit policy. Thus, the primary task of bank managers is to minimise credit risk. Credit scoring models were developed to support managers in assessing the creditworthiness of borrowers. This paper presents the decision tree based on exhaustive CHAID algorithm as one such model. Since the application of credit scoring models has not been adequately explored in the Croatian banking theory and practice, this paper aims not only to determine the characteristics that are crucial for predicting default, but also to highlight the importance of a quantitative approach in assessing the creditworthiness of borrowers.
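A hedged sketch of tree-based credit scoring on toy data: scikit-learn ships CART rather than the exhaustive CHAID algorithm used in the paper, so CART stands in here, and all feature names and the default rule are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy borrower data; features and labels are simulated for illustration only.
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.normal(40_000, 15_000, 2000),   # income
    rng.integers(0, 10, 2000),          # years at current job
    rng.uniform(0.0, 1.0, 2000),        # credit utilisation ratio
])
y = (X[:, 2] + rng.normal(0, 0.2, 2000) > 0.8).astype(int)  # 1 = default

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X_tr, y_tr)
print(tree.score(X_te, y_te))           # holdout accuracy
print(export_text(tree, feature_names=["income", "job_years", "utilisation"]))
```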
Sleep scoring using artificial neural networks.
Ronzhina, Marina; Janoušek, Oto; Kolářová, Jana; Nováková, Marie; Honzík, Petr; Provazník, Ivo
2012-06-01
Rapid development of computer technologies has led to the intensive automation of many processes traditionally performed by human experts. One sphere characterized by the introduction of new high-intelligence technologies substituting for analysis performed by humans is sleep scoring. Sleep scoring is a classification task and can be solved - alongside other classification methods - by use of artificial neural networks (ANNs). ANNs are parallel adaptive systems suitable for solving non-linear problems. Using ANNs for automatic sleep scoring is especially promising because new ANN learning algorithms allow faster classification without decreasing performance. Both appropriate preparation of the training data and selection of the ANN model make it possible to perform effective and correct recognition of the relevant sleep stages. Such an approach is highly topical, given that no automatic scorer utilizing ANN technology is available at present.
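A minimal sketch of ANN-based sleep-stage classification, assuming hypothetical spectral features per 30-second epoch and random labels; it shows the pipeline shape, not clinical performance.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy example: classify 30-s epochs into sleep stages from EEG band powers.
# Feature layout and labels are hypothetical (random), so accuracy ~ chance.
rng = np.random.default_rng(2)
X = rng.normal(size=(600, 6))           # e.g. delta..gamma band powers per epoch
y = rng.integers(0, 5, 600)             # 0=Wake, 1=N1, 2=N2, 3=N3, 4=REM

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16),
                                  max_iter=500, random_state=0))
clf.fit(X[:500], y[:500])
print(clf.score(X[500:], y[500:]))      # random features => chance level
```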
Shower reconstruction in TUNKA-HiSCORE
Porelli, Andrea; Wischnewski, Ralf [DESY-Zeuthen, Platanenallee 6, 15738 Zeuthen (Germany)
2015-07-01
The Tunka-HiSCORE detector is a non-imaging wide-angle EAS Cherenkov array designed as an alternative technology for gamma-ray physics above 10 TeV and for studying the spectrum and composition of cosmic rays above 100 TeV. An engineering array with nine stations (HiS-9) was deployed in October 2013 on the site of the Tunka experiment in Russia. In November 2014, 20 more HiSCORE stations were installed, covering a total array area of 0.24 square km. We describe the detector setup and the role of precision time measurement, and give results from the innovative WhiteRabbit time synchronization technology. Results of air shower reconstruction are presented and compared with MC simulations for both the HiS-9 and the HiS-29 detector arrays.
Right tail increasing dependence between scores
Fernández, M.; García, Jesús E.; González-López, V. A.; Romano, N.
2017-07-01
In this paper we investigate the behavior of the conditional probability Prob(U > u|V > v) for two records coming from students of an undergraduate course, where U is the score in calculus I, scaled to [0, 1], and V is the score in physics, scaled to [0, 1]; the physics subject is part of the admission test of the university. For purposes of comparison, we consider two different undergraduate courses, electrical engineering and mechanical engineering, over nine years, from 2003 to 2011. Through a Bayesian perspective we estimate Prob(U > u|V > v) year by year and course by course. We conclude that U is right tail increasing in V, in both courses and for all the years. Moreover, over these nine years, we observe different ranges of variability for the estimated probabilities of electrical engineering when compared to those of mechanical engineering.
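The conditional tail probability itself is easy to estimate empirically; a minimal sketch on simulated positively dependent scores (the Bayesian estimation of the paper is not reproduced here):

```python
import numpy as np

# Empirical estimate of Prob(U > u | V > v) from paired score samples.
# U is right tail increasing (RTI) in V if this probability is
# non-decreasing in v for every fixed u.
rng = np.random.default_rng(3)
v = rng.uniform(size=5000)                                  # physics score
u = np.clip(0.6 * v + 0.4 * rng.uniform(size=5000), 0, 1)   # calculus score

def cond_tail(u_s, v_s, u0, v0):
    above_v = v_s > v0
    return np.mean(u_s[above_v] > u0) if above_v.any() else np.nan

for v0 in (0.2, 0.5, 0.8):
    print(v0, round(cond_tail(u, v, 0.5, v0), 3))  # increases with v0 => RTI
```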
Soetomo score: score model in early identification of acute haemorrhagic stroke
Moh Hasan Machfoed
2016-06-01
Full Text Available Aim of the study: Under financial or facility constraints on brain imaging, a score model can be used to predict the occurrence of acute haemorrhagic stroke. Accordingly, this study attempts to develop a new score model, called the Soetomo score. Material and methods: The researchers performed a cross-sectional study of 176 acute stroke patients with onset of ≤24 hours who visited the emergency unit of Dr. Soetomo Hospital from July 14th to December 14th, 2014. The diagnosis of haemorrhagic stroke was confirmed by head computed tomography scan. There were seven predictors of haemorrhagic stroke, which were analysed by using bivariate and multivariate analyses. Furthermore, a multiple discriminant analysis resulted in an equation for the Soetomo score model. The receiver operating characteristic procedure yielded the values of the area under the curve and the intersection point identifying haemorrhagic stroke. Afterward, the diagnostic test value was determined. Results: The equation of the Soetomo score model was (3 × loss of consciousness) + (3.5 × headache) + (4 × vomiting) − 4.5. The area under the curve value of this score was 88.5% (95% confidence interval = 83.3–93.7%). At a Soetomo score model value of ≥−0.75, the score reached a sensitivity of 82.9%, specificity of 83%, positive predictive value of 78.8%, negative predictive value of 86.5%, positive likelihood ratio of 4.88, negative likelihood ratio of 0.21, false negative rate of 17.1%, false positive rate of 17%, and accuracy of 83%. Conclusions: The Soetomo score model value of ≥−0.75 can identify acute haemorrhagic stroke properly under the financial or facility constraints of brain imaging.
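A direct transcription of the published score and cut-off, as a minimal sketch; the three inputs are binary indicators (1 = present, 0 = absent) as described in the abstract.

```python
def soetomo_score(loss_of_consciousness: int, headache: int, vomiting: int) -> float:
    # (3 x LOC) + (3.5 x headache) + (4 x vomiting) - 4.5, per the abstract
    return 3.0 * loss_of_consciousness + 3.5 * headache + 4.0 * vomiting - 4.5

def likely_haemorrhagic(score: float, cutoff: float = -0.75) -> bool:
    # Scores at or above the reported cut-off suggest haemorrhagic stroke.
    return score >= cutoff

s = soetomo_score(loss_of_consciousness=1, headache=0, vomiting=1)
print(s, likely_haemorrhagic(s))   # 2.5 True
```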
Malnutrition-Inflammation Score in Hemodialysis Patients
Behrooz Ebrahimzadehkor; Atamohammad Dorri; Abdolhamed Yapan-Gharavi
2014-01-01
Background: Malnutrition is a prevalent complication in patients on maintenance hemodialysis. The malnutrition-inflammation score (MIS), a comprehensive nutritional assessment tool, was used as the reference standard to examine protein-energy wasting (PEW) and inflammation in hemodialysis patients. Materials and Methods: In this descriptive-analytical study, 48 hemodialysis patients were selected by random sampling. All the patients were interviewed and the MIS of each patient was recorded. T...
North Korean refugee doctors' preliminary examination scores
Sung Uk Chae
2016-12-01
Full Text Available Purpose Although there have been studies emphasizing the re-education of North Korean (NK) doctors for the post-unification of the Korean Peninsula, no study on the content and scope of such re-education has yet been conducted. The researchers intended to set the content and scope of re-education by a comparative analysis of the scores of the preliminary examination, which is comparable to the Korean Medical Licensing Examination (KMLE). Methods The scores of the first and second preliminary exams were analyzed by subject using the Wilcoxon signed rank test. The passing status of the group of NK doctors for the KMLE over the recent 3 years was investigated. The multiple-choice-question (MCQ) items for which the difficulty indexes of NK doctors were lower than those of South Korean (SK) medical students by two times the standard deviation of the scores of SK medical students were selected to investigate the relevant reasons. Results The average scores of nearly all subjects improved in the second exam compared with the first exam. The passing rate of the group of NK doctors was 75%. The number of MCQ items for which the difficulty indexes of NK doctors were lower than those of SK medical students was 51 (6.38%). NK doctors' lack of understanding of Diagnostic Techniques and Procedures, Therapeutics, Prenatal Care, and Managed Care Programs was suggested as the possible reason. Conclusion The education of integrated courses focusing on Diagnostic Techniques and Procedures and Therapeutics, and apprenticeship-style training for clinical practice of core subjects, are needed. Special lectures on Preventive Medicine are also likely to be required.
MODELING CREDIT RISK THROUGH CREDIT SCORING
Adrian Cantemir CALIN; Oana Cristina POPOVICI
2014-01-01
Credit risk governs all financial transactions and it is defined as the risk of suffering a loss due to certain shifts in the credit quality of a counterpart. Credit risk literature gravitates around two main modeling approaches: the structural approach and the reduced form approach. In addition to these perspectives, credit risk assessment has been conducted through a series of techniques such as credit scoring models, which form the traditional approach. This paper examines the evolution of...
Credit Scoring Problem Based on Regression Analysis
Khassawneh, Bashar Suhil Jad Allah
2014-01-01
ABSTRACT: This thesis provides an explanatory introduction to the regression models of data mining and contains basic definitions of key terms in the linear, multiple and logistic regression models. Meanwhile, the aim of this study is to illustrate fitting models for the credit scoring problem using simple linear, multiple linear and logistic regression models and also to analyze the found model functions by statistical tools. Keywords: Data mining, linear regression, logistic regression....
High throughput sample processing and automated scoring
Gunnar eBrunborg
2014-10-01
Full Text Available The comet assay is a sensitive and versatile method for assessing DNA damage in cells. In the traditional version of the assay, there are many manual steps involved and few samples can be treated in one experiment. High throughput modifications have been developed during recent years, and they are reviewed and discussed here. These modifications include accelerated scoring of comets; other important elements that have been studied and adapted to high throughput are cultivation and manipulation of cells or tissues before and after exposure, and freezing of treated samples until comet analysis and scoring. High throughput methods save time and money, but they are useful also for other reasons: large-scale experiments may be performed which are otherwise not practicable (e.g., analysis of many organs from exposed animals, and human biomonitoring studies), and automation gives more uniform sample treatment and less dependence on operator performance. The high throughput modifications now available vary widely in their versatility, capacity, complexity and costs. The bottleneck for further increase of throughput appears to be the scoring.
Modelling the predictive performance of credit scoring
Shi-Wei Shen
2013-02-01
Full Text Available Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and also macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study was that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics during the examination of the robustness of the predictive power of these factors.
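A minimal sketch of measuring the predictive performance of a scoring model, with hypothetical predictors standing in for the TCRI and firm-level variables; the simulated data and coefficients are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Simulate defaults driven by a risk index (lower = safer) and asset growth.
rng = np.random.default_rng(4)
risk_index = rng.integers(1, 10, 3000)
asset_growth = rng.normal(0.05, 0.1, 3000)
logit = -4.0 + 0.5 * risk_index - 3.0 * asset_growth
default = rng.uniform(size=3000) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([risk_index, asset_growth])
X_tr, X_te, y_tr, y_te = train_test_split(X, default, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
# Area under the ROC curve as the predictive-performance summary.
print(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```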
Scoring ordinal variables for constructing composite indicators
Marica Manisera
2013-05-01
Full Text Available In order to provide composite indicators of latent variables, for example of customer satisfaction, it is opportune to identify the structure of the latent variable, in terms of the assignment of items to the subscales defining the latent variable. Adopting the reflective model, the impact of four different methods of scoring ordinal variables on the identification of the true structure of latent variables is investigated. A simulation study composed of 5 steps is conducted: (1) simulation of population data with continuous variables measuring a two-dimensional latent variable with known structure; (2) drawing of a number of random samples; (3) discretization of the continuous variables according to different distributional forms; (4) quantification of the ordinal variables obtained in step (3) according to different methods; (5) construction of composite indicators and verification of the correct assignment of variables to subscales by the multiple group method and factor analysis. Results show that the considered scoring methods have similar performances in assigning items to subscales, and that, when the latent variable is multinormal, the distributional form of the observed ordinal variables is not determinant in suggesting the best scoring method to use.
Quality scores for 32,000 genomes
Land, Miriam L.; Hyatt, Doug; Jun, Se-Ran;
2014-01-01
Background: More than 80% of the microbial genomes in GenBank are of 'draft' quality (12,553 draft vs. 2,679 finished, as of October, 2013). We have examined all the microbial DNA sequences available for complete, draft, and Sequence Read Archive genomes in GenBank as well as three other major public databases, and assigned quality scores for more than 30,000 prokaryotic genome sequences. Results: Scores were assigned using four categories: the completeness of the assembly, the presence of full-length rRNA genes, tRNA composition and the presence of a set of 102 conserved genes in prokaryotes ... or not applicable. The scores highlighted organisms for which commonly used tools do not perform well. This information can be used to improve tools and to serve a broad group of users as more diverse organisms are sequenced. Unexpectedly, the comparison of predicted tRNAs across 15,000 high quality genomes showed...
Validation of a new scoring system: Rapid assessment faecal incontinence score
Fernando de la Portilla; Arantxa Calero-Lillo; Rosa M Jiménez-Rodríguez; Maria L Reyes; Manuela Segovia-González; María Victoria Maestre; Ana M García-Cabrera
2015-01-01
AIM: To implement a quick and simple test, the rapid assessment faecal incontinence score (RAFIS), and show its reliability and validity. METHODS: From March 2008 through March 2010, we evaluated a total of 261 consecutive patients, including 53 patients with faecal incontinence. Demographic and comorbidity information was collected. In a single visit, patients were administered the RAFIS. The results obtained with the new score were compared with those of both the Wexner score and the faecal incontinence quality of life scale (FIQL) questionnaire. The patient completed the test without influence from the surgeon; the role of the surgeon was to explain the meaning of each section and how to fill it in. Reliability of the RAFIS score was measured using intra-observer agreement and Cronbach's alpha (internal consistency) coefficient. Multivariate analysis of the main components within the different scores was performed in order to determine whether all the scores measured the same factor and to conclude whether the information could be encompassed in a single factor. A sample size of 50 patients with faecal incontinence was estimated to be enough to detect a correlation of 0.55 or better at the 5% level of significance with 80% power. RESULTS: We analysed the results obtained by 53 consecutive patients with faecal incontinence (mean age 61.55 ± 12.49 years) in the three scoring systems. A total of 208 healthy volunteers (mean age 58.41 ± 18.41 years) without faecal incontinence were included in the study as negative controls. Pearson's correlation coefficient between "state" and "leaks" was excellent (r = 0.92, P < 0.005). Internal consistency in the comparison of "state" and "leaks" also yielded excellent correlation (Cronbach's α = 0.93). Results in each score were compared using regression analysis and a correlation value of r = 0.98 was obtained with the Wexner score. As regards the FIQL questionnaire, the values of "r" for the different subscales of the questionnaire were: "lifestyle" r
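Cronbach's alpha, used above as the internal-consistency measure, is simple to compute; a minimal sketch on toy sub-scores (the variable names mirror the abstract but the data are simulated):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Two strongly correlated sub-scores ("state" and "leaks") give a high alpha.
rng = np.random.default_rng(5)
state = rng.integers(0, 10, 60).astype(float)
leaks = state + rng.normal(0, 1, 60)
print(round(cronbach_alpha(np.column_stack([state, leaks])), 2))
```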
PASI and PQOL-12 score in psoriasis : Is there any correlation?
Vikas Shankar
2011-01-01
Full Text Available Background: Psoriasis, a common papulo-squamous disorder of the skin, is universal in occurrence and may interfere adversely with the quality of life. Whether the extent of the disease has any bearing upon the patients' psychology has not been studied much in this part of the world. Aims: The objective of this hospital-based cross-sectional study was to assess the disease severity objectively using the Psoriasis Area and Severity Index (PASI) score and the quality of life by the Psoriasis quality-of-life questionnaire-12 (PQOL-12), and to draw a correlation between them, if any. Materials and Methods: The PASI score denotes an objective method of scoring the severity of psoriasis, reflecting not only the body surface area but also erythema, induration and scaling. The PQOL-12 represents a 12-item self-administered, disease-specific psychometric instrument created specifically to assess quality-of-life issues that are most important to psoriasis patients. PASI and PQOL-12 scores were calculated for each patient to objectively assess their disease severity and quality of life. Results: In total, 34 psoriasis patients (16 males, 18 females), of age ranging from 8 to 55 years, were studied. Maximum and minimum PASI scores were 32.8 and 0.8, respectively, whereas maximum and minimum PQOL-12 scores were 120 and 4, respectively. PASI and PQOL-12 values showed minimal positive correlation (r = +0.422). Conclusion: Disease severity of psoriasis had no direct reflection upon quality of life. Limited psoriasis on a visible area may also have a greater impact on mental health.
Kuracina Richard
2015-06-01
Full Text Available The article deals with the measurement of the maximum explosion pressure and the maximum rate of explosion pressure rise of a wood dust cloud. The measurements were carried out according to STN EN 14034-1+A1:2011 Determination of explosion characteristics of dust clouds, Part 1: Determination of the maximum explosion pressure pmax of dust clouds, and according to STN EN 14034-2+A1:2012 Determination of explosion characteristics of dust clouds, Part 2: Determination of the maximum rate of explosion pressure rise (dp/dt)max of dust clouds. The wood dust cloud in the chamber is generated mechanically. The testing of explosions of wood dust clouds showed that the maximum pressure was reached at a concentration of 450 g/m3, with a value of 7.95 bar. The fastest rise of pressure was also observed at a concentration of 450 g/m3, with a value of 68 bar/s.
Zaki Noah Hasan
2010-09-01
Full Text Available The Guillain-Barré syndrome (GBS is an acute post-infective autoimmune polyradiculoneuropathy, it is the commonest peripheral neuropathy causing respiratory failure. The aim of the study is to use the New Combined Scoring System in anticipating respiratory failure in order to perform elective measures without waiting for emergency situations to occur.
Patients and methods: Fifty patients with GBS were studied. Eight clinical parameters (including progression of patients to maximum weakness, respiratory rate/minute, breath holding count (the number of digits the patient can count while holding his breath), presence of facial muscle weakness (unilateral or bilateral), presence of weakness of the bulbar muscles, weakness of the neck flexor muscles, and limb weakness) were assessed for each patient, and a certain score was given to each parameter, a designed combined score being constructed by taking into consideration all the above-mentioned clinical parameters. Results and discussion: Fifteen patients (30%) enrolled in our study developed respiratory failure. There was a highly significant statistical association between the development of respiratory failure and the lower grades of the bulbar muscle weakness score, breath holding count score, neck muscle weakness score, lower limb and upper limb weakness scores, and respiratory rate score, and a total sum score above 16 out of 30 (p-value = 0.000). No significant statistical difference was found regarding the progression to maximum weakness (p-value = 0.675) and facial muscle weakness (p-value = 0.482).
Conclusion: Patients who obtain a combined score above 16/30 are at great risk of developing respiratory failure.
A Comparison of Sleep Scored from Electroencephalography to Sleep Scored by Wrist Actigraphy
1993-09-01
To determine how much rest soldiers receive, various methods of monitoring activity have been used. One unobtrusive method is to use wrist activity monitors.
MELD-XI Scores Correlate with Post-Fontan Hepatic Biopsy Fibrosis Scores.
Evans, William N; Acherman, Ruben J; Ciccolo, Michael L; Carrillo, Sergio A; Galindo, Alvaro; Rothman, Abraham; Winn, Brody J; Yumiaco, Noel S; Restrepo, Humberto
2016-10-01
We tested the hypothesis that MELD-XI values correlated with hepatic total fibrosis scores obtained in 70 predominately stable, post-Fontan patients that underwent elective cardiac catheterization. We found a statistically significant correlation between MELD-XI values and total fibrosis scores (p = 0.003). Thus, serial MELD-XI values may be an additional useful clinical parameter for follow-up care in post-Fontan patients.
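The abstract does not restate the MELD-XI formula; a sketch using the definition commonly used in the literature (Heuman et al., 2007), with the usual floor of 1.0 on the laboratory values, both of which are assumptions carried over from that literature rather than from this article:

```python
import math

def meld_xi(bilirubin_mg_dl: float, creatinine_mg_dl: float) -> float:
    """MELD excluding INR: 5.11*ln(bili) + 11.76*ln(creat) + 9.44 (mg/dL)."""
    b = max(bilirubin_mg_dl, 1.0)   # labs are typically floored at 1.0
    c = max(creatinine_mg_dl, 1.0)
    return 5.11 * math.log(b) + 11.76 * math.log(c) + 9.44

print(round(meld_xi(2.0, 1.3), 1))  # example patient values
```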
Bonny, S P F; Pethick, D W; Legrand, I; Wierzbicki, J; Allen, P; Farmer, L J; Polkinghorne, R J; Hocquette, J-F; Gardner, G E
2016-04-01
Ossification score and animal age are both used as proxies for maturity-related collagen crosslinking and consequently decreases in beef tenderness. Ossification score is strongly influenced by the hormonal status of the animal and may therefore better reflect physiological maturity and consequently eating quality. As part of a broader cross-European study, local consumers scored 18 different muscle types cooked in three ways from 482 carcasses with ages ranging from 590 to 6135 days and ossification scores ranging from 110 to 590. The data were studied across three different maturity ranges; the complete range of maturities, a lesser range and a more mature range. The lesser maturity group consisted of carcasses having either an ossification score of 200 or less or an age of 987 days or less with the remainder in the greater maturity group. The three different maturity ranges were analysed separately with a linear mixed effects model. Across all the data, and for the greater maturity group, animal age had a greater magnitude of effect on eating quality than ossification score. This is likely due to a loss of sensitivity in mature carcasses where ossification approached and even reached the maximum value. In contrast, age had no relationship with eating quality for the lesser maturity group, leaving ossification score as the more appropriate measure. Therefore ossification score is more appropriate for most commercial beef carcasses, however it is inadequate for carcasses with greater maturity such as cull cows. Both measures may therefore be required in models to predict eating quality over populations with a wide range in maturity.
Cederbye, Camilla Natasha; Palshof, Jesper Andreas; Hansen, Tine Plato
2016-01-01
and the cytoplasmic signal. Intra-tumor heterogeneity in ABCG2 immunoreactivity was observed; however, statistical analyses of tissue microarrays (TMAs) and the corresponding whole sections from primary tumors of 57 metastatic CRC patients revealed a strong positive correlation between maximum TMA scores and whole...
Individual Module Maximum Power Point Tracking for Thermoelectric Generator Systems
Vadstrup, Casper; Schaltz, Erik; Chen, Min
2013-07-01
In a thermoelectric generator (TEG) system the DC/DC converter is under the control of a maximum power point tracker which ensures that the TEG system outputs the maximum possible power to the load. However, if the conditions, e.g., temperature, health, etc., of the TEG modules are different, each TEG module will not produce its maximum power. If each TEG module is controlled individually, each TEG module can be operated at its maximum power point and the TEG system output power will therefore be higher. In this work a power converter based on noninverting buck-boost converters capable of handling four TEG modules is presented. It is shown that, when each module in the TEG system is operated under individual maximum power point tracking, the system output power for this specific application can be increased by up to 8.4% relative to the situation when the modules are connected in series and 16.7% relative to the situation when the modules are connected in parallel.
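A toy illustration of why individual tracking beats a series string under mismatch, assuming ideal Thevenin-style TEG modules with hypothetical parameters (the converter electronics are abstracted away):

```python
# Maximum power transfer for a Thevenin source is V**2 / (4*R); a series
# string is forced to share one current, so mismatched modules deliver
# less than the sum of their individual optima.
modules = [(4.0, 1.0), (3.0, 1.5)]            # (V_oc [V], R_int [ohm]) per module

p_individual = sum(v**2 / (4 * r) for v, r in modules)

v_s = sum(v for v, _ in modules)              # series: voltages and
r_s = sum(r for _, r in modules)              # resistances add
p_series = v_s**2 / (4 * r_s)

print(round(p_individual, 3), round(p_series, 3))  # 5.5 W vs 4.9 W here
```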
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.
2013-10-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive one. We also study the size dependence of the efficiency at maximum power. Interestingly, we find that the efficiency at maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)].
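For reference, a hedged sketch of the quantities involved: with $\eta_C = 1 - T_c/T_h$ the Carnot efficiency, the low-dissipation analysis of Esposito et al. (cited above) brackets the efficiency at maximum power $\eta^{*}$, and the upper limit $\eta_C/(2-\eta_C)$ is the proposed universal bound the abstract refers to; the Curzon-Ahlborn value $\eta_{\mathrm{CA}}$ sits between the bounds.

```latex
\eta_{\mathrm{CA}} = 1 - \sqrt{\frac{T_c}{T_h}}, \qquad
\frac{\eta_C}{2} \;\le\; \eta^{*} \;\le\; \frac{\eta_C}{2 - \eta_C}.
```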
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual-level data on all Swedish and Danish centenarians born from 1870 to 1901; in total, 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at age 103 for men and age 107 for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years.
Predicting species' maximum dispersal distances from simple plant traits.
Tamme, Riin; Götzenberger, Lars; Zobel, Martin; Bullock, James M; Hooftman, Danny A P; Kaasik, Ants; Pärtel, Meelis
2014-02-01
Many studies have shown plant species' dispersal distances to be strongly related to life-history traits, but how well different traits can predict dispersal distances is not yet known. We used cross-validation techniques and a global data set (576 plant species) to measure the predictive power of simple plant traits to estimate species' maximum dispersal distances. Including dispersal syndrome (wind, animal, ant, ballistic, and no special syndrome), growth form (tree, shrub, herb), seed mass, seed release height, and terminal velocity in different combinations as explanatory variables we constructed models to explain variation in measured maximum dispersal distances and evaluated their power to predict maximum dispersal distances. Predictions are more accurate, but also limited to a particular set of species, if data on more specific traits, such as terminal velocity, are available. The best model (R2 = 0.60) included dispersal syndrome, growth form, and terminal velocity as fixed effects. Reasonable predictions of maximum dispersal distance (R2 = 0.53) are also possible when using only the simplest and most commonly measured traits; dispersal syndrome and growth form together with species taxonomy data. We provide a function (dispeRsal) to be run in the software package R. This enables researchers to estimate maximum dispersal distances with confidence intervals for plant species using measured traits as predictors. Easily obtainable trait data, such as dispersal syndrome (inferred from seed morphology) and growth form, enable predictions to be made for a large number of species.
Prediction of three dimensional maximum isometric neck strength.
Fice, Jason B; Siegmund, Gunter P; Blouin, Jean-Sébastien
2014-09-01
We measured maximum isometric neck strength under combinations of flexion/extension, lateral bending and axial rotation to determine whether neck strength in three dimensions (3D) can be predicted from principal axes strength. This would allow biomechanical modelers to validate their neck models across many directions using only principal axis strength data. Maximum isometric neck moments were measured in 9 male volunteers (29±9 years) for 17 directions. The 3D moments were normalized by the principal axis moments, and compared to unity for all directions tested. Finally, each subject's maximum principal axis moments were used to predict their resultant moment in the off-axis directions. Maximum moments were 30±6 N m in flexion, 32±9 N m in lateral bending, 51±11 N m in extension, and 13±5 N m in axial rotation. The normalized 3D moments were not significantly different from unity (95% confidence interval contained one), except for three directions that combined ipsilateral axial rotation and lateral bending; in these directions the normalized moments exceeded one. Predicted resultant moments compared well to the actual measured values (r2=0.88). Despite exceeding unity, the normalized moments were consistent across subjects to allow prediction of maximum 3D neck strength using principal axes neck strength.
Scoring function to predict solubility mutagenesis
Deutsch Christopher
2010-10-01
Full Text Available Abstract Background Mutagenesis is commonly used to engineer proteins with desirable properties not present in the wild type (WT) protein, such as increased or decreased stability, reactivity, or solubility. Experimentalists often have to choose a small subset of mutations from a large number of candidates to obtain the desired change, and computational techniques are invaluable for making the choices. While several such methods have been proposed to predict stability and reactivity mutagenesis, solubility has not received much attention. Results We use concepts from computational geometry to define a three-body scoring function that predicts the change in protein solubility due to mutations. The scoring function captures both sequence and structure information. By exploring the literature, we have assembled a substantial database of 137 single- and multiple-point solubility mutations. Our database is the largest such collection with structural information known so far. We optimize the scoring function using linear programming (LP) methods to derive its weights based on training. Starting with default values of 1, we find weights in the range [0,2] so that predictions of increase or decrease in solubility are optimized. We compare the LP method to the standard machine learning techniques of support vector machines (SVM) and the Lasso. Using statistics for leave-one-out (LOO), 10-fold, and 3-fold cross-validations (CV) for training and prediction, we demonstrate that the LP method performs the best overall. For the LOOCV, the LP method has an overall accuracy of 81%. Availability Executables of programs, tables of weights, and datasets of mutants are available from the following web page: http://www.wsu.edu/~kbala/OptSolMut.html.
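A minimal sketch of the LP idea on toy data (the real feature set and constraints differ): find non-negative weights, capped at 2 as in the abstract, such that the sign of the weighted score matches the observed solubility change, with slack variables absorbing violations.

```python
import numpy as np
from scipy.optimize import linprog

# Toy mutants: feature rows X, observed solubility change s in {-1, +1}.
rng = np.random.default_rng(6)
X = rng.normal(size=(40, 5))                  # 40 mutants, 5 geometric features
w_true = np.array([1.2, 0.0, 2.0, 0.5, 1.5])
s = np.sign(X @ w_true + rng.normal(0, 0.1, 40))

n, d = X.shape
# Variables: [w (d), slack (n)]; minimise total slack subject to
# s_i * (x_i @ w) >= 1 - slack_i,  0 <= w <= 2,  slack >= 0.
c = np.concatenate([np.zeros(d), np.ones(n)])
A_ub = np.hstack([-(s[:, None] * X), -np.eye(n)])   # -s_i x_i w - slack_i <= -1
b_ub = -np.ones(n)
bounds = [(0, 2)] * d + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x[:d].round(2))                     # fitted weights
```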
Best waveform score for diagnosing keratoconus
Allan Luz
2013-12-01
Full Text Available PURPOSE: To test whether corneal hysteresis (CH) and corneal resistance factor (CRF) can discriminate between keratoconus and normal eyes and to evaluate whether the averages of two consecutive measurements perform differently from the one with the best waveform score (WS) for diagnosing keratoconus. METHODS: ORA measurements for one eye per individual were selected randomly from 53 normal patients and from 27 patients with keratoconus. Two groups were considered: the average (CH-Avg, CRF-Avg) and best waveform score (CH-WS, CRF-WS) groups. The Mann-Whitney U-test was used to evaluate whether the variables had similar distributions in the normal and keratoconus groups. Receiver operating characteristic (ROC) curves were calculated for each parameter to assess the efficacy for diagnosing keratoconus, and the AUROCs obtained for each variable were compared pairwise using the Hanley-McNeil test. RESULTS: The CH-Avg, CRF-Avg, CH-WS and CRF-WS differed significantly between the normal and keratoconus groups (p<0.001). The areas under the ROC curve (AUROC) for CH-Avg, CRF-Avg, CH-WS, and CRF-WS were 0.824, 0.873, 0.891, and 0.931, respectively. CH-WS and CRF-WS had significantly better AUROCs than CH-Avg and CRF-Avg, respectively (p=0.001 and 0.002). CONCLUSION: The analysis of the biomechanical properties of the cornea with the ORA method has proved to be an important aid in the diagnosis of keratoconus, regardless of the method used. The best waveform score (WS) measurements were superior to the average of consecutive ORA measurements for diagnosing keratoconus.
Predicting Maximum Sunspot Number in Solar Cycle 24
Nipa J Bhatt; Rajmal Jain; Malini Aggarwal
2009-03-01
A few prediction methods have been developed based on the precursor technique which is found to be successful for forecasting the solar activity. Considering the geomagnetic activity aa indices during the descending phase of the preceding solar cycle as the precursor, we predict the maximum amplitude of annual mean sunspot number in cycle 24 to be 111 ± 21. This suggests that the maximum amplitude of the upcoming cycle 24 will be less than cycles 21–22. Further, we have estimated the annual mean geomagnetic activity aa index for the solar maximum year in cycle 24 to be 20.6 ± 4.7 and the average of the annual mean sunspot number during the descending phase of cycle 24 is estimated to be 48 ± 16.8.
Construction and enumeration of Boolean functions with maximum algebraic immunity
ZHANG WenYing; WU ChuanKun; LIU XiangZhong
2009-01-01
Algebraic immunity is a new cryptographic criterion proposed against algebraic attacks. In order to resist algebraic attacks, Boolean functions used in many stream ciphers should possess high algebraic immunity. This paper presents two main results for finding balanced Boolean functions with maximum algebraic immunity. By swapping the values of two bits, and then generalizing the result to swap some pairs of bits of the symmetric Boolean function constructed by Dalai, a new class of Boolean functions with maximum algebraic immunity is constructed. An enumeration of such functions is also given. For a given function p(x) with deg(p(x)) < [n/2], we give a method to construct functions of the form p(x)+q(x) which achieve maximum algebraic immunity, where every term with nonzero coefficient in the ANF of q(x) has degree no less than [n/2].
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
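A minimal numpy implementation of Burg's recursion on a synthetic two-tone signal (not the interferometer data), assuming the standard lattice form of the algorithm:

```python
import numpy as np

def burg_psd(x, order, nfft=4096):
    """Burg maximum-entropy (autoregressive) PSD estimate of a 1-D signal."""
    f, b = x[1:].astype(float), x[:-1].astype(float)   # forward/backward errors
    a = np.array([1.0])                 # AR polynomial, a[0] = 1
    e = np.mean(x.astype(float) ** 2)   # prediction error power
    for _ in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        e *= 1.0 - k * k
        f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
    freq = np.fft.rfftfreq(nfft)        # cycles/sample
    psd = e / np.abs(np.fft.rfft(a, nfft)) ** 2
    return freq, psd

t = np.arange(512)
x = (np.sin(0.2 * np.pi * t) + np.sin(0.22 * np.pi * t)
     + 0.1 * np.random.default_rng(7).normal(size=512))
freq, psd = burg_psd(x, order=30)
print(freq[np.argsort(psd)[-4:]])       # peaks near 0.10 and 0.11 cycles/sample
```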
Mass mortality of the vermetid gastropod Ceraesignum maximum
Brown, A. L.; Frazer, T. K.; Shima, J. S.; Osenberg, C. W.
2016-09-01
Ceraesignum maximum (G.B. Sowerby I, 1825), formerly Dendropoma maximum, was subject to a sudden, massive die-off in the Society Islands, French Polynesia, in 2015. On Mo'orea, where we have detailed documentation of the die-off, these gastropods were previously found in densities up to 165 m-2. In July 2015, we surveyed shallow back reefs of Mo'orea before, during and after the die-off, documenting their swift decline. All censused populations incurred 100% mortality. Additional surveys and observations from Mo'orea, Tahiti, Bora Bora, and Huahine (but not Taha'a) suggested a similar, and approximately simultaneous, die-off. The cause(s) of this cataclysmic mass mortality are currently unknown. Given the previously documented negative effects of C. maximum on corals, we expect the die-off will have cascading effects on the reef community.
The optimal polarizations for achieving maximum contrast in radar images
Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.
1988-01-01
There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., the filter that produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when the data are processed with the optimal polarimetric matched filter.
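A sketch of the eigenvalue formulation on hypothetical 2x2 covariance matrices: the contrast ratio is a generalized Rayleigh quotient, maximized by the leading generalized eigenvector.

```python
import numpy as np
from scipy.linalg import eigh

# With filter h, the class powers are h^H A h and h^H B h (A, B Hermitian
# covariance matrices; the numbers below are illustrative, not radar data).
A = np.array([[3.0, 0.5], [0.5, 1.0]])      # class 1 covariance
B = np.array([[1.0, 0.2], [0.2, 2.0]])      # class 2 covariance

vals, vecs = eigh(A, B)                     # solves A h = lambda B h
h = vecs[:, -1]                             # eigenvector of the largest lambda
contrast = (h @ A @ h) / (h @ B @ h)
print(round(contrast, 3), round(vals[-1], 3))  # equal by construction
```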
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
Influence of maximum decking charge on intensity of blasting vibration
Anonymous
2006-01-01
Based on the character of short-time non-stationary random signals, the relationship between the maximum decking charge and the energy distribution of blasting vibration signals was investigated by means of the wavelet packet method. Firstly, the characteristics of the wavelet transform and wavelet packet analysis were described. Secondly, the blasting vibration signals were analyzed by wavelet packet using MATLAB, and the change of the energy distribution curve across different frequency bands was obtained. Finally, the law by which the energy distribution of blasting vibration signals changes with the maximum decking charge was analyzed. The results show that with the increase of decking charge, the ratio of high-frequency energy to total energy decreases, the dominant frequency bands of blasting vibration signals tend towards low frequency, and blasting vibration does not depend on the maximum decking charge.
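A sketch of the band-energy computation, with PyWavelets standing in for the MATLAB wavelet toolbox used in the article; the vibration record, wavelet choice, and decomposition level are illustrative assumptions.

```python
import numpy as np
import pywt

# Synthetic decaying "vibration" record with a low- and a high-frequency part.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
sig = (np.exp(-5 * t) * np.sin(2 * np.pi * 50 * t)
       + 0.3 * np.exp(-8 * t) * np.sin(2 * np.pi * 180 * t))

level = 4
wp = pywt.WaveletPacket(data=sig, wavelet="db8", mode="symmetric", maxlevel=level)
nodes = wp.get_level(level, order="freq")       # 2**level bands, low to high
energy = np.array([np.sum(np.square(n.data)) for n in nodes])
width = fs / 2 / len(nodes)                     # approximate band width (Hz)
for i, share in enumerate(energy / energy.sum()):
    print(f"{i * width:5.0f}-{(i + 1) * width:5.0f} Hz: {share:.1%}")
```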
The subsequence weight distribution of summed maximum length digital sequences
Weathers, G. D.; Graf, E. R.; Wallace, G. R.
1974-01-01
An attempt is made to develop mathematical formulas to provide the basis for the design of pseudorandom signals intended for applications requiring accurate knowledge of the statistics of the signals. The analysis approach involves calculating the first five central moments of the weight distribution of subsequences of hybrid-sum sequences. The hybrid-sum sequence is formed from the modulo-two sum of k maximum length sequences and is an extension of the sum sequences formed from two maximum length sequences that Gilson (1966) evaluated. The weight distribution of the subsequences serves as an approximation to the filtering process. The basic reason for the analysis of hybrid-sum sequences is to establish a large group of sequences with good statistical properties. It is shown that this can be accomplished much more efficiently using the hybrid-sum approach rather than forming the group strictly from maximum length sequences.
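A small sketch of the objects involved, under stated assumptions (two degree-5 primitive feedback polynomials, window length 8): generate two m-sequences, XOR them into a hybrid-sum sequence, and tabulate central moments of the subsequence weight distribution.

```python
import numpy as np

# Tap sets are chosen so each feedback recurrence is primitive (period 31).
def lfsr_mseq(taps, nbits):
    """Binary maximum length sequence from a Fibonacci LFSR."""
    state = [1] * nbits
    out = []
    for _ in range(2 ** nbits - 1):
        out.append(state[-1])
        fb = 0
        for tp in taps:
            fb ^= state[tp - 1]
        state = [fb] + state[:-1]
    return np.array(out)

m1 = lfsr_mseq([5, 3], 5)
m2 = lfsr_mseq([5, 4, 3, 2], 5)
hybrid = m1 ^ m2                       # modulo-two sum of two m-sequences

L = 8                                  # subsequence (window) length
n = len(hybrid)
weights = np.array([hybrid[np.arange(i, i + L) % n].sum() for i in range(n)])
mean = weights.mean()
print(mean, [np.mean((weights - mean) ** k) for k in range(2, 6)])
```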
Maximum power point tracking for optimizing energy harvesting process
Akbari, S.; Thang, P. C.; Veselov, D. S.
2016-10-01
There has been growing interest in using energy harvesting techniques for powering wireless sensor networks. The reason for utilizing this technology is the sensors' limited operation time, which results from the finite capacity of batteries, and the need for a stable power supply in some applications. Energy can be harvested from the sun, wind, vibration, heat, etc. It is reasonable to develop multisource energy harvesting platforms to increase the amount of harvested energy and to mitigate the intermittent nature of ambient sources. In the context of solar energy harvesting, it is possible to develop algorithms for finding the optimal operating point of solar panels at which maximum power is generated. These algorithms are known as maximum power point tracking techniques. In this article, we review the concept of maximum power point tracking and provide an overview of the research conducted in this area for wireless sensor network applications.
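A minimal perturb-and-observe tracker on a toy panel model; both the model and the step size are illustrative assumptions, and P&O is one common MPPT technique, not necessarily the one surveyed here.

```python
def panel_power(v: float) -> float:
    # Crude linear I-V model: I = 5 A at short circuit, V_oc = 20 V.
    i = 5.0 * (1.0 - v / 20.0)
    return max(v * i, 0.0)

def perturb_and_observe(v0=5.0, step=0.1, iters=200):
    v, p = v0, panel_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = panel_power(v_new)
        if p_new < p:                 # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

print(perturb_and_observe())          # oscillates near v = 10 V, the MPP (25 W)
```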
Proscriptive Bayesian Programming and Maximum Entropy: a Preliminary Study
Koike, Carla Cavalcante
2008-11-01
Some problems found in robotics systems, such as avoiding obstacles, can be better described using proscriptive commands, where only prohibited actions are indicated, in contrast to prescriptive situations, which demand that a specific command be specified. An interesting question arises regarding the possibility of learning automatically whether proscriptive commands are suitable and which parametric function could be better applied. Lately, a great variety of problems in the robotics domain have been the object of research using probabilistic methods, including the use of maximum entropy in automatic learning for robot control systems. This work presents a preliminary study on automatic learning of proscriptive robot control using maximum entropy and Bayesian Programming. It is verified whether maximum entropy and related methods can favour proscriptive commands in an obstacle avoidance task executed by a mobile robot.
Multitime maximum principle approach of minimal submanifolds and harmonic maps
Udriste, Constantin
2011-01-01
Some optimization problems coming from the Differential Geometry, as for example, the minimal submanifolds problem and the harmonic maps problem are solved here via interior solutions of appropriate multitime optimal control problems. Section 1 underlines some science domains where appear multitime optimal control problems. Section 2 (Section 3) recalls the multitime maximum principle for optimal control problems with multiple (curvilinear) integral cost functionals and $m$-flow type constraint evolution. Section 4 shows that there exists a multitime maximum principle approach of multitime variational calculus. Section 5 (Section 6) proves that the minimal submanifolds (harmonic maps) are optimal solutions of multitime evolution PDEs in an appropriate multitime optimal control problem. Section 7 uses the multitime maximum principle to show that of all solids having a given surface area, the sphere is the one having the greatest volume. Section 8 studies the minimal area of a multitime linear flow as optimal c...
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
Full Text Available A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as the solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties superior to those of classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also showed reduced bias in the estimates of the subjective value of time and consumer surplus.
Approximate maximum-entropy moment closures for gas dynamics
McDonald, James G.
2016-11-01
Accurate prediction of flows that exist between the traditional continuum regime and the free-molecular regime have proven difficult to obtain. Current methods are either inaccurate in this regime or prohibitively expensive for practical problems. Moment closures have long held the promise of providing new, affordable, accurate methods in this regime. The maximum-entropy hierarchy of closures seems to offer particularly attractive physical and mathematical properties. Unfortunately, several difficulties render the practical implementation of maximum-entropy closures very difficult. This work examines the use of simple approximations to these maximum-entropy closures and shows that physical accuracy that is vastly improved over continuum methods can be obtained without a significant increase in computational cost. Initially the technique is demonstrated for a simple one-dimensional gas. It is then extended to the full three-dimensional setting. The resulting moment equations are used for the numerical solution of shock-wave profiles with promising results.
Remarks on the strong maximum principle for nonlocal operators
Jerome Coville
2008-05-01
Full Text Available In this note, we study the existence of a strong maximum principle for the nonlocal operator $$\mathcal{M}[u](x) := \int_{G} J(g)\,u(x*g^{-1})\,d\mu(g) - u(x),$$ where $G$ is a topological group acting continuously on a Hausdorff space $X$ and $u \in C(X)$. First we investigate the general situation and derive a pre-maximum principle. Then we restrict our analysis to the case of homogeneous spaces (i.e., $X = G/H$). For such Hausdorff spaces, depending on the topology, we give a condition on $J$ such that a strong maximum principle holds for $\mathcal{M}$. We also revisit the classical case of the convolution operator (i.e., $G = (\mathbb{R}^n, +)$, $X = \mathbb{R}^n$, $d\mu = dy$).
Resource-constrained maximum network throughput on space networks
Yanling Xing; Ning Ge; Youzheng Wang
2015-01-01
This paper investigates the maximum network throughput for resource-constrained space networks based on the delay- and disruption-tolerant networking (DTN) architecture. Specifically, this paper proposes a methodology for calculating the maximum network throughput of multiple transmission tasks under storage and delay constraints over a space network. A mixed-integer linear program (MILP) is formulated to solve this problem. Simulation results show that the proposed methodology can successfully calculate the optimal throughput of a space network under storage and delay constraints, as well as a clear, monotonic relationship between end-to-end delay and the maximum network throughput under storage constraints. At the same time, the optimization results shine light on routing and transport protocol design in space communication, which can be used to obtain the optimal network throughput.
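A toy sketch of throughput maximization as a linear program (the paper's MILP over time-varying contacts is more elaborate): two paths share a relay contact with limited capacity, and each path has its own additional limit. All capacities are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Variables x = (flow on path 1, flow on path 2), in Mb per contact window.
c = np.array([-1.0, -1.0])          # maximise total flow = minimise -sum
A_ub = np.array([
    [1.0, 1.0],                     # shared relay contact capacity
    [1.0, 0.0],                     # path-1 ground-station capacity
    [0.0, 1.0],                     # path-2 buffer (storage) limit
])
b_ub = np.array([100.0, 70.0, 60.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(-res.fun, res.x)              # optimum 100.0; one optimal split of flows
```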
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Lui, Kenneth W. K.; So, H. C.
2009-12-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.
Quality, precision and accuracy of the maximum No. 40 anemometer
Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)
1996-12-31
This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
The evolution of maximum body size of terrestrial mammals.
Smith, Felisa A; Boyer, Alison G; Brown, James H; Costa, Daniel P; Dayan, Tamar; Ernest, S K Morgan; Evans, Alistair R; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; McCain, Christy; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Stephens, Patrick R; Theodor, Jessica; Uhen, Mark D
2010-11-26
The extinction of dinosaurs at the Cretaceous/Paleogene (K/Pg) boundary was the seminal event that opened the door for the subsequent diversification of terrestrial mammals. Our compilation of maximum body size at the ordinal level by sub-epoch shows a near-exponential increase after the K/Pg. On each continent, the maximum size of mammals leveled off after 40 million years ago and thereafter remained approximately constant. There was remarkable congruence in the rate, trajectory, and upper limit across continents, orders, and trophic guilds, despite differences in geological and climatic history, turnover of lineages, and ecological variation. Our analysis suggests that although the primary driver for the evolution of giant mammals was diversification to fill ecological niches, environmental temperature and land area may have ultimately constrained the maximum size achieved.
The maximum force in a column under constant speed compression
Kuzkin, Vitaly A
2015-01-01
Dynamic buckling of an elastic column under compression at constant speed is investigated assuming first-mode buckling. Two cases are considered: (i) an imperfect column (Hoff's statement), and (ii) a perfect column having an initial lateral deflection. The range of parameters where the maximum load supported by a column exceeds the Euler static force is determined. In this range, the maximum load is represented as a function of the compression rate, slenderness ratio, and imperfection/initial deflection. Considering the results, we answer the following question: "How slowly should the column be compressed in order to measure its static load-bearing capacity?" This question is important for the proper setup of laboratory experiments and computer simulations of buckling. Additionally, it is shown that the behavior of a perfect column having an initial deflection differs significantly from the behavior of an imperfect column. In particular, the dependence of the maximum force on the compression rate is non-monotoni...
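For reference, the static benchmark against which the dynamic maximum is compared is the classical Euler critical load, stated here for a pinned-pinned column (the boundary-condition factor depends on the setup):

```latex
% Euler critical (static) load for a pinned-pinned elastic column:
% E = Young's modulus, I = second moment of area, L = column length.
P_E = \frac{\pi^2 E I}{L^2}
```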
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
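The zero- versus finite-temperature comparison can be reproduced in miniature by exhaustive enumeration. The sketch below is a conceptual toy, not the annealer experiment: it decodes a signal through a random Ising model in a field, comparing the ground-state (ML) decode with the sign of the finite-temperature Boltzmann marginals:

```python
# Toy comparison of zero-temperature (maximum-likelihood) decoding with
# finite-temperature maximum-entropy (marginal) decoding on a small Ising
# chain, by brute-force enumeration of all states.
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n = 10
truth = rng.choice([-1, 1], size=n)          # transmitted bits
h = truth + rng.normal(0, 1.0, size=n)       # noisy observations act as fields
J = 0.5                                      # ferromagnetic prior coupling

def energy(s):
    # E(s) = -J * sum_i s_i s_{i+1} - sum_i h_i s_i  (random Ising model in a field)
    return -J * np.sum(s[:-1] * s[1:]) - np.dot(h, s)

states = np.array(list(product([-1, 1], repeat=n)))
E = np.array([energy(s) for s in states])

# Zero-temperature ML decode: the ground state of the cost function.
ml = states[np.argmin(E)]

# Finite-temperature decode: sign of the thermal marginals, which also
# weighs the excited states of the cost function.
beta = 1.0
w = np.exp(-beta * (E - E.min()))
marginals = (w[:, None] * states).sum(axis=0) / w.sum()
mpm = np.sign(marginals)

print("ML     bit errors:", int(np.sum(ml != truth)))
print("maxent bit errors:", int(np.sum(mpm != truth)))
```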
Consider Propensity Scores to Compare Treatments
Lawrence M. Rudner
2006-11-01
The underlying question when comparing treatments is usually whether an individual would do better with treatment X than they would with treatment Y. But there are often practical and theoretical problems in giving people both treatments and comparing the data. This paper presents propensity score matching as a methodology that can be used to compare the effectiveness of different treatments. The method is applied to answer two questions: (1) Should examinees take a college admissions test near graduation or a few years after? and (2) Do accommodated students receive an unfair advantage? Data from a large admission testing program are used.
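A minimal sketch of the matching machinery (simulated data and an assumed logistic model, not the admissions-testing dataset): fit a propensity model, match each treated unit to the nearest control on the score, and compare outcomes:

```python
# Propensity-score matching in miniature: estimate each unit's probability
# of treatment from covariates, pair each treated unit with the
# nearest-scoring control, and compare outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=(n, 3))                           # observed covariates
p_treat = 1 / (1 + np.exp(-(x @ [0.8, -0.5, 0.3])))   # true selection mechanism
treated = rng.random(n) < p_treat
outcome = x @ [1.0, 1.0, 0.5] + 2.0 * treated + rng.normal(size=n)  # true effect = 2

# Step 1: propensity scores from a logistic regression of treatment on covariates.
ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# Step 2: greedy nearest-neighbor matching on the score (with replacement).
ctrl_idx = np.flatnonzero(~treated)
matches = ctrl_idx[np.argmin(np.abs(ps[ctrl_idx][None, :]
                                    - ps[treated][:, None]), axis=1)]

naive = outcome[treated].mean() - outcome[~treated].mean()
matched = (outcome[treated] - outcome[matches]).mean()
print(f"naive difference: {naive:.2f}, matched estimate: {matched:.2f} (truth 2.0)")
```

Matching removes the bias that selection on covariates introduces into the naive difference of means, which is exactly the confounding problem the paper addresses.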
Evaluation of Stress Scores Throughout Radiological Biopsies
Turkoglu
2016-06-01
Background: Ultrasound-guided biopsy procedures are prominent interventions that increase the trauma, stress, and anxiety experienced by patients. Objectives: Our goal was to examine the level of stress in patients waiting for radiologic biopsy procedures and to determine the stress and anxiety arising from waiting for a biopsy. Patients and Methods: This prospective study included 35 female and 65 male patients who were admitted to the interventional radiology department of Kartal Dr. Lütfi Kirdar Training and Research Hospital, Istanbul, between 2014 and 2015. They filled out the adult resilience scale consisting of 33 items. Patients undergoing invasive radiologic interventions were grouped according to their phenotypic characteristics, education level (low, intermediate, and high), and biopsy features (biopsy localization: neck, thorax, abdomen, and bone; and number of procedures performed: 1 or more than 1). Before the biopsy, they were also asked to complete the depression-anxiety-stress scale (DASS 42), the state anxiety scale (STAI-I), and the trait anxiety scale (STAI-II). A total of 80 patients were biopsied (20 thyroid and parathyroid, 20 thorax, 20 liver and kidney, and 20 bone biopsies). The associations of education level (primary-secondary, high school, and postgraduate) and number of biopsies (1 or more than 1) with the level of anxiety and stress were evaluated using the above-mentioned scales. Results: Evaluation of the sociodemographic and statistical characteristics of the patients showed that patients with biopsy in the neck region were moderately to severely depressed and stressed. In addition, the proportion of severe and extremely severe anxiety scores was significantly high. The STAI-I and II scores were ordered neck > bone > thorax > abdomen, and STAI-I was higher in neck biopsies compared with thorax and abdomen biopsies. Regarding the STAI-I and II scales, patients
Fingerprint Recognition Using Minutia Score Matching
J, Ravi; R, Venugopal K
2010-01-01
The fingerprint is a popular biometric used to authenticate a person, being unique and permanent throughout a person's life. Minutia matching is widely used for fingerprint recognition; minutiae can be classified as ridge endings and ridge bifurcations. In this paper we propose Fingerprint Recognition using the Minutia Score Matching method (FRMSM). For fingerprint thinning, the Block Filter is used, which scans the image at the boundary to preserve image quality, and the minutiae are extracted from the thinned image. The false matching ratio is improved compared with the existing algorithm.
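For orientation, here is a schematic of score-based minutia matching in general: count minutiae pairs that agree within distance and angle tolerances and normalize. This is a generic illustration, not the FRMSM algorithm from the paper:

```python
# Generic minutia-matching score: greedily pair minutiae that agree within
# distance and angle tolerances, then normalize by the larger set size.
import numpy as np

def match_score(minutiae_a, minutiae_b, d_tol=10.0, a_tol=np.pi / 12):
    """Each minutia is (x, y, angle). Returns a score in [0, 1]."""
    matched = 0
    used = set()
    for (xa, ya, ta) in minutiae_a:
        for j, (xb, yb, tb) in enumerate(minutiae_b):
            if j in used:
                continue
            dist = np.hypot(xa - xb, ya - yb)
            dang = abs((ta - tb + np.pi) % (2 * np.pi) - np.pi)  # wrapped angle diff
            if dist <= d_tol and dang <= a_tol:
                matched += 1
                used.add(j)
                break
    return matched / max(len(minutiae_a), len(minutiae_b))

a = [(10, 12, 0.1), (40, 52, 1.2), (80, 33, -0.4)]
b = [(12, 11, 0.15), (41, 50, 1.25), (75, 90, 2.0)]
print(f"match score: {match_score(a, b):.2f}")  # 2 of 3 minutiae agree -> 0.67
```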
Estimating the maximum potential revenue for grid connected electricity storage :
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
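The arbitrage-only revenue bound reduces to a small linear program. The sketch below uses made-up prices, a lossless device, and no regulation market, so it is a simplification of the paper's model, but the structure (state-of-charge dynamics plus power and energy limits) is the same:

```python
# Arbitrage-only upper bound on storage revenue as a linear program:
# choose hourly charge/discharge to maximize price-weighted net sales
# subject to power and energy (state-of-charge) limits.
import numpy as np
from scipy.optimize import linprog

prices = np.array([20., 18., 25., 40., 55., 35., 22., 30.])  # $/MWh, hypothetical
T = len(prices)
p_max, e_max = 1.0, 4.0   # MW power limit, MWh energy capacity

# Variables: x[0:T] charge (MW), x[T:2T] discharge (MW), over 1-hour slots.
c = np.concatenate([prices, -prices])  # minimize purchase cost minus sales revenue

# State of charge after each hour must stay within [0, e_max].
L = np.tril(np.ones((T, T)))
A_ub = np.vstack([np.hstack([L, -L]),    # cumulative charge - discharge <= e_max
                  np.hstack([-L, L])])   # cumulative discharge - charge <= 0
b_ub = np.concatenate([np.full(T, e_max), np.zeros(T)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * (2 * T))
print(f"maximum arbitrage revenue: ${-res.fun:.2f}")
```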
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found, namely that low concentrations of odd hydrogen cause the subsequent decrease in odd-oxygen losses, models have until recently deviated significantly from existing observations. Good coverage of polar night regions by GOMOS data has allowed, for the first time, obtaining spatial and temporal observational distributions of night-time ozone mixing ratio in the mesosphere.
The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can be observed also at very high latitudes, not only at the beginning and end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HO_x concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HO_x enhancement from the increased ionization.
Validating the Interpretations and Uses of Test Scores
Kane, Michael T.
2013-01-01
To validate an interpretation or use of test scores is to evaluate the plausibility of the claims based on the scores. An argument-based approach to validation suggests that the claims based on the test scores be outlined as an argument that specifies the inferences and supporting assumptions needed to get from test responses to score-based…
Conditional Standard Errors of Measurement for Composite Scores Using IRT
Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan
2012-01-01
Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…
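For orientation, under the common assumption that measurement errors are uncorrelated across the component tests, the conditional standard error of measurement of a weighted composite combines componentwise as below (a standard relation, not a formula quoted from this article):

```latex
% Conditional SEM at ability level theta for the composite C = sum_i w_i X_i,
% assuming uncorrelated measurement errors across component tests:
\mathrm{SEM}_C(\theta) = \sqrt{\sum_i w_i^2 \, \mathrm{SEM}_i^2(\theta)}
```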
24 CFR 902.45 - Management operations scoring and thresholds.
2010-04-01
... Title 24, Housing and Urban Development: Public Housing Assessment System (PHAS), Indicator #3: Management Operations, § 902.45 Management operations scoring and thresholds. (a) Scoring. The Management Operations Indicator score...
Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules
Gao, Junling; Chen, Min
2013-01-01
Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated from the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimations of the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to deviations as the temperature changes. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds out...
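The estimate referred to above follows from treating the module as a linear source, for which the maximum power transfer relation holds (a general circuit fact; the paper's point is that the device is not actually linear, which is why the two switch modes disagree):

```latex
% Maximum power of a linear source with internal resistance R_in:
% V_oc = open-circuit voltage, I_sc = short-circuit current = V_oc / R_in.
P_{\max} = \frac{V_{oc}^2}{4 R_{in}} = \frac{V_{oc} \, I_{sc}}{4}
```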
Microcanonical origin of the maximum entropy principle for open systems.
Lee, Julian; Pressé, Steve
2012-10-01
There are two distinct approaches for deriving the canonical ensemble. The canonical ensemble either follows as a special limit of the microcanonical ensemble or alternatively follows from the maximum entropy principle. We show the equivalence of these two approaches by applying the maximum entropy formulation to a closed universe consisting of an open system plus bath. We show that the target function for deriving the canonical distribution emerges as a natural consequence of partial maximization of the entropy over the bath degrees of freedom alone. By extending this mathematical formalism to dynamical paths rather than equilibrium ensembles, the result provides an alternative justification for the principle of path entropy maximization as well.
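Both routes lead to the same endpoint for the open system; for orientation, the canonical distribution that emerges (a standard result, stated here rather than quoted from the paper) is:

```latex
% Canonical distribution over system states s with energies E_s, obtained
% either as a limit of the microcanonical ensemble or by maximizing entropy
% at fixed mean energy (beta is the Lagrange multiplier for that constraint):
p(s) = \frac{e^{-\beta E_s}}{Z}, \qquad Z = \sum_{s} e^{-\beta E_s}
```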
Information Entropy Production of Spatio-Temporal Maximum Entropy Distributions
Cofre, Rodrigo
2015-01-01
Spiking activity from populations of neurons displays causal interactions and memory effects; therefore, it is expected to show some degree of irreversibility in time. Motivated by spike train statistics, in this paper we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. Our approach is based on the transfer matrix technique, which enables us to find a homogeneous irreducible Markov chain that shares the same maximum entropy measure. We provide relevant examples in the context of spike train statistics.
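Once the equivalent Markov chain is in hand, the degree of irreversibility is the standard entropy production rate of a stationary chain, which vanishes exactly when detailed balance holds. A generic sketch of that quantity (not the paper's transfer-matrix pipeline):

```python
# Entropy production rate of a stationary, irreducible Markov chain:
#   e_p = (1/2) * sum_ij (pi_i P_ij - pi_j P_ji) * log(pi_i P_ij / (pi_j P_ji)).
# It is zero iff the chain satisfies detailed balance (time reversibility).
import numpy as np

def entropy_production(P):
    """P: row-stochastic transition matrix with all entries > 0."""
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi = pi / pi.sum()
    flux = pi[:, None] * P                 # probability flux pi_i * P_ij
    return 0.5 * np.sum((flux - flux.T) * np.log(flux / flux.T))

P_rev = np.array([[0.8, 0.2],
                  [0.2, 0.8]])             # detailed balance -> e_p = 0
P_irr = np.array([[0.1, 0.6, 0.3],
                  [0.3, 0.1, 0.6],
                  [0.6, 0.3, 0.1]])        # cyclic driving -> e_p > 0
print(entropy_production(P_rev), entropy_production(P_irr))
```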
Semiparametric maximum likelihood for nonlinear regression with measurement errors.
Suh, Eun-Young; Schafer, Daniel W
2002-06-01
This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The example illustrates practical aspects of the associated computational, inferential, and data-analytic techniques.
Maximum length scale in density based topology optimization
Lazarov, Boyan Stefanov; Wang, Fengwen
2017-01-01
The focus of this work is on two new techniques for imposing a maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of classical density filtering, which can be viewed as a low-pass filter applied to the design parametrization. The main idea...
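The classical density filter the method builds on is a local weighted average of the design variables, i.e. the low-pass filter mentioned above (it imposes a minimum length scale; the paper's contribution layers maximum-length-scale restrictions on top). A 1-D sketch of that base ingredient only:

```python
# Classical density filtering in topology optimization: the physical density
# is a hat-weighted local average of the design variables (a low-pass filter).
import numpy as np

def density_filter(x, radius):
    """Linear (hat-weight) density filter over a 1-D design field."""
    n = len(x)
    idx = np.arange(n)
    # Hat weights w_ij = max(0, radius - |i - j|), rows normalized.
    W = np.maximum(0.0, radius - np.abs(idx[:, None] - idx[None, :]))
    return (W @ x) / W.sum(axis=1)

x = np.zeros(50)
x[24] = 1.0                          # a one-element feature in the design field
print(density_filter(x, radius=4).round(3)[20:30])  # smeared over ~2*radius cells
```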
On the Effect of Mortgages of Maximum Amount
YangZongping
2005-01-01
Since the enactment of the PRC Guarantee Law, mortgages of maximum amount have won wide application in a variety of business fields, particularly banking. Compared with the rich content of the 21-clause statute on mortgages of maximum amount in Japan's Civil Law, the Chinese law has only four principled clauses. Its lack of operability, together with its legislative gaps and defects, has a severe impact on the positive effectiveness of the law. The core issue is the question of effectiveness. Because the principles stipulated in the Law run counter to the diversity of its actual practices,
A Maximum Entropy Method for a Robust Portfolio Problem
Yingying Xu
2014-06-01
We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for a market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.
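A toy version of the idea, substituting a simple entropy-regularized closed form for the paper's continuous maximum entropy method (and ignoring transaction costs and dividends): with long-only weights summing to one and returns known only to lie in intervals [a_i, b_i], the worst-case portfolio return is the a_i-weighted sum, and adding an entropy term turns the degenerate single-asset maximizer into a smooth softmax portfolio:

```python
# Entropy-regularized worst-case portfolio: maximize  sum_i w_i a_i + H(w)/lam
# over the simplex, where a_i are the lower return bounds and H is Shannon
# entropy. The maximizer is the softmax w_i proportional to exp(lam * a_i).
import numpy as np

a = np.array([0.02, 0.035, 0.01, 0.028])   # hypothetical lower return bounds
lam = 50.0          # larger lam -> closer to the pure worst-case optimum

w = np.exp(lam * a - (lam * a).max())      # subtract max for numerical stability
w /= w.sum()

print("weights:", w.round(3))
print("worst-case portfolio return:", float(w @ a))
```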