Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a recently introduced group-theoretic approach to modelling inversions. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for the minimal distance and the MLE distance to order the distances of two genomes from a third differently. The second aspect of this work tackles the problem of accounting for the symmetries of circular arrangements. While a frame of reference is generally locked and all computations are made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
Covariance of maximum likelihood evolutionary distances between sequences aligned pairwise.
Dessimoz, Christophe; Gil, Manuel
2008-06-23
The estimation of a distance between two biological sequences is a fundamental process in molecular evolution. It is usually performed by maximum likelihood (ML) on characters aligned either pairwise or jointly in a multiple sequence alignment (MSA). Estimators for the covariance of pairs from an MSA are known, but we are not aware of any solution for cases of pairs aligned independently. In large-scale analyses, it may be too costly to compute MSAs every time distances must be compared, and therefore a covariance estimator for distances estimated from pairs aligned independently is desirable. Knowledge of covariances improves any process that compares or combines distances, such as in generalized least-squares phylogenetic tree building, orthology inference, or lateral gene transfer detection. In this paper, we introduce an estimator for the covariance of distances from sequences aligned pairwise. Its performance is analyzed through extensive Monte Carlo simulations, and compared to the well-known variance estimator of ML distances. Our covariance estimator can be used together with the ML variance estimator to form covariance matrices. The estimator performs similarly to the ML variance estimator. In particular, it shows no sign of bias when sequence divergence is below 150 PAM units (i.e. above ~29% expected sequence identity). Above that distance, the covariances tend to be underestimated, but then ML variances are also underestimated.
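As a concrete illustration of an ML distance that comes with a closed-form variance, the classical Jukes-Cantor model gives both in a few lines. This is a much simpler special case than the pairwise covariances treated in the abstract; the function name and the delta-method variance derivation below are mine, not from the paper:

```python
import math

def jc_distance(p_diff, seq_len):
    """Jukes-Cantor ML distance and its (delta-method) variance from the
    observed proportion of differing sites. A classical special case used
    only to illustrate the idea of an ML distance with a known variance."""
    # ML estimate of the evolutionary distance
    d = -0.75 * math.log(1.0 - 4.0 * p_diff / 3.0)
    # Var(d) ~ (dd/dp)^2 * Var(p), with Var(p) = p(1-p)/L
    var = p_diff * (1.0 - p_diff) / (seq_len * (1.0 - 4.0 * p_diff / 3.0) ** 2)
    return d, var
```

Note that the corrected distance always exceeds the raw proportion of differences, reflecting unobserved multiple substitutions.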
A pairwise maximum entropy model accurately describes resting-state human brain networks.
Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki
2013-01-01
The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks.
Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models.
Stein, Richard R; Marks, Debora S; Sander, Chris
2015-07-01
Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene-gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design.
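To make the model class concrete: for a handful of binary variables, the pairwise maximum-entropy (Ising-like) distribution can be evaluated by brute-force enumeration of states. This is a minimal sketch with illustrative fields and couplings, not one of the inference methods the review covers:

```python
import itertools
import math

def pairwise_maxent_probs(h, J):
    """Exhaustive state probabilities of a small pairwise maximum-entropy
    model: P(s) proportional to exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j).
    h: list of per-variable fields; J: dict {(i, j): coupling}.
    Feasible only for toy system sizes (2^n states)."""
    n = len(h)
    states = list(itertools.product([0, 1], repeat=n))
    weights = []
    for s in states:
        energy = sum(h[i] * s[i] for i in range(n))
        energy += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
        weights.append(math.exp(energy))
    Z = sum(weights)  # partition function
    return {s: w / Z for s, w in zip(states, weights)}
```

A positive coupling J_01 makes the co-active state (1, 1) more probable than either singly-active state, which is the basic signature such models capture.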
Rice, J P; Saccone, N L; Corbett, J
2001-01-01
The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
Pragmatic Use of LOD - a Modular Approach
Treldal, Niels; Vestergaard, Flemming; Karlshøj, Jan
The concept of Level of Development (LOD) is a simple approach to specifying the requirements for the content of object-oriented models in a Building Information Modelling process. The concept has been implemented in many national and organization-specific variations and, in recent years, several...... and reliability of deliveries along with use-case-specific information requirements provides a pragmatic approach for a LOD concept. The proposed solution combines LOD requirement definitions with Information Delivery Manual-based use case requirements to match the specific needs identified for a LOD framework...
Online Pairwise Learning Algorithms.
Ying, Yiming; Zhou, Ding-Xuan
2016-04-01
Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples, among which the most notable ones are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS) that we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013; Wang, Khardon, Pechyony, & Jones, 2012), which require that the iterates are restricted to a bounded domain or the loss function is strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees the almost sure convergence for the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs using their associated integral operators and probability inequalities for random variables with values in a Hilbert space.
Rankings from Fuzzy Pairwise Comparisons
Broek, van den Pim; Noppen, Joost; Mohammadian, M.
2006-01-01
We propose a new method for deriving rankings from fuzzy pairwise comparisons. It is based on the observation that quantification of the uncertainty of the pairwise comparisons should be used to obtain a better crisp ranking, instead of a fuzzified version of the ranking obtained from crisp pairwise comparisons.
Atmospheric and Oceanic Excitations to LOD Change on Quasi-biennial Time Scales
Li-Hua Ma; De-Chun Liao; Yan-Ben Han
2006-01-01
We use the wavelet transform to study the time series of the Earth's rotation rate (length-of-day, LOD) and the axial components of atmospheric angular momentum (AAM) and oceanic angular momentum (OAM) in the period 1962-2005, and discuss the quasi-biennial oscillations (QBO) of LOD change. The results show that the QBO of LOD change varies remarkably in amplitude and phase. It was weak before 1978, then became much stronger and reached maximum values during the strong El Niño events around 1983 and 1997. Results from analyzing the axial AAM indicate that the QBO signals in the axial AAM are extremely consistent with the QBOs of LOD change. During 1963-2003, the QBO variance in the axial AAM can explain about 99.0% of that of the LOD; in other words, almost all QBO signals of LOD change are excited by the axial AAM, while the weak QBO signals of the axial OAM are quite different from those of the LOD and the axial AAM in both time-dependent characteristics and magnitudes. The combined effects of the axial AAM and OAM can explain about 99.1% of the variance of the QBO in LOD change during this period.
Pairwise harmonics for shape analysis
Zheng, Youyi
2013-07-01
This paper introduces a simple yet effective shape analysis mechanism for geometry processing. Unlike traditional shape analysis techniques which compute descriptors per surface point up to certain neighborhoods, we introduce a shape analysis framework in which the descriptors are based on pairs of surface points. Such a pairwise analysis approach leads to a new class of shape descriptors that are more global, discriminative, and can effectively capture the variations in the underlying geometry. Specifically, we introduce new shape descriptors based on the isocurves of harmonic functions whose global maximum and minimum occur at the point pair. We show that these shape descriptors can infer shape structures and consistently lead to simpler and more efficient algorithms than the state-of-the-art methods for three applications: intrinsic reflectional symmetry axis computation, matching shape extremities, and simultaneous surface segmentation and skeletonization. © 2012 IEEE.
LOD 1 VS. LOD 2 - Preliminary Investigations Into Differences in Mobile Rendering Performance
Ellul, C.; Altenbuchner, J.
2013-09-01
The increasing availability, size and detail of 3D City Model datasets has led to a challenge when rendering such data on mobile devices. Understanding the limitations to the usability of such models on these devices is particularly important given the broadening range of applications - such as pollution or noise modelling, tourism, planning, solar potential - for which these datasets and resulting visualisations can be utilized. Much 3D City Model data is created by extrusion of 2D topographic datasets, resulting in what is known as Level of Detail (LoD) 1 buildings - with flat roofs. However, in the UK the National Mapping Agency (the Ordnance Survey, OS) is now releasing test datasets to Level of Detail (LoD) 2 - i.e. including roof structures. These datasets are designed to integrate with the LoD 1 datasets provided by the OS, and provide additional detail in particular on larger buildings and in town centres. The availability of such integrated datasets at two different Levels of Detail permits investigation into the impact of the additional roof structures (and hence the display of a more realistic 3D City Model) on rendering performance on a mobile device. This paper describes preliminary work carried out to investigate this issue, for the test area of the city of Sheffield (in the UK Midlands). The data is stored in a 3D spatial database as triangles and then extracted and served as a web-based data stream which is queried by an App developed on the mobile device (using the Android environment, Java and OpenGL for graphics). Initial tests have been carried out on two dataset sizes, for the city centre and a larger area, rendering the data onto a tablet to compare results. Results of 52 seconds for rendering LoD 1 data, and 72 seconds for LoD 1 mixed with LoD 2 data, show that the impact of LoD 2 is significant.
3D Urban Visualization with LOD Techniques
(no author listed)
2006-01-01
In 3D urban visualization, large data volumes related to buildings are a major factor that limits the delivery and browsing speed in a web-based computer system. This paper proposes a new approach based on the level of detail (LOD) technique advanced in 3D visualization in computer graphics. The key idea of LOD technique is to generalize details of object surfaces without losing details for delivery and displaying objects. This technique has been successfully used in visualizing one or a few multiple objects in films and other industries. However, applying the technique to 3D urban visualization requires an effective generalization method for urban buildings. Conventional two-dimensional (2D) generalization method at different scales provides a good generalization reference for 3D urban visualization. Yet, it is difficult to determine when and where to retrieve data for displaying buildings. To solve this problem, this paper defines an imaging scale point and image scale region for judging when and where to get the right data for visualization. The results show that the average response time of view transformations is much decreased.
Active Ranking using Pairwise Comparisons
Jamieson, Kevin G
2011-01-01
This paper examines the problem of ranking a collection of objects using pairwise comparisons (rankings of two objects). In general, the ranking of $n$ objects can be identified by standard sorting methods using $n \log_2 n$ pairwise comparisons. We are interested in natural situations in which relationships among the objects may allow for ranking using far fewer pairwise comparisons. Specifically, we assume that the objects can be embedded into a $d$-dimensional Euclidean space and that the rankings reflect their relative distances from a common reference point in $R^d$. We show that under this assumption the number of possible rankings grows like $n^{2d}$ and demonstrate an algorithm that can identify a randomly selected ranking using just slightly more than $d \log n$ adaptively selected pairwise comparisons, on average. If instead the comparisons are chosen at random, then almost all pairwise comparisons must be made in order to identify any ranking. In addition, we propose a robust, error-tolerant algorithm...
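The standard sorting baseline of roughly n log2 n pairwise comparisons mentioned above can be made concrete with binary-insertion ranking driven by a comparison oracle. This is the generic method, not the paper's adaptive algorithm, and the function and oracle names are illustrative:

```python
def rank_by_pairwise(objects, prefer):
    """Rank objects using O(n log n) pairwise comparisons via binary
    insertion. `prefer(a, b)` is the comparison oracle: True if a should
    precede b. Returns the ranking and the number of oracle queries."""
    ranking = []
    queries = 0
    for obj in objects:
        lo, hi = 0, len(ranking)
        while lo < hi:  # binary search for the insertion position
            mid = (lo + hi) // 2
            queries += 1
            if prefer(ranking[mid], obj):
                lo = mid + 1
            else:
                hi = mid
        ranking.insert(lo, obj)
    return ranking, queries
```

The paper's point is that when objects embed in a low-dimensional Euclidean space, far fewer adaptively chosen queries than this baseline suffice.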
An Incremental LOD Method Based on Grid and Its Application in Distributed Terrain Visualization
MA Zhaoting; LI Chengming; PAN Mao
2005-01-01
Incremental LOD can be transmitted over the network as a stream, so users on the clients can easily catch the skeleton of the terrain without downloading all the data from the server. Detailed information in a local part can be added gradually when users zoom in, without redundant data transmission in this procedure. To do this, an incremental LOD method is put forward according to the regular arrangement of the grid. This method applies to arbitrarily sized grid terrains and is not restricted to square ones with a side measuring 2^k + 1 samples. Maximum height errors are recorded when the LOD is preprocessed, and it can be visualized with geometrical mipmaps to reduce the screen error.
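The per-level maximum height error that such a method precomputes can be sketched in one dimension: keep every step-th sample, linearly interpolate the dropped ones, and record the worst vertical deviation. A simplified 1-D analogue of the idea, not the paper's grid implementation:

```python
def max_height_error(heights, step):
    """Maximum vertical error introduced when only every `step`-th sample
    of a 1-D terrain profile is kept and the dropped samples are replaced
    by linear interpolation between the kept neighbours."""
    err = 0.0
    for i0 in range(0, len(heights) - step, step):
        i1 = i0 + step
        for i in range(i0 + 1, i1):
            t = (i - i0) / step          # interpolation parameter in (0, 1)
            approx = (1 - t) * heights[i0] + t * heights[i1]
            err = max(err, abs(heights[i] - approx))
    return err
```

A renderer can compare this stored error, projected to screen space, against a pixel tolerance to decide whether a finer level must be streamed in.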
Pareto optimal pairwise sequence alignment.
DeRonne, Kevin W; Karypis, George
2013-01-01
Sequence alignment using evolutionary profiles is a commonly employed tool when investigating a protein. Many profile-profile scoring functions have been developed for use in such alignments, but there has not yet been a comprehensive study of Pareto optimal pairwise alignments for combining multiple such functions. We show that the problem of generating Pareto optimal pairwise alignments has an optimal substructure property, and develop an efficient algorithm for generating Pareto optimal frontiers of pairwise alignments. All possible sets of two, three, and four profile scoring functions are used from a pool of 11 functions and applied to 588 pairs of proteins in the ce_ref data set. The performance of the best objective combinations on ce_ref is also evaluated on an independent set of 913 protein pairs extracted from the BAliBASE RV11 data set. Our dynamic-programming-based heuristic approach produces approximated Pareto optimal frontiers of pairwise alignments that contain comparable alignments to those on the exact frontier, but on average in less than 1/58th the time in the case of four objectives. Our results show that the Pareto frontiers contain alignments whose quality is better than the alignments obtained by single objectives. However, the task of identifying a single high-quality alignment among those in the Pareto frontier remains challenging.
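The underlying notion of Pareto optimality over multiple scoring objectives can be illustrated with a brute-force frontier filter over score tuples. The paper's contribution is a dynamic-programming construction over alignments themselves, which this small sketch does not attempt:

```python
def pareto_frontier(points):
    """Return the Pareto-optimal subset of score tuples, with every
    objective maximized. Brute-force O(n^2) filter for illustration."""
    def dominates(a, b):
        # a dominates b: at least as good everywhere, strictly better somewhere
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Alignments whose score vectors survive this filter are exactly those for which no other alignment is at least as good under every scoring function and strictly better under one.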
Pairwise network information and nonlinear correlations
Martin, Elliot A.; Hlinka, Jaroslav; Davidsen, Jörn
2016-10-01
Reconstructing the structural connectivity between interacting units from observed activity is a challenge across many different disciplines. The fundamental first step is to establish whether or to what extent the interactions between the units can be considered pairwise and, thus, can be modeled as an interaction network with simple links corresponding to pairwise interactions. In principle, this can be determined by comparing the maximum entropy given the bivariate probability distributions to the true joint entropy. In many practical cases, this is not an option since the bivariate distributions needed may not be reliably estimated or the optimization is too computationally expensive. Here we present an approach that allows one to use mutual informations as a proxy for the bivariate probability distributions. This has the advantage of being less computationally expensive and easier to estimate. We achieve this by introducing a novel entropy maximization scheme that is based on conditioning on entropies and mutual informations. This renders our approach typically superior to other methods based on linear approximations. The advantages of the proposed method are documented using oscillator networks and a resting-state human brain network as generic relevant examples.
Enhanced LOD Concepts for Virtual 3d City Models
Benner, J.; Geiger, A.; Gröger, G.; Häfele, K.-H.; Löwner, M.-O.
2013-09-01
Virtual 3D city models contain digital three dimensional representations of city objects like buildings, streets or technical infrastructure. Because size and complexity of these models continuously grow, a Level of Detail (LoD) concept effectively supporting the partitioning of a complete model into alternative models of different complexity and providing metadata, addressing informational content, complexity and quality of each alternative model is indispensable. After a short overview on various LoD concepts, this paper discusses the existing LoD concept of the CityGML standard for 3D city models and identifies a number of deficits. Based on this analysis, an alternative concept is developed and illustrated with several examples. It differentiates between first, a Geometric Level of Detail (GLoD) and a Semantic Level of Detail (SLoD), and second between the interior building and its exterior shell. Finally, a possible implementation of the new concept is demonstrated by means of an UML model.
Distribution of Errors Reported by LOD2 LODStats Project [Dataset]
Hoekstra, R.; Groth, P.
2013-01-01
These files can be used to plot a distribution of error types based on the LOD2 LODStats analysis of linked data published through datahub.io. The statistics show that many errors reported in these statistics are the result of HTTP problems (40x and 50x codes), unknown responses, and connection do
LOD wars: The affected-sib-pair paradigm strikes back!
Farrall, M. [Wellcome Trust Centre for Human Genetics, Oxford (United Kingdom)
1997-03-01
In a recent letter, Greenberg et al. aired their concerns that the affected-sib-pair (ASP) approach was becoming excessively popular, owing to misconceptions and ignorance of the properties and limitations of both the ASP and the classic LOD-score approaches. As an enthusiast of using the ASP approach to map susceptibility genes for multifactorial traits, I would like to contribute a few comments and explanatory notes in defense of the ASP paradigm. 18 refs.
LOD First Estimates In 7406 SLR San Juan Argentina Station
Pacheco, A.; Podestá, R.; Yin, Z.; Adarvez, S.; Liu, W.; Zhao, L.; Alvis Rojas, H.; Actis, E.; Quinteros, J.; Alacoria, J.
2015-10-01
In this paper we show results derived from satellite observations at the San Juan SLR station of the Felix Aguilar Astronomical Observatory (OAFA). The Satellite Laser Ranging (SLR) telescope was installed in early 2006, in accordance with an international cooperation agreement between the San Juan National University (UNSJ) and the Chinese Academy of Sciences (CAS). The SLR has been in successful operation since 2011, using NAOC SLR software for the data processing. This program was designed to calculate satellite orbits and station coordinates; however, it was used in this work for the determination of LOD (Length Of Day) time series and Earth rotation speed.
Binary Biometric Representation through Pairwise Polar Quantization
Chen, Chun; Veldhuis, Raymond; Tistarelli, M.; Nixon, M.
2009-01-01
Binary biometric representations have great significance for data compression and template protection. In this paper, we introduce pairwise polar quantization. Furthermore, aiming to optimize the discrimination between the genuine Hamming distance (GHD) and the imposter Hamming distance (IHD), we pr
Statistical physics of pairwise probability models
Yasser Roudi
2009-11-01
Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models.
Lotka-Volterra pairwise modeling fails to capture diverse pairwise microbial interactions.
Momeni, Babak; Xie, Li; Shou, Wenying
2017-03-28
Pairwise models are commonly used to describe many-species communities. In these models, an individual receives additive fitness effects from pairwise interactions with each species in the community ('additivity assumption'). All pairwise interactions are typically represented by a single equation where parameters reflect signs and strengths of fitness effects ('universality assumption'). Here, we show that a single equation fails to qualitatively capture diverse pairwise microbial interactions. We build mechanistic reference models for two microbial species engaging in commonly-found chemical-mediated interactions, and attempt to derive pairwise models. Different equations are appropriate depending on whether a mediator is consumable or reusable, whether an interaction is mediated by one or more mediators, and sometimes even on quantitative details of the community (e.g. relative fitness of the two species, initial conditions). Our results, combined with potential violation of the additivity assumption in many-species communities, suggest that pairwise modeling will often fail to predict microbial dynamics.
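A minimal generalized Lotka-Volterra pairwise model of the kind being tested can be simulated directly. The parameter values and the forward-Euler scheme below are illustrative assumptions, not taken from the paper:

```python
def simulate_glv(r, K, a, x0, dt=0.01, steps=5000):
    """Forward-Euler simulation of a two-species generalized
    Lotka-Volterra (pairwise) model:
        dx_i/dt = r_i * x_i * (1 - (x_i + a_i * x_other) / K_i)
    r: growth rates, K: carrying capacities, a: interaction coefficients
    (the pairwise fitness effects), x0: initial abundances."""
    x = list(x0)
    for _ in range(steps):
        dx0 = r[0] * x[0] * (1 - (x[0] + a[0] * x[1]) / K[0])
        dx1 = r[1] * x[1] * (1 - (x[1] + a[1] * x[0]) / K[1])
        x[0] = max(x[0] + dt * dx0, 0.0)  # clamp to keep abundances non-negative
        x[1] = max(x[1] + dt * dx1, 0.0)
    return x
```

The paper's argument is precisely that a single equation of this form, whatever the fitted a_i, cannot reproduce the dynamics of many chemical-mediated interactions.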
Relation Between Equatorial Oceanic Activities and LOD Changes
郑大伟; 陈刚
1994-01-01
The time series of the length of day (LOD) and the observed Pacific sea level during 1962.0-1990.0 are used to study the relation between Earth rotation and equatorial oceanic activities. The results show that (i) the sea level rose at an average rate of about 1.75±0.01 mm/a during the past 30 years; (ii) there are large-scale eastward and westward water motions in the upper equatorial Pacific zone, which, according to a dynamical analysis of the angular momentum of the large-scale sea water motion in the Pacific Ocean about the Earth's rotation axis, account for about 30% of the interannual change in Earth rotation rate; (iii) the interannual changes in Earth rotation also cause changes in the distribution of the water mass in the equatorial Pacific, and affect the formation of ENSO events. Based on these results, we give a new model for the interaction between the equatorial ocean and Earth rotation.
PAIRWISE BLENDING OF HIGH LEVEL WASTE (HLW)
CERTA, P.J.
2006-02-22
The primary objective of this study is to demonstrate a mission scenario that uses pairwise and incidental blending of high level waste (HLW) to reduce the total mass of HLW glass. Secondary objectives include understanding how recent refinements to the tank waste inventory and solubility assumptions affect the mass of HLW glass and how logistical constraints may affect the efficacy of HLW blending.
Pairwise Document Classification for Relevance Feedback
2009-11-01
Elsas, Jonathan L.; Donmez, Pinar; Callan, Jamie; Carbonell, Jaime G.
Statistical physics of pairwise probability models
Roudi, Yasser; Aurell, Erik; Hertz, John
2009-01-01
Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data...
Constraining cosmology with pairwise velocity estimator
Ma, Yin-Zhe; He, Ping
2015-01-01
In this paper, we develop a full statistical method for the previously proposed pairwise velocity estimator, and apply it to the Cosmicflows-2 catalogue to constrain cosmology. We first calculate the covariance matrix of line-of-sight velocities for a given catalogue, then simulate mock full-sky surveys from it, and then calculate the variance of the pairwise velocity field. By applying the 8315 independent galaxy samples and the compressed 5224 group samples from the Cosmicflows-2 catalogue to this statistical method, we find that the joint constraint on $\Omega^{0.6}_{\rm m}h$ and $\sigma_{8}$ is completely consistent with the WMAP 9-year and Planck 2015 best-fitting cosmology. Currently, there is no evidence for modified gravity models or any dynamic dark energy models from this exercise, and the error bars need to be reduced in order to provide any concrete evidence against or in support of $\Lambda$CDM cosmology.
Supplier Evaluation Process by Pairwise Comparisons
Arkadiusz Kawa
2015-01-01
We propose to assess suppliers by using consistency-driven pairwise comparisons for tangible and intangible criteria. The tangible criteria are simpler to compare (e.g., the price of a service is lower than that of another service with identical characteristics). Intangible criteria are more difficult to assess. The proposed model combines assessments of both types of criteria. The main contribution of this paper is the presentation of an extension framework for the selection of suppliers in a procurement process. The final weights are computed from relative pairwise comparisons. For the needs of the paper, surveys were conducted among Polish managers dealing with cooperation with suppliers in their enterprises. The Polish practice and restricted bidding are discussed, too.
Selection by pairwise comparisons with limited resources
Laureti, Paolo; Mathiesen, Joachim; Zhang, Yi-Cheng
2004-07-01
We analyze different methods of sorting and selecting a set of objects by their intrinsic value, via pairwise comparisons whose outcome is uncertain. After discussing the limits of repeated round robins, two new methods are presented: the ran-fil method requires no prior knowledge of the set under consideration, yet displays good performance even in the least favorable case; the min-ent method sets a benchmark for the design of optimal dynamic tournaments.
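The abstract names but does not define its two new methods; the repeated round-robin baseline it argues against can, however, be sketched directly. In the sketch below the error model (each comparison is wrong with fixed probability) and all parameters are assumptions for illustration only.

```python
import random

def noisy_compare(vi, vj, p_err, rng):
    # Returns True if item i is judged better than item j;
    # the judgement is wrong with probability p_err.
    correct = vi > vj
    return correct if rng.random() > p_err else not correct

def round_robin_rank(values, rounds, p_err, seed=0):
    # Repeated round robin: every pair is compared `rounds` times and
    # items are ranked by total wins (the baseline from the abstract).
    rng = random.Random(seed)
    n = len(values)
    wins = [0] * n
    for _ in range(rounds):
        for i in range(n):
            for j in range(i + 1, n):
                if noisy_compare(values[i], values[j], p_err, rng):
                    wins[i] += 1
                else:
                    wins[j] += 1
    return sorted(range(n), key=lambda k: -wins[k])

values = [0.9, 0.1, 0.5, 0.7, 0.3]  # hypothetical intrinsic values
ranking = round_robin_rank(values, rounds=50, p_err=0.1)
```

With enough rounds the best and worst items separate clearly; the cost, quadratic in the number of objects per round, is exactly the limitation the paper's methods address.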
Automatic repair of CityGML LOD2 buildings using shrink-wrapping
Zhao, Z.; Ledoux, H.; Stoter, J.E.
2013-01-01
The LoD2 building models defined in CityGML are widely used in 3D city applications. The underlying geometry for such models is a GML solid (without interior shells), whose boundary should be a closed 2-manifold. However, this condition is often violated in practice because of the way LoD2 models ar
α-FUZZY PAIRWISE RETRACT OF L-VALUED PAIRWISE STRATIFICATION SPACES
M. H. GHANIM; F. S. MAHMOUD; M. A. FATH ALLA; M.A. HEBESHI
2004-01-01
The notion of a fuzzy retract was introduced by Rodabaugh (1981). The notion of a fuzzy pairwise retract was introduced in 2001. Some weak forms and some strong forms of α-continuous mappings were introduced in 1988 and 1997. The authors extend some of these forms to the L-fuzzy bitopological setting and construct various α-fuzzy pairwise retracts. The concept of weakly induced spaces in the case L = [0, 1] was introduced by Martin (1980). Lin and Luo (1987) generalized this notion to the case that L is an arbitrary F-lattice and introduced the notion of induced L-fts. Several results are obtained, especially, for L-valued pairwise stratification spaces.
Secret Key Generation for a Pairwise Independent Network Model
Nitinawarat, Sirin; Barg, Alexander; Narayan, Prakash; Reznik, Alex
2010-01-01
We consider secret key generation for a "pairwise independent network" model in which every pair of terminals observes correlated sources that are independent of sources observed by all other pairs of terminals. The terminals are then allowed to communicate publicly with all such communication being observed by all the terminals. The objective is to generate a secret key shared by a given subset of terminals at the largest rate possible, with the cooperation of any remaining terminals. Secrecy is required from an eavesdropper that has access to the public interterminal communication. A (single-letter) formula for secret key capacity brings out a natural connection between the problem of secret key generation and a combinatorial problem of maximal packing of Steiner trees in an associated multigraph. An explicit algorithm is proposed for secret key generation based on a maximal packing of Steiner trees in a multigraph; the corresponding maximum rate of Steiner tree packing is thus a lower bound for the secret ...
Pairwise Velocity Statistics of Dark Halos
Hai-Yan Zhang; Yi-Peng Jing
2004-01-01
We have accurately evaluated the halo pairwise velocity dispersion and the halo mean streaming velocity in the LCDM model (the flat $\Omega_0 = 0.3$ model) using a set of high-resolution N-body simulations. Based on the simulation results, we have developed a model for the pairwise velocity dispersion of halos. Our model agrees with the simulation results over all scales we studied. We have also tested the model of Sheth et al. for the mean streaming motion of halos derived from the pair-conservation equation. We found that their model reproduces the simulation data very well on large scales, but under-predicts the streaming motion on scales $r < 10\,h^{-1}$ Mpc. We have introduced an empirical relation to improve their model. These improved models are useful for predicting the redshift correlation functions and the redshift power spectrum of galaxies if a halo occupation number model, e.g. the cluster-weighted model, is given for the galaxies.
Pairwise Trajectory Management (PTM): Concept Overview
Jones, Kenneth M.; Graff, Thomas J.; Chartrand, Ryan C.; Carreno, Victor; Kibler, Jennifer L.
2017-01-01
Pairwise Trajectory Management (PTM) is an Interval Management (IM) concept that utilizes airborne and ground-based capabilities to enable the implementation of airborne pairwise spacing capabilities in oceanic regions. The goal of PTM is to use airborne surveillance and tools to manage an "at or greater than" inter-aircraft spacing. Due to the precision of Automatic Dependent Surveillance-Broadcast (ADS-B) information and the use of airborne spacing guidance, the PTM minimum spacing distance will be less than distances a controller can support with current automation systems that support oceanic operations. Ground tools assist the controller in evaluating the traffic picture and determining appropriate PTM clearances to be issued. Avionics systems provide guidance information that allows the flight crew to conform to the PTM clearance issued by the controller. The combination of a reduced minimum distance and airborne spacing management will increase the capacity and efficiency of aircraft operations at a given altitude or volume of airspace. This paper provides an overview of the proposed application, description of a few key scenarios, high level discussion of expected air and ground equipment and procedure changes, overview of a potential flight crew human-machine interface that would support PTM operations and some initial PTM benefits results.
Stability of Pairwise Entanglement in Decoherence Environment
蔡建明
2004-01-01
Considering the dynamics of a bipartite entangled system in a decoherence environment, we investigate the stability of pairwise entanglement under decoherence. We find that, for the same initial entanglement, the lifetime of entanglement is longest for pure states and some mixed states. We call these special entangled states Decoherence Path States (DPS). We also present simple analytic evolution equations for the entanglement of these states, from which the lifetimes can be obtained easily. Furthermore, we study the stability of the nearest-neighbor entanglement in the ground state of an antiferromagnetic spin-1/2 ring. Coincidentally, the conclusion is that it is as stable as the Decoherence Path States. Thus the nearest-neighbor entanglement in the ground state is not maximal, but it is the most stable. This interesting result links energy and entanglement in a spin system from a new point of view.
Amplitude determinant coupled cluster with pairwise doubles
Zhao, Luning
2016-01-01
Recently developed pair coupled cluster doubles (pCCD) theory successfully reproduces doubly occupied configuration interaction (DOCI) at mean-field cost. However, the projective nature of pCCD makes the method non-variational and thus hard to improve systematically. As a variational alternative, we explore the idea of coupled-cluster-like expansions based on amplitude determinants and develop a specific theory, similar to pCCD, based on determinants of pairwise doubles. The new ansatz admits a variational treatment through Monte Carlo methods while remaining size-consistent and, crucially, of polynomial cost. In the dissociations of LiH, HF, H2O and N2, the method performs very similarly to pCCD and DOCI, suggesting that coupled-cluster-like ansatzes and variational evaluation may not be mutually exclusive.
Signatures of synchrony in pairwise count correlations
Tatjana Tchumatchenko
2010-04-01
Concerted neural activity can reflect specific features of sensory stimuli or behavioral tasks. Correlation coefficients and count correlations are frequently used to measure correlations between neurons, design synthetic spike trains and build population models. But are correlation coefficients always a reliable measure of input correlations? Here, we consider a stochastic model for the generation of correlated spike sequences which replicates neuronal pairwise correlations in many important aspects. We investigate under which conditions the correlation coefficients reflect the degree of input synchrony and when they can be used to build population models. We find that correlation coefficients can be a poor indicator of input synchrony in some cases of input correlations. In particular, count correlations computed for large time bins can vanish despite the presence of input correlations. These findings suggest that network models or potential coding schemes of neural population activity need to incorporate temporal properties of correlated inputs and take into consideration the regimes of firing rates and correlation strengths to ensure that their building blocks are unambiguous measures of synchrony.
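As a concrete reading of the "count correlations" discussed above, here is a minimal plain-Python sketch: the Pearson correlation of spike counts in fixed-width time bins for a pair of spike trains. The binning convention (half-open bins, times in seconds) is an assumption for illustration.

```python
def count_correlation(spikes_a, spikes_b, t_max, bin_size):
    # Pearson correlation of binned spike counts ("count correlation").
    n_bins = int(t_max / bin_size)

    def counts(spikes):
        c = [0] * n_bins
        for t in spikes:
            b = int(t / bin_size)   # bin index of spike time t
            if b < n_bins:
                c[b] += 1
        return c

    ca, cb = counts(spikes_a), counts(spikes_b)
    ma = sum(ca) / n_bins
    mb = sum(cb) / n_bins
    cov = sum((x - ma) * (y - mb) for x, y in zip(ca, cb)) / n_bins
    va = sum((x - ma) ** 2 for x in ca) / n_bins
    vb = sum((y - mb) ** 2 for y in cb) / n_bins
    return cov / (va * vb) ** 0.5

# Two identical (perfectly synchronous) trains give correlation 1:
train = [0.1, 0.25, 0.4, 0.7, 0.9, 1.3, 1.8]
r = count_correlation(train, train, t_max=2.0, bin_size=0.5)
```

Note that the value depends on `bin_size`, which is precisely the sensitivity to bin width that the abstract warns about for large time bins.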
On the Capacity of Pairwise Collaborative Networks
Astaneh, Saeed A; Behroozi, Hamid
2008-01-01
We derive expressions for the achievable rate region of a collaborative coding scheme in a two-transmitter, two-receiver Pairwise Collaborative Network (PCN), where one transmitter-receiver pair, the relay pair, assists the other, the source pair, by partially decoding and forwarding the transmitted message to the intended receiver. The relay pair provides such assistance while handling a private message of its own. We assume that users can use past channel outputs and can transmit and receive at the same time and in the same frequency band. In this collaborative scheme, the transmitter of the source pair splits its information into two independent parts. The relay pair employs decode-and-forward coding to assist the source pair in delivering a part of its message: it re-encodes the decoded message along with the private message intended for the receiver of the relay pair, and broadcasts the result. The receiver of the relay pair decodes both messages, retrieves the private me...
Predicting community composition from pairwise interactions
Friedman, Jonathan; Higgins, Logan; Gore, Jeff
The ability to predict the structure of complex, multispecies communities is crucial for understanding the impact of species extinction and invasion on natural communities, as well as for engineering novel, synthetic communities. Communities are often modeled using phenomenological models, such as the classical generalized Lotka-Volterra (gLV) model. While a lot of our intuition comes from such models, their predictive power has rarely been tested experimentally. To directly assess the predictive power of this approach, we constructed synthetic communities comprised of up to 8 soil bacteria. We measured the outcome of competition between all species pairs, and used these measurements to predict the composition of communities composed of more than 2 species. The pairwise competitions resulted in a diverse set of outcomes, including coexistence, exclusion, and bistability, and displayed evidence for both interference and facilitation. Most pair outcomes could be captured by the gLV framework, and the composition of multispecies communities could be predicted for communities composed solely of such pairs. Our results demonstrate the predictive ability and utility of simple phenomenology, which enables accurate predictions in the absence of mechanistic details.
Statistical pairwise interaction model of stock market
Bury, Thomas
2013-03-01
Financial markets are a classical example of complex systems, as they are composed of many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results, but at the price of restrictive assumptions on the market dynamics, or as agent-based models with rules designed to recover some empirical behaviors. Here we show that the pairwise model is actually a statistically consistent model of the observed first and second moments of the stock orientations, without making such restrictive assumptions. This is done with an approach based only on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought of as an Ising model on a complex network with interaction strengths scaling as the inverse of the system size. This has potentially important implications, since many properties of such a model are already known and some techniques of spin glass theory can be straightforwardly applied. Typical behaviors, such as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, and order-disorder, could find an explanation in this picture.
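The moment-matching idea behind such pairwise (Ising-type) models can be illustrated without any fitting machinery: binarize the daily returns into orientations $s_i \in \{-1, +1\}$ and compute the first and second moments $\langle s_i\rangle$ and $\langle s_i s_j\rangle$ that the model is constrained to reproduce. A minimal sketch with made-up prices (the data and the sign convention for flat days are assumptions):

```python
def binary_orientations(prices):
    # Map daily closing prices to binary daily returns s_t in {-1, +1}
    # (the "rough simplification" referred to in the abstract).
    return [1 if b > a else -1 for a, b in zip(prices, prices[1:])]

def moments(series_by_stock):
    # First moments <s_i> and second moments <s_i s_j> over time:
    # exactly the statistics a pairwise maximum entropy model matches.
    T = len(series_by_stock[0])
    mean = [sum(s) / T for s in series_by_stock]
    corr = [[sum(si[t] * sj[t] for t in range(T)) / T
             for sj in series_by_stock] for si in series_by_stock]
    return mean, corr

prices_a = [100, 101, 100.5, 101.2, 102.0]   # hypothetical stock A
prices_b = [50, 50.4, 50.9, 50.2, 50.8]      # hypothetical stock B
s = [binary_orientations(prices_a), binary_orientations(prices_b)]
mean, corr = moments(s)
```

The diagonal of `corr` is always 1 (an orientation is perfectly correlated with itself), and the matrix is symmetric by construction.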
The putative old, nearby cluster Lodén 1 does not exist
Han, Eunkyu; Wright, Jason T
2016-01-01
Astronomers have access to precious few nearby, middle-aged benchmark star clusters. Within 500 pc, there are only NGC 752 and Ruprecht 147 (R147), at 1.5 and 3 Gyr respectively. The Database for Galactic Open Clusters (WEBDA) also lists Lodén 1 as a 2 Gyr cluster at a distance of 360 pc. If this is true, Lodén 1 could become a useful benchmark cluster. This work details our investigation of Lodén 1. We assembled archival astrometry (PPMXL) and photometry (2MASS, Tycho-2, APASS), and acquired medium resolution spectra for radial velocity measurements with the Robert Stobie Spectrograph (RSS) at the Southern African Large Telescope. We observed no sign of a cluster main-sequence turnoff or red giant branch amongst all stars in the field brighter than $J < 11$. Considering the 29 stars identified by L.O. Lodén and listed on SIMBAD as the members of Lodén 1, we found no compelling evidence of kinematic clustering in proper motion or radial velocity. Most of these candidates are A stars and...
Prediction of spatio-temporal patterns of neural activity from pairwise correlations
Marre, Olivier; Boustani, Sami El; Fregnac, Yves; Destexhe, Alain
2009-01-01
We designed a model-based analysis to predict the occurrence of population patterns in distributed spiking activity. Using a maximum entropy principle with a Markovian assumption, we obtain a model that accounts for both spatial and temporal pairwise correlations among neurons. This model is tested on data generated with a Glauber spin-glass system and is shown to correctly predict the occurrence probabilities of spatio-temporal patterns significantly better than Ising models taking into acco...
An Adaptive Algorithm for Pairwise Comparison-based Preference Measurement
Meissner, Martin; Decker, Reinhold; Scholz, Sören W.
2011-01-01
The Pairwise Comparison‐based Preference Measurement (PCPM) approach has been proposed for products featuring a large number of attributes. In the PCPM framework, a static two‐cyclic design is used to reduce the number of pairwise comparisons. However, adaptive questioning routines that maximize...
Linked open data creating knowledge out of interlinked data : results of the LOD2 project
Bryl, Volha; Tramp, Sebastian
2014-01-01
Linked Open Data (LOD) is a pragmatic approach for realizing the Semantic Web vision of making the Web a global, distributed, semantics-based information system. This book presents an overview on the results of the research project “LOD2 -- Creating Knowledge out of Interlinked Data”. LOD2 is a large-scale integrating project co-funded by the European Commission within the FP7 Information and Communication Technologies Work Program. Commencing in September 2010, this 4-year project comprised leading Linked Open Data research groups, companies, and service providers from across 11 European countries and South Korea. The aim of this project was to advance the state-of-the-art in research and development in four key areas relevant for Linked Data, namely 1. RDF data management; 2. the extraction, creation, and enrichment of structured RDF data; 3. the interlinking and fusion of Linked Data from different sources and 4. the authoring, exploration and visualization of Linked Data.
LOD score exclusion analyses for candidate disease susceptibility genes using case-parents design
DENG Hongwen; GAO Guimin
2006-01-01
The focus of almost all association studies of candidate genes is to test for their importance. We recently developed a LOD score approach that can be used to test against the importance of candidate genes for complex diseases and quantitative traits in random samples. As a complementary method to regular association analyses, our LOD score approach is powerful but still affected by population admixture, though it is more conservative. To control the confounding effect of population heterogeneity, we develop here a LOD score exclusion analysis using the case-parents design, the basic design of the transmission disequilibrium test (TDT) approach, which is immune to population admixture. In the analysis, specific genetic effects and inheritance models at candidate genes can be analyzed, and if a LOD score is ≤ −2.0, the locus can be excluded from having an effect larger than that specified. Simulations show that this approach has reasonable power to exclude a candidate gene with small genetic effects if it is not a disease susceptibility locus (DSL), with sample sizes often employed in TDT studies. Similar to association analyses with the TDT in nuclear families, our exclusion analyses are generally not affected by population admixture. The exclusion analyses may be implemented to rule out candidate genes with no or minor genetic effects as supplemental analyses for the TDT. The utility of the approach is illustrated with an application to test the importance of the vitamin D receptor (VDR) gene underlying differential risk of osteoporosis.
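For orientation, a LOD score is the base-10 logarithm of a likelihood ratio against free recombination (theta = 0.5); scores above +3 conventionally support linkage, while the ≤ −2 threshold above supports exclusion. The toy sketch below uses a binomial model for fully informative meioses, a simplification that is not the paper's exclusion model.

```python
from math import log10

def lod_score(recombinants, total, theta):
    # LOD = log10 [ L(data | theta) / L(data | theta = 0.5) ]
    # for `recombinants` recombination events out of `total`
    # fully informative meioses (toy model, not the paper's).
    r, n = recombinants, total

    def loglik(t):
        return r * log10(t) + (n - r) * log10(1 - t)

    return loglik(theta) - loglik(0.5)

linked = lod_score(recombinants=2, total=20, theta=0.1)    # strong linkage
excluded = lod_score(recombinants=10, total=20, theta=0.1) # exclusion
```

With 2 recombinants in 20 meioses the score exceeds the +3 linkage threshold, while 10 recombinants drive it below the −2 exclusion threshold used in the abstract.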
Improving the consistency of multi-LOD CityGML datasets by removing redundancy
Biljecki, F.; Ledoux, H.; Stoter, J.E.
2014-01-01
The CityGML standard enables the modelling of some topological relationships, and the representation in multiple levels of detail (LODs). However, both concepts are rarely utilised in reality. In this paper we investigate the linking of corresponding geometric features across multiple representation
Heuristic Reduction Algorithm Based on Pairwise Positive Region
QI Li; LIU Yu-shu
2007-01-01
To guarantee an optimal reduct set, a heuristic reduction algorithm is proposed that considers the distinguishing information between the members of each pair of decision classes. First, the pairwise positive region is defined, based on which the pairwise significance measure is calculated between the members of each pair of classes. Finally, the weighted pairwise significance of an attribute is used as the attribute reduction criterion, which indicates the necessity of attributes very well. By introducing a noise tolerance factor, the new algorithm can tolerate noise to some extent. Experimental results show the advantages of the novel heuristic reduction algorithm over the traditional attribute-dependency-based algorithm.
Efficient Simplification Methods for Generating High Quality LODs of 3D Meshes
Muhammad Hussain
2009-01-01
Two simplification algorithms are proposed for the automatic decimation of polygonal models and for generating their LODs. Each algorithm orders vertices according to their priority values and then removes them iteratively. To set the priority value of each vertex, we introduce a new measure of geometric fidelity, exploiting the normal field of the vertex's one-ring neighborhood, that reflects well the local geometric features of the vertex. After a vertex is selected, other measures of geometric distortion, based on normal field deviation and a distance measure, decide which of the edges incident on the vertex is collapsed to remove it. The collapsed edge is substituted with a new vertex whose position is found by minimizing the local quadric error measure. A comparison with state-of-the-art algorithms reveals that the proposed algorithms are simple to implement, are computationally more efficient, generate LODs of better quality, and preserve salient features even after drastic simplification. The methods are useful for applications such as 3D computer games and virtual reality, where the focus is on fast running time, reduced memory overhead, and high-quality LODs.
On asymptotic normality of pseudo likelihood estimates for pairwise interaction processes
Jensen, Jens Ledet; Künsch, Hans R.
1994-01-01
We consider point processes defined through a pairwise interaction potential and admitting a two-dimensional sufficient statistic. It is shown that the pseudo maximum likelihood estimate can be stochastically normed so that the limiting distribution is a standard normal distribution. This result is true irrespective of the possible existence of phase transitions. The work here is an extension of the work of Guyon and Künsch (1992, Lecture Notes in Statist., 74, Springer, New York) and is based on viewing a point process interchangeably as a lattice field. © 1994 The Institute of Statistical...
Pairwise interaction pattern in the weighted communication network
Xu, Xiao-Ke; Wu, Ye; Small, Michael
2012-01-01
Although recent studies show that both topological structures and human dynamics can strongly affect information spreading on social networks, the complicated interplay of these two significant factors has not yet been clearly described. In this work, we find a strong pairwise interaction by analyzing the weighted network generated from a short-message communication dataset of a Chinese telecommunications provider. The pairwise interaction bridges the network topological structure and human interaction dynamics: it can promote local information spreading between pairs of communication partners and, in contrast, can also suppress global information (e.g., rumor) cascades and spreading. In addition, the pairwise interaction is the basic pattern of group conversations, and it can greatly reduce the waiting time of communication events between a pair of intimate friends. Our findings are also helpful for communication operators to design novel tariff strategies and optimize their communication services.
Grouped pair-wise comparison for subjective sound quality evaluation
MAO Dongxing; GAO Yali; YU Wuzhou; WANG Zuomin
2006-01-01
In subjective sound quality evaluation by pair-wise comparison, test time grows with the square of the number of sound stimuli. For this reason, subjective evaluation of a large number of stimuli is difficult to carry out with the pair-wise comparison method. A grouped pair-wise comparison (GPC) method is proposed to greatly decrease the time and difficulty of subjective comparison tests, in which the stimuli in the whole evaluation corpus are divided into N test groups, with reference-link stimuli configured in each group. From the subjective results of each group, the final results for all stimuli are reconstructed, and their perceptual sound quality attributes can be analyzed. Taking car interior noise as an example, the realization of subjective sound quality evaluation with the GPC method is introduced. The results of the GPC evaluation are in good agreement with those obtained from the paired comparison and semantic differential methods.
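The quadratic growth referred to above, and the savings from grouping, can be sketched with simple counting. The group-size formula and the handling of the shared reference stimuli below are illustrative assumptions, not the paper's exact design.

```python
def full_pairwise_trials(n):
    # Every stimulus compared with every other one: n(n-1)/2,
    # i.e. growth with the square of the number of stimuli.
    return n * (n - 1) // 2

def grouped_pairwise_trials(n, groups, n_ref):
    # GPC sketch: split n stimuli into `groups` groups, pad each
    # with `n_ref` shared reference-link stimuli, and run pair-wise
    # comparisons only within each group.
    per_group = -(-n // groups) + n_ref   # ceil(n/groups) + references
    return groups * per_group * (per_group - 1) // 2

full = full_pairwise_trials(40)
grouped = grouped_pairwise_trials(40, groups=4, n_ref=2)
```

For 40 stimuli the full design needs 780 comparisons, while 4 groups of 10 with 2 reference stimuli each need only 264, which is where the claimed time reduction comes from.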
Pairwise entanglement and local polarization of Heisenberg model
2008-01-01
The characteristics of pairwise entanglement and local polarization (LP) are discussed by studying the ground state (states) of the Heisenberg XX model. The results show that: the ground state (states) is (are) composed of the micro states with the minimal polarization (0 for even qubits and 1/2 for odd qubits); LP and the probability of the micro state have an intimate relation, i.e. the stronger the LP, the smaller the probability, and the same LP corresponds to the same probability; the pairwise entanglement of the ground state is the largest among all eigenvectors. It is found that the pairwise entanglement is decreased by the state degeneracy and the system size. The concurrence approaches a fixed value of about 0.3412 (for an odd-qubit chain) or 0.3491 (for an even-qubit chain) if the qubit number is large enough.
Dynamics of pairwise motions in the Cosmic Web
Hellwing, Wojciech A
2014-01-01
We present results of an analysis of dark matter (DM) pairwise velocity statistics in different Cosmic Web environments. We use the DM velocity and density field from the Millennium 2 simulation together with the NEXUS+ algorithm to segment the simulation volume into voxels uniquely identifying one of four possible environments: nodes, filaments, walls or cosmic voids. We show that the PDFs of the mean infall velocities $v_{12}$, as well as their spatial dependence together with the perpendicular and parallel velocity dispersions, bear a significant signal of the large-scale structure environment in which DM particle pairs are embedded. The pairwise flows are notably colder and have smaller mean magnitude in walls and voids, compared to the much denser environments of filaments and nodes. We discuss our results, indicating that they are consistent with simple theoretical predictions for pairwise motions as induced by the gravitational instability mechanism. Our results indicate that the Cosmic Web elements ...
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating the behavior of large economic systems.
A scalable pairwise class interaction framework for multidimensional classification
Arias, Jacinto; Gámez, Jose A.; Nielsen, Thomas Dyhre;
2016-01-01
We present a general framework for multidimensional classification that captures the pairwise interactions between class variables. The pairwise class interactions are encoded using a collection of base classifiers (Phase 1), for which the class predictions are combined in a Markov random field ... of the framework, and we test the behavior of the different scalability strategies proposed. A comparison with other state-of-the-art multidimensional classifiers shows that the proposed framework either outperforms or is competitive with the tested straw-man methods.
ANIMATION STRATEGIES FOR SMOOTH TRANSFORMATIONS BETWEEN DISCRETE LODS OF 3D BUILDING MODELS
M. Kada
2016-06-01
The cartographic 3D visualization of urban areas has experienced tremendous progress over the last years. An increasing number of applications operate interactively in real time and thus require advanced techniques to improve the quality and time response of dynamic scenes. The main focus of this article is a discussion of strategies for smooth transformation between two discrete levels of detail (LOD) of 3D building models that are represented as restricted triangle meshes. Because the operation order determines the geometrical and topological properties of the transformation process as well as its visual perception by a human viewer, three different strategies are proposed and subsequently analyzed. The simplest one orders transformation operations by the length of the edges to be collapsed, while the other two strategies introduce a general transformation direction in the form of a moving plane. This plane either pushes the nodes that need to be removed, e.g. during the transformation of a detailed LOD model to a coarser one, towards the main building body, or triggers the edge collapse operations used as transformation paths for the cartographic generalization.
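The simplest strategy described above, ordering edge-collapse operations by edge length, can be sketched as a one-shot priority queue. This is illustrative only: a real implementation would recompute edge lengths as collapses change the mesh, and the coordinates and edges below are made up.

```python
import heapq

def collapse_order(edges, coords):
    # Order edge-collapse operations by the length of the edge
    # to be collapsed, shortest first (the article's simplest strategy).
    def length(e):
        (x1, y1, z1), (x2, y2, z2) = coords[e[0]], coords[e[1]]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2) ** 0.5

    heap = [(length(e), e) for e in edges]
    heapq.heapify(heap)
    order = []
    while heap:
        _, e = heapq.heappop(heap)
        order.append(e)
    return order

# Hypothetical vertex coordinates and edges of a tiny mesh fragment:
coords = {0: (0, 0, 0), 1: (1, 0, 0), 2: (0, 2, 0), 3: (0, 0, 3)}
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]
order = collapse_order(edges, coords)  # shortest edge (0, 1) comes first
```

Collapsing short edges first removes small details before large ones, which is what makes the length ordering a plausible proxy for visual importance.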
Highly sensitive lactate biosensor by engineering chitosan/PVI-Os/CNT/LOD network nanocomposite.
Cui, Xiaoqiang; Li, Chang Ming; Zang, Jianfeng; Yu, Shucong
2007-06-15
A novel chitosan/PVI-Os (polyvinylimidazole-Os)/CNT (carbon nanotube)/LOD (lactate oxidase) network nanocomposite was constructed on a gold electrode for the detection of lactate. The composite was nanoengineered by selecting matched material components and optimizing the composition ratio to produce a superior lactate sensor. Positively charged chitosan and PVI-Os were used as the matrix and the mediator, to immobilize the negatively charged LOD and to enhance electron transfer, respectively. CNTs were introduced as the essential component for the network nanostructure of the composite. FESEM (field emission scanning electron microscopy) and electrochemical characterization demonstrated that the CNTs behaved as cross-linkers networking PVI and chitosan, owing to their nanoscale dimensions and negative charge. This significantly improved the conductivity, stability and electroactivity for the detection of lactate. The standard deviation of the sensor without CNTs in the composite was greatly reduced from 19.6% to 4.9% by the addition of CNTs. Under optimized conditions, the sensitivity and detection limit of the lactate sensor were 19.7 μA mM⁻¹ cm⁻² and 5 μM, respectively. The sensitivity is remarkably improved in comparison with recently reported values of 0.15-3.85 μA mM⁻¹ cm⁻². This nanoengineering approach of selecting matched components to form a network nanostructure could be extended to other enzyme biosensors, and has broad potential applications in diagnostics, the life sciences and food analysis.
Dube, M.P.; Kibar, Z.; Rouleau, G.A. [McGill Univ., Quebec (Canada)]; et al.
1997-03-01
Hereditary spastic paraplegia (HSP) is a degenerative disorder of the motor system, defined by progressive weakness and spasticity of the lower limbs. HSP may be inherited as an autosomal dominant (AD), autosomal recessive, or X-linked trait. AD HSP is genetically heterogeneous, and three loci have been identified so far: SPG3 maps to chromosome 14q, SPG4 to 2p, and SPG4a to 15q. We have undertaken linkage analysis of 21 uncomplicated AD families against the three AD HSP loci. We report significant linkage to the SPG4 locus for three of our families and exclude several families by multipoint linkage analysis. We used linkage information from several different research teams to evaluate the statistical probability of linkage to the SPG4 locus for uncomplicated AD HSP families, and established from Bayesian statistics the critical LOD-score value necessary for confirmation of linkage to the SPG4 locus. In addition, we calculated empirical P-values for the LOD scores obtained for all families using computer simulation methods. The power to detect significant linkage, as well as type I error probabilities, were evaluated. This combined analytical approach permitted conclusive linkage analyses on small to medium-size families, under the restriction of genetic heterogeneity. 19 refs., 1 fig., 1 tab.
CA-LOD: Collision Avoidance Level of Detail for Scalable, Controllable Crowds
Paris, Sébastien; Gerdelan, Anton; O'Sullivan, Carol
The new wave of computer-driven entertainment technology throws audiences and game players into massive virtual worlds where entire cities are rendered in real time. Computer animated characters run through inner-city streets teeming with pedestrians, all fully rendered with 3D graphics, animations and particle effects, and linked to 3D sound effects to produce more realistic and immersive computer-hosted entertainment experiences than ever before. Computing all of this detail at once is enormously computationally expensive, and game designers, as a rule, have sacrificed behavioural realism in favour of better graphics. In this paper we propose a new Collision Avoidance Level of Detail (CA-LOD) algorithm that allows games to support huge crowds in real time with the appearance of more intelligent behaviour. We propose two collision avoidance models used for two different CA-LODs: a fuzzy steering model that prioritizes performance, and a geometric steering model that maximizes realism. Mixing these approaches allows thousands of autonomous characters to be simulated in real time, resulting in a scalable but still controllable crowd.
Protein-protein interaction based on pairwise similarity
Zaki Nazar
2009-05-01
Background: Protein-protein interaction (PPI) is essential to most biological processes. Abnormal interactions may have implications in a number of neurological syndromes. Given that the association and dissociation of protein molecules is crucial, computational tools capable of effectively identifying PPI are desirable. In this paper, we propose a simple yet effective method to detect PPI based on pairwise similarity, using only the primary structure of the protein. The PPI based on Pairwise Similarity (PPI-PS) method represents each protein sequence by a vector of pairwise similarities against large subsequences of amino acids created by a shifting window that passes over concatenated protein training sequences. Each coordinate of this vector is typically the E-value of the Smith-Waterman score. These vectors are then used to compute the kernel matrix, which is exploited in conjunction with support vector machines. Results: To assess the ability of the proposed method to recognize the difference between "interacted" and "non-interacted" protein pairs, we applied it to different datasets from the available yeast Saccharomyces cerevisiae protein interaction data. The proposed method achieved reasonable improvement over the existing state-of-the-art methods for PPI prediction. Conclusion: The pairwise similarity score provides a relevant measure of similarity between protein sequences. This similarity incorporates biological knowledge about proteins and is extremely powerful when combined with a support vector machine to predict PPI.
On the extraction of weights from pairwise comparison matrices
Dijkstra, Theo K.
2013-01-01
We study properties of weight extraction methods for pairwise comparison matrices that minimize suitable measures of inconsistency, 'average error gravity' measures, including one that leads to the geometric row means. The measures share essential global properties with the AHP inconsistency measure.
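The geometric row-mean extraction mentioned above admits a very short sketch. This is a minimal illustration, not the paper's implementation; the function name and the example matrix are ours. For a perfectly consistent matrix with entries A[i, j] = w_i / w_j, the method recovers the weights exactly:

```python
import numpy as np

def geometric_row_mean_weights(A):
    """Weights from a pairwise comparison matrix A (A[i, j] estimates
    w_i / w_j) via geometric row means, normalized to sum to one."""
    A = np.asarray(A, dtype=float)
    g = np.prod(A, axis=1) ** (1.0 / A.shape[1])  # geometric mean of each row
    return g / g.sum()

# A perfectly consistent 3x3 matrix built from weights (0.5, 0.3, 0.2)
w = np.array([0.5, 0.3, 0.2])
A = w[:, None] / w[None, :]
print(geometric_row_mean_weights(A))  # recovers [0.5, 0.3, 0.2]
```

For an inconsistent matrix the same formula still returns a normalized weight vector, which is what the inconsistency measures in the paper score.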
Pairwise comparison versus Likert scale for biomedical image assessment.
Phelps, Andrew S; Naeger, David M; Courtier, Jesse L; Lambert, Jack W; Marcovici, Peter A; Villanueva-Meyer, Javier E; MacKenzie, John D
2015-01-01
Biomedical imaging research relies heavily on the subjective and semi-quantitative reader analysis of images. Current methods are limited by interreader variability and fixed upper and lower limits. The purpose of this study was to compare the performance of two assessment methods, pairwise comparison and Likert scale, for improved analysis of biomedical images. A set of 10 images with varying degrees of image sharpness was created by digitally blurring a normal clinical chest radiograph. Readers assessed the degree of image sharpness using two different methods: pairwise comparison and a 10-point Likert scale. Reader agreement with actual chest radiograph sharpness was calculated for each method by use of the Lin concordance correlation coefficient (CCC). Reader accuracy was highest for pairwise comparison (CCC, 1.0) and ranked Likert (CCC, 0.99) scores and lowest for nonranked Likert scores (CCC, 0.83). Accuracy improved slightly when readers repeated their assessments (CCC, 0.87) or had reference images available (CCC, 0.91). Pairwise comparison and ranked Likert scores yield more accurate reader assessments than nonranked Likert scores.
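The agreement statistic used in the study above, Lin's concordance correlation coefficient, is easy to compute directly. The sketch below uses the standard population-moment form of the CCC; the toy "reader" data are ours, not the study's:

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two ratings:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxy = ((x - mx) * (y - my)).mean()  # population covariance
    return 2 * sxy / (x.var() + y.var() + (mx - my) ** 2)

truth = np.arange(1, 11)                             # true sharpness ranks 1..10
reader = np.array([1, 2, 3, 5, 4, 6, 7, 8, 10, 9])   # a reader with two swaps
print(lin_ccc(truth, truth))   # 1.0: perfect agreement
print(lin_ccc(truth, reader))  # slightly below 1.0
```

Unlike the Pearson correlation, the CCC also penalizes systematic shifts between the two ratings through the mean-difference term in the denominator.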
Modeling Expressed Emotions in Music using Pairwise Comparisons
Madsen, Jens; Nielsen, Jens Brehm; Jensen, Bjørn Sand
2012-01-01
We introduce a two-alternative forced-choice experimental paradigm to quantify expressed emotions in music using the two wellknown arousal and valence (AV) dimensions. In order to produce AV scores from the pairwise comparisons and to visualize the locations of excerpts in the AV space, we...
The Waring rank of the sum of pairwise coprime monomials
Carlini, Enrico; Geramita, Anthony V
2011-01-01
In this paper we compute the Waring rank of any polynomial of the form F=M_1+...+M_r where the M_i are pairwise coprime monomials, i.e., GCD(M_i,M_j)=1. In particular, we show that rk(F)=rk(M_1)+...+rk(M_r).
Simulations of the Pairwise Kinematic Sunyaev-Zeldovich Signal
Flender, Samuel; Finkel, Hal; Habib, Salman; Heitmann, Katrin; Holder, Gilbert
2015-01-01
The pairwise kinematic Sunyaev-Zel'dovich (kSZ) signal from galaxy clusters is a probe of their line-of-sight momenta, and thus a potentially valuable source of cosmological information. In addition to the momenta, the amplitude of the measured signal depends on the properties of the intra-cluster gas and observational limitations such as errors in determining cluster centers and redshifts. In this work we simulate the pairwise kSZ signal of clusters at z<1, using the output from a cosmological N-body simulation and including the properties of the intra-cluster gas via a model that can be varied in post-processing. We find that modifications to the gas profile due to star formation and feedback reduce the pairwise kSZ amplitude of clusters by ~50%, relative to the naive `gas traces mass' assumption. We further demonstrate that offsets between the true and observer-selected centers of clusters can reduce the overall amplitude of the pairwise kSZ signal by up to 10%, while errors in the redshifts can lead to...
Dynamics of pairwise motions in the Cosmic Web
Hellwing, Wojciech A.
2016-10-01
We present results of an analysis of dark matter (DM) pairwise velocity statistics in different Cosmic Web environments. We use the DM velocity and density field from the Millennium 2 simulation together with the NEXUS+ algorithm to segment the simulation volume into voxels, uniquely identifying each as one of the four possible environments: nodes, filaments, walls or cosmic voids. We show that the PDFs of the mean infall velocities v12, as well as their spatial dependence, together with the perpendicular and parallel velocity dispersions, bear a significant signal of the large-scale structure environment in which DM particle pairs are embedded. The pairwise flows are notably colder and have a smaller mean magnitude in walls and voids, compared to the much denser environments of filaments and nodes. We discuss our results, indicating that they are consistent with simple theoretical predictions for pairwise motions induced by the gravitational instability mechanism. Our results indicate that the Cosmic Web elements are coherent dynamical entities rather than just temporary geometrical associations. In addition, it should be possible to observationally test various Cosmic Web finding algorithms by segmenting available peculiar velocity data and studying the resulting pairwise velocity statistics.
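The mean infall velocity v12 used in analyses like the one above is the average of the pair relative velocity projected onto the pair separation vector. The brute-force estimator below is a sketch under our own conventions (negative values mean infall; the toy converging flow is ours), not the paper's pipeline:

```python
import numpy as np

def mean_infall_velocity(pos, vel, r_min, r_max):
    """Estimate v12 for all particle pairs with separation in [r_min, r_max):
    mean of (v_j - v_i) projected on the unit separation vector
    (negative = pairs approaching each other)."""
    n = len(pos)
    proj = []
    for i in range(n):
        for j in range(i + 1, n):
            dr = pos[j] - pos[i]
            r = np.linalg.norm(dr)
            if r_min <= r < r_max:
                proj.append(np.dot(vel[j] - vel[i], dr / r))
    return np.mean(proj)

rng = np.random.default_rng(2)
pos = rng.uniform(0, 10, size=(200, 3))
vel = -0.1 * (pos - 5.0)  # toy flow converging on the box centre
print(mean_infall_velocity(pos, vel, 1.0, 3.0))  # negative: net infall
```

Production codes replace the O(N^2) loop with pair counting on trees or grids, but the projected statistic is the same.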
Pairwise structure alignment specifically tuned for surface pockets and interaction interfaces
Cui, Xuefeng
2015-09-09
(PROSTA) family of pairwise structure alignment methods [1, 2] that address the fragmentation issue of pockets and interfaces, and automatically align interfaces between any types of biological complexes. Our PROSTA structure alignment methods have two critical advantages compared with existing structure alignment methods. First, our methods are completely sequence-order independent, which is critical to the success of pairwise pocket and interface structure alignments. This is achieved by introducing contact groups that are not limited to backbone fragments, and by employing a maximum weighted bipartite matching solver from the beginning of the alignment process. In addition, our methods incorporate similarities of sequentially and structurally remote residues that potentially model the topology of the global structure. Compared with existing methods that focus on local structure or whole-sequence similarities, topological similarities are more reliable for finding near-optimal structure alignments in the initial alignment state. As a result, a significant number of similar pockets and interfaces are newly discovered, and the literature also supports that similar functions are shared between the biological complexes in our case studies. The PROSTA web server and source code are publicly available at "http://www.cbrc.kaust.edu.sa/prosta/".
Singularity Processing Method of Microstrip Line Edge Based on LOD-FDTD
Lei Li
2014-01-01
In order to improve the accuracy and efficiency of analyzing microstrip structures, a singularity processing method is proposed, theoretically and experimentally, based on the fundamental locally one-dimensional finite-difference time-domain (LOD-FDTD) method with second-order temporal accuracy (denoted FLOD2-FDTD). The proposed method can greatly improve the performance of the FLOD2-FDTD even when the conductor is embedded into more than half of the cell, by means of a coordinate transformation. The experimental results showed that the proposed method can achieve higher accuracy when the time step size is less than or equal to 5 times that allowed by the Courant-Friedrichs-Lewy (CFL) condition. In comparison with previously reported methods, the proposed method for calculating the electromagnetic field near the microstrip line edge not only improves efficiency, but also provides higher accuracy.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
A predictive model of music preference using pairwise comparisons
Jensen, Bjørn Sand; Gallego, Javier Saez; Larsen, Jan
2012-01-01
Music recommendation is an important aspect of many streaming services and multimedia systems; however, it is typically based on so-called collaborative filtering methods. In this paper we consider the recommendation task from a personal viewpoint and examine to which degree music preference can be elicited and predicted using simple and robust queries such as pairwise comparisons. We propose to model, and in turn predict, the pairwise music preference using a very flexible model based on Gaussian Process priors, for which we describe the required inference. We further propose a specific covariance function and evaluate the predictive performance on a novel dataset. In a recommendation-style setting we obtain a leave-one-out accuracy of 74% compared to 50% with random predictions, showing potential for further refinement and evaluation.
Marc Santolini
The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by Position Weight Matrices (PWMs), in which each DNA base pair contributes independently to the transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions, and is generally inaccurate: analysing fly and mouse in vivo ChIP-seq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and dominantly located between consecutive base pairs in the flanking region of the TFBS. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the obtained accurate description of TFBS statistics. The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting
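The additive energies described above can be sketched in a few lines. This is an illustrative parametrization only: the field names, the coupling tensor J, and the random parameters are our assumptions, not the paper's fitted model. Setting all pairwise couplings to zero recovers the plain PWM energy:

```python
import numpy as np

BASES = "ACGT"
IDX = {b: i for i, b in enumerate(BASES)}

def pwm_energy(site, pwm):
    """Additive PWM binding energy: one term per position."""
    return sum(pwm[p][IDX[b]] for p, b in enumerate(site))

def pim_energy(site, pwm, J):
    """Pairwise interaction model: the PWM terms plus a coupling
    J[p][q][a][b] for each unordered pair of positions p < q."""
    e = pwm_energy(site, pwm)
    L = len(site)
    for p in range(L):
        for q in range(p + 1, L):
            e += J[p][q][IDX[site[p]]][IDX[site[q]]]
    return e

rng = np.random.default_rng(0)
L = 6
pwm = rng.normal(size=(L, 4))            # toy position-specific energies
J = rng.normal(scale=0.1, size=(L, L, 4, 4))  # toy pairwise couplings
print(pwm_energy("ACGTAC", pwm), pim_energy("ACGTAC", pwm, J))
```

In the actual maximum-entropy fit, pwm and J would be learned from observed binding site statistics rather than drawn at random.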
Pairwise KLT-Based Compression for Multispectral Images
Nian, Yongjian; Liu, Yu; Ye, Zhen
2016-12-01
This paper presents a pairwise KLT-based compression algorithm for multispectral images. Although the KLT has been widely employed for spectral decorrelation, its complexity is high if it is performed globally on the multispectral image. To solve this problem, this paper presents a pairwise KLT for spectral decorrelation, in which the KLT is performed on only two bands at a time. First, a KLT is performed on the first two adjacent bands, yielding two principal components. Secondly, one remaining band and the principal component (PC) with the larger eigenvalue are selected, and a KLT is performed on this new couple. This procedure is repeated until the last band is reached. Finally, the optimal truncation technique of post-compression rate-distortion optimization is employed for the rate allocation of all the PCs, followed by embedded block coding with optimized truncation to generate the final bit-stream. Experimental results show that the proposed algorithm outperforms the algorithm based on a global KLT. Moreover, the pairwise KLT structure can significantly reduce the complexity compared with a global KLT.
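The chained two-band decorrelation described above can be sketched as follows. This is a minimal sketch under our own naming, assuming flattened bands and omitting the rate-allocation and entropy-coding stages entirely; each step rotates the current pair into its 2x2 KLT basis and carries the dominant component forward:

```python
import numpy as np

def klt_pair(b1, b2):
    """KLT (PCA rotation) of two flattened image bands, returning the
    (larger-eigenvalue, smaller-eigenvalue) principal components."""
    X = np.stack([b1.ravel() - b1.mean(), b2.ravel() - b2.mean()])
    C = X @ X.T / X.shape[1]        # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(C)  # eigenvalues in ascending order
    pcs = vecs.T @ X                # rotate into the KLT basis
    return pcs[1], pcs[0]

def pairwise_klt(bands):
    """Chain 2x2 KLTs over a band list, carrying the dominant PC forward."""
    carry, residuals = bands[0], []
    for band in bands[1:]:
        carry, lo = klt_pair(carry, band)
        residuals.append(lo)
    return carry, residuals

rng = np.random.default_rng(1)
base = rng.normal(size=(32, 32))
bands = [base + 0.05 * rng.normal(size=(32, 32)) for _ in range(4)]
top, low = pairwise_klt(bands)  # one dominant PC plus low-energy residuals
```

Each 2x2 rotation is O(N) in the number of pixels, which is the source of the complexity advantage over a full global KLT across all bands.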
Bisheng Yang
2016-12-01
Reconstructing building models at different levels of detail (LoDs) from airborne laser scanning point clouds is urgently needed for wide application, as this approach can balance user requirements against economic costs. Previous methods reconstruct building LoDs from the finest 3D building models rather than from point clouds, resulting in heavy costs and inflexible adaptivity. Scale space is a sound theory for the multi-scale representation of an object from a coarser level to a finer level. Therefore, this paper proposes a novel method to reconstruct buildings at different LoDs from airborne Light Detection and Ranging (LiDAR) point clouds based on an improved morphological scale space. The proposed method first extracts building candidate regions following the separation of ground and non-ground points. For each building candidate region, it generates a scale space by iteratively applying the improved morphological reconstruction with increasing scale, and constructs the corresponding topological relationship graphs (TRGs) across scales. Secondly, it robustly extracts building points by using features based on the TRG. Finally, it reconstructs each building at different LoDs according to the TRG. The experiments demonstrate that the proposed method robustly extracts buildings with details (e.g., door eaves and roof furniture), performs well in distinguishing buildings from vegetation and other objects, and automatically reconstructs building LoDs from the finest building points.
Aggregation of LoD 1 building models as an optimization problem
Guercke, R.; Götzelmann, T.; Brenner, C.; Sester, M.
3D city models offered by digital map providers typically consist of several thousands or even millions of individual buildings. Those buildings are usually generated in an automated fashion from high resolution cadastral and remote sensing data and can be very detailed. However, such a high degree of detail is not desirable in every application. One way to remove complexity is to aggregate individual buildings, simplify the ground plan and assign an appropriate average building height. This task is computationally complex because it includes the combinatorial optimization problem of determining which subset of the original set of buildings should best be aggregated to meet the demands of an application. In this article, we introduce approaches to express different aspects of the aggregation of LoD 1 building models in the form of Mixed Integer Programming (MIP) problems. The advantage of this approach is that for linear (and some quadratic) MIP problems, sophisticated software exists to find exact solutions (global optima) with reasonable effort. We also propose two different heuristic approaches based on the region growing strategy and evaluate their potential for optimization by comparing their performance to a MIP-based approach.
3D Building Modeling in LoD2 Using the CityGML Standard
Preka, D.; Doulamis, A.
2016-10-01
Over the last decade, scientific research has been increasingly focused on the third dimension in all fields, and especially in sciences related to geographic information, the visualization of natural phenomena and the visualization of the complex urban reality. The field of 3D visualization has achieved rapid development and dynamic progress, especially in urban applications, while the technical restrictions on the use of 3D information tend to subside due to advancements in technology. A variety of 3D modeling techniques and standards has already been developed, and they are gaining traction in a wide range of applications. One such modern standard is CityGML, which is open and allows for the sharing and exchanging of 3D city models. Within the scope of this study, key issues for the 3D modeling of spatial objects and cities are considered, and specifically the key elements and capabilities of the CityGML standard, which is used to produce a 3D model of 14 buildings that constitute a block in the municipality of Kaisariani, Athens, in Level of Detail 2 (LoD2), as well as the corresponding relational database. The proposed tool is based upon the 3DCityDB package in tandem with a geospatial database (PostgreSQL with the PostGIS 2.0 extension). The latter allows for the execution of complex queries regarding the spatial distribution of data. The system is implemented in order to facilitate a real-life scenario in a suburb of Athens.
Visualizing whole-brain DTI tractography with GPU-based Tuboids and LoD management.
Petrovic, Vid; Fallon, James; Kuester, Falko
2007-01-01
Diffusion Tensor Imaging (DTI) of the human brain, coupled with tractography techniques, enable the extraction of large-collections of three-dimensional tract pathways per subject. These pathways and pathway bundles represent the connectivity between different brain regions and are critical for the understanding of brain related diseases. A flexible and efficient GPU-based rendering technique for DTI tractography data is presented that addresses common performance bottlenecks and image-quality issues, allowing interactive render rates to be achieved on commodity hardware. An occlusion query-based pathway LoD management system for streamlines/streamtubes/tuboids is introduced that optimizes input geometry, vertex processing, and fragment processing loads, and helps reduce overdraw. The tuboid, a fully-shaded streamtube impostor constructed entirely on the GPU from streamline vertices, is also introduced. Unlike full streamtubes and other impostor constructs, tuboids require little to no preprocessing or extra space over the original streamline data. The supported fragment processing levels of detail range from texture-based draft shading to full raycast normal computation, Phong shading, environment mapping, and curvature-correct text labeling. The presented text labeling technique for tuboids provides adaptive, aesthetically pleasing labels that appear attached to the surface of the tubes. Furthermore, an occlusion query aggregating and scheduling scheme for tuboids is described that reduces the query overhead. Results for a tractography dataset are presented, and demonstrate that LoD-managed tuboids offer benefits over traditional streamtubes both in performance and appearance.
GENERATION OF MULTI-LOD 3D CITY MODELS IN CITYGML WITH THE PROCEDURAL MODELLING ENGINE RANDOM3DCITY
F. Biljecki
2016-09-01
The production and dissemination of semantic 3D city models is rapidly increasing benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtain 3D city models is to generate them with procedural modelling, which is – as we discuss in this paper – well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence at Github at http://github.com/tudelft3d/Random3Dcity.
Revision of Begomovirus taxonomy based on pairwise sequence comparisons
Brown, Judith K.
2015-04-18
Viruses of the genus Begomovirus (family Geminiviridae) are emergent pathogens of crops throughout the tropical and subtropical regions of the world. By virtue of having a small DNA genome that is easily cloned, and due to the recent innovations in cloning and low-cost sequencing, there has been a dramatic increase in the number of available begomovirus genome sequences. Even so, most of the available sequences have been obtained from cultivated plants and are likely a small and phylogenetically unrepresentative sample of begomovirus diversity, a factor constraining taxonomic decisions such as the establishment of operationally useful species demarcation criteria. In addition, problems in assigning new viruses to established species have highlighted shortcomings in the previously recommended mechanism of species demarcation. Based on the analysis of 3,123 full-length begomovirus genome (or DNA-A component) sequences available in public databases as of December 2012, a set of revised guidelines for the classification and nomenclature of begomoviruses is proposed. The guidelines primarily consider a) genus-level biological characteristics and b) results obtained using a standardized classification tool, Sequence Demarcation Tool, which performs pairwise sequence alignments and identity calculations. These guidelines are consistent with the recently published recommendations for the genera Mastrevirus and Curtovirus of the family Geminiviridae. Genome-wide pairwise identities of 91% and 94% are proposed as the demarcation thresholds for begomoviruses belonging to different species and strains, respectively. Procedures and guidelines are outlined for resolving conflicts that may arise when assigning species and strains to categories wherever the pairwise identity falls on or very near the demarcation threshold value.
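The proposed 91%/94% demarcation rule reduces to a simple threshold check once a genome-wide pairwise identity has been computed (e.g. by the Sequence Demarcation Tool). The sketch below is ours, and the handling of values exactly on a threshold is an assumption; the abstract notes that borderline cases need the paper's conflict-resolution procedures rather than a bare cutoff:

```python
def begomovirus_demarcation(identity):
    """Classify a genome-wide pairwise identity (as a fraction) against
    the proposed begomovirus thresholds: below 91% -> different species,
    91-94% -> same species but different strains, 94% and above -> same
    strain. Boundary handling here is an illustrative assumption."""
    if identity < 0.91:
        return "different species"
    if identity < 0.94:
        return "same species, different strains"
    return "same strain"

print(begomovirus_demarcation(0.89))  # different species
print(begomovirus_demarcation(0.92))  # same species, different strains
print(begomovirus_demarcation(0.97))  # same strain
```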
Dunkl operator, integrability, and pairwise scattering in rational Calogero model
Karakhanyan, David
2017-05-01
The integrability of the Calogero model can be expressed as a zero curvature condition using Dunkl operators. The corresponding flat connections are non-local gauge transformations, which map the Calogero wave functions to symmetrized wave functions of a set of N free particles, i.e., they relate the corresponding scattering matrices to each other. The integrability of the Calogero model implies that any k-particle scattering is reduced to successive pairwise scatterings. The consistency condition of this requirement is expressed by the analog of the Yang-Baxter relation.
Introducing a Pairwise Comparison Scale for UX Evaluations with Preschoolers
Zaman, Bieke
This paper describes the development and validation of a pairwise comparison scale for user experience (UX) evaluations with preschoolers. More particularly, the dimensionality, reliability and validity of the scale are discussed. The results of three experiments among almost 170 preschoolers show that user experience cannot be measured quantitatively as a multi-dimensional construct. In contrast, preschoolers’ UX should be measured directly as a one-dimensional higher order construct. This one-dimensional scale encompassing five general items proved to be internally consistent and valid providing evidence of a solid theory-based instrument to measure UX with preschoolers.
Pairwise Quantum Correlations for Superpositions of Dicke States
席政军; 熊恒娜; 李永明; 王晓光
2012-01-01
Pairwise correlation is an important property of multi-qubit states. For the two-qubit X states extracted from Dicke states and their superposition states, we obtain a compact expression for the quantum discord by numerical check. We then apply the expression to discuss the quantum correlation of the reduced two-qubit states of Dicke states and their superpositions, and the results are compared with those obtained from the entanglement of formation, which is a quantum entanglement measure.
Efficient preference learning with pairwise continuous observations and Gaussian Processes
Jensen, Bjørn Sand; Nielsen, Jens Brehm; Larsen, Jan
2011-01-01
Human preferences can effectively be elicited using pairwise comparisons, and in this paper the current state of the art based on binary decisions is extended by a new paradigm which allows subjects to convey their degree of preference as a continuous but bounded response. For this purpose, a novel ... the predictive performance under various noise conditions on a synthetic dataset. It is demonstrated that the learning rate of the novel paradigm is not only faster under ideal conditions, where continuous responses are naturally more informative than binary decisions, but also under adverse conditions where...
Pairwise velocities in the "Running FLRW" cosmological model
Bibiano, Antonio; Croton, Darren J.
2017-01-01
We present an analysis of the pairwise velocity statistics from a suite of cosmological N-body simulations describing the "Running Friedmann-Lemaître-Robertson-Walker" (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends ΛCDM with a time-evolving vacuum energy density, ρ_Λ. To enforce local conservation of matter, a time-evolving gravitational coupling is also included. Our results constitute the first study of velocities in the R-FLRW cosmology, and we also compare with other dark energy simulation suites, repeating the same analysis. We find a strong degeneracy between the pairwise velocity and σ8 at z = 0 for almost all scenarios considered, which remains even when we look back to epochs as early as z = 2. We also investigate various Coupled Dark Energy models, some of which show minimal degeneracy, and reveal interesting deviations from ΛCDM which could be readily exploited by future cosmological observations to test and further constrain our understanding of dark energy.
Hybrid pairwise likelihood analysis of animal behavior experiments.
Cattelan, Manuela; Varin, Cristiano
2013-12-01
The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons. © 2013, The International Biometric Society.
Pairwise velocities in the Halo Model: Luminosity and Scale Dependence
Slosar, A; Tasitsiomi, A; Slosar, Anze; Seljak, Uros; Tasitsiomi, Argyro
2006-01-01
We investigate the properties of the pairwise velocity dispersion as a function of galaxy luminosity in the context of a halo model. We derive the distribution of velocities of pairs at a given separation taking into account both one-halo and two-halo contributions. We show that the pairwise velocity distribution in real space is a complicated mixture of host-satellite, satellite-satellite and two-halo pairs. The peak value is reached at around 1 Mpc/h and does not reflect the velocity dispersion of a typical halo hosting these galaxies, but is instead dominated by the satellite-satellite pairs in high mass clusters. This is true even for cross-correlations between bins separated in luminosity. As a consequence the velocity dispersion at a given separation can decrease with luminosity, even if the underlying typical halo host mass is increasing, in agreement with some recent observations. We compare our findings to numerical simulations and find a good agreement. Numerical simulations also suggest a luminosity de...
Extension of Pairwise Broadcast Clock Synchronization for Multicluster Sensor Networks
Bruce W. Suter
2008-01-01
Time synchronization is crucial for wireless sensor networks (WSNs) in performing a number of fundamental operations such as data coordination, power management, security, and localization. The Pairwise Broadcast Synchronization (PBS) protocol was recently proposed to minimize the number of timing messages required for global network synchronization, which enables the design of highly energy-efficient WSNs. However, PBS requires all nodes in the network to lie within the communication ranges of two leader nodes, a condition that might not hold in some applications. This paper proposes an extension of PBS to a more general class of sensor networks. Based on the hierarchical structure of the network, an energy-efficient pair selection algorithm is proposed to select the best pairwise synchronization sequence and reduce the overall energy consumption. It is shown that in a multicluster networking environment, PBS requires far fewer timing messages than other well-known synchronization protocols and incurs no loss in synchronization accuracy. Moreover, the proposed scheme presents significant energy savings for densely deployed WSNs.
A Monte Carlo Evaluation of Maximum Likelihood Multidimensional Scaling Methods
Bijmolt, T.H.A.; Wedel, M.
1996-01-01
We compare three alternative Maximum Likelihood Multidimensional Scaling methods for pairwise dissimilarity ratings, namely MULTISCALE, MAXSCAL, and PROSCAL, in a Monte Carlo study. The three MLMDS methods recover the true configurations very well. The recovery of the true dimensionality depends on the
Benchmarking the performance of pairwise homogenization of surface temperatures in the United States
Menne, M. J.; Williams, C. N.; Thorne, P. W.
2013-09-01
Changes in the circumstances behind in situ temperature measurements often lead to shifts in individual station records that can produce over- or underestimates of local and regional temperature trends. Since these shifts are comparable in magnitude to climate change signals, homogeneity "corrections" are necessary to make the records suitable for climate analysis. To quantify the effectiveness of surface temperature homogenization in the United States, a randomized perturbed ensemble of the pairwise homogenization algorithm was run against a suite of benchmark analogs to real monthly temperature data from the United States Cooperative Observer Program, which includes the subset of stations known as the United States Historical Climatology Network (USHCN). Results indicate that all randomized versions of the algorithm consistently produce homogenized data closer to the true climate signal in the presence of widespread systematic shifts in the data. When applied to the real-world observations, the randomized ensemble reinforces previous understanding that the two dominant sources of shifts in the U.S. temperature records are caused by changes to time of observation (spurious cooling in minimum and maximum) and conversion to electronic resistance thermometers (spurious cooling in maximum and warming in minimum). Trend bounds defined by the ensemble output indicate that maximum temperature trends are positive for the past 30, 50 and 100 years, and that these maximums contain pervasive negative shifts that cause the unhomogenized (raw) trends to fall below the lowest of the ensemble of homogenized trends. Moreover, because the residual impact of undetected/uncorrected shifts in the homogenized analogs is one-tailed when the imposed shifts have a positive or negative sign preference, it is likely that maximum temperature trends have been underestimated in the real-world homogenized temperature data from the USHCN. Trends for minimum temperature are also positive
Beyond pairwise strategy updating in the prisoner's dilemma game
Wang, Xiaofeng; Liu, Yongkui; Chen, Xiaojie; Wang, Long (doi: 10.1038/srep00740)
2012-01-01
In spatial games players typically alter their strategy by imitating the most successful or one randomly selected neighbor. Since a single neighbor is taken as reference, the information stemming from other neighbors is neglected, which begets the consideration of alternative, possibly more realistic approaches. Here we show that strategy changes inspired not only by the performance of individual neighbors but rather by entire neighborhoods introduce a qualitatively different evolutionary dynamics that is able to support the stable existence of very small cooperative clusters. This leads to phase diagrams that differ significantly from those obtained by means of pairwise strategy updating. In particular, the survivability of cooperators is possible even at high temptations to defect and over a much wider uncertainty range. We support the simulation results by means of pair approximations and analysis of spatial patterns, which jointly highlight the importance of local information for the resolution of social ...
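Pairwise imitation of the kind generalized above is commonly implemented with a Fermi update rule; the sketch below contrasts it with a neighborhood-level variant in the spirit of the paper (function names and the noise parameter K are illustrative, not the authors' code):

```python
import math

def fermi_adopt_prob(payoff_focal, payoff_model, K=0.1):
    """Pairwise Fermi rule: probability that the focal player adopts
    the model player's strategy; K is the noise parameter."""
    return 1.0 / (1.0 + math.exp((payoff_focal - payoff_model) / K))

def neighborhood_adopt_prob(payoff_focal, neighbor_payoffs, K=0.1):
    """Neighborhood-inspired variant: compare the focal payoff against
    the mean payoff of the whole neighborhood instead of one neighbor."""
    mean_neigh = sum(neighbor_payoffs) / len(neighbor_payoffs)
    return fermi_adopt_prob(payoff_focal, mean_neigh, K)
```

With equal payoffs the adoption probability is 1/2, and it approaches 1 as the reference payoff dominates the focal one.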
Efficient preference learning with pairwise continuous observations and Gaussian Processes
Jensen, Bjørn Sand; Nielsen, Jens Brehm; Larsen, Jan
2011-01-01
Human preferences can effectively be elicited using pairwise comparisons and in this paper current state-of-the-art based on binary decisions is extended by a new paradigm which allows subjects to convey their degree of preference as a continuous but bounded response. For this purpose, a novel...... Beta-type likelihood is proposed and applied in a Bayesian regression framework using Gaussian Process priors. Posterior estimation and inference are performed using a Laplace approximation. The potential of the paradigm is demonstrated and discussed in terms of learning rates and robustness by evaluating...... the predictive performance under various noise conditions on a synthetic dataset. It is demonstrated that the learning rate of the novel paradigm is not only faster under ideal conditions, where continuous responses are naturally more informative than binary decisions, but also under adverse conditions where...
Entanglement properties in a system of a pairwise entangled state
Liu Tang-Kun; Cheng Wei-Wen; Shan Chuan-Jia; Gao Yun-Feng; Wang Ji-Suo
2007-01-01
Based on quantum information theory, this paper investigates the entanglement properties of a system composed of two entangled two-level atoms interacting with two-mode entangled coherent fields. The influences of the strength of the light field and the two entanglement parameters of the two-mode fields on the field entropy and on the negative eigenvalues of the partial transposition of the density matrix are discussed using numerical calculations. The results show that the entanglement properties of a system of pairwise entangled states can be controlled by appropriately choosing the two entanglement parameters of the two-mode entangled coherent fields and the strength of the two light fields.
Calibration of Smartphone-Based Weather Measurements Using Pairwise Gossip
Yamaguchi, Suguru
2015-01-01
Accurate and reliable daily global weather reports are necessary for weather forecasting and climate analysis. However, the availability of these reports continues to decline due to the lack of economic support and policies for maintaining the ground weather measurement systems from which these reports are obtained. To mitigate this data scarcity, weather information from existing sensors and built-in smartphone sensors must be utilized. However, as smartphone usage often varies with human activity, it is difficult to obtain accurate measurement data. In this paper, we present a heuristic-based pairwise gossip algorithm that calibrates smartphone-based pressure sensors with respect to fixed weather stations as our referential ground truth. Based on actual measurements, we have verified that smartphone-based readings are unstable when observed during movement. Using our calibration algorithm on actual smartphone-based pressure readings, the updated values were significantly closer to the ground truth values.
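Pairwise gossip itself reduces to repeated pairwise averaging; a minimal sketch with fixed weather stations acting as anchors (the `anchors` mechanism is a simplification of the paper's heuristic, and all names are illustrative):

```python
import random

def pairwise_gossip(readings, anchors, rounds=2000, seed=0):
    """Generic pairwise gossip averaging: each round, two randomly
    chosen nodes move to the mean of their values. Anchor nodes
    (indices in `anchors`) keep their reference values fixed, pulling
    the network toward the ground truth."""
    rng = random.Random(seed)
    x = list(readings)
    ref = {i: readings[i] for i in anchors}
    for _ in range(rounds):
        i, j = rng.sample(range(len(x)), 2)
        m = (x[i] + x[j]) / 2.0
        x[i] = ref.get(i, m)   # anchors never drift
        x[j] = ref.get(j, m)
    return x
```

With one anchor (e.g. a weather station reporting 1013 hPa) and several drifting smartphone readings, all values converge to the anchor's reference.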
Predictive Modeling of Expressed Emotions in Music Using Pairwise Comparisons
Madsen, Jens; Jensen, Bjørn Sand; Larsen, Jan
2013-01-01
We introduce a two-alternative forced-choice (2AFC) experimental paradigm to quantify expressed emotions in music using the arousal and valence (AV) dimensions. A wide range of well-known audio features are investigated for predicting the expressed emotions in music using learning curves...... and essential baselines. We furthermore investigate the scalability issues of using 2AFC in quantifying emotions expressed in music on large-scale music databases. The possibility of dividing the annotation task between multiple individuals, while pooling individuals’ comparisons is investigated by looking...... at the subjective differences of ranking emotion in the AV space. We find this to be problematic due to the large variation in subjects’ rankings of excerpts. Finally, solving scalability issues by reducing the number of pairwise comparisons is analyzed. We compare two active learning schemes to selecting...
A human platelet calcium calculator trained by pairwise agonist scanning.
Mei Yan Lee
2015-02-01
Since platelet intracellular calcium mobilization [Ca(t)]i controls granule release, cyclooxygenase-1 and integrin activation, and phosphatidylserine exposure, blood clotting simulations require prediction of platelet [Ca(t)]i in response to combinatorial agonists. Pairwise Agonist Scanning (PAS) deployed all single and pairwise combinations of six agonists (ADP, convulxin, thrombin, U46619, iloprost, and GSNO) used at 0.1, 1, and 10×EC50 (154 conditions, including a null condition) to stimulate the platelet P2Y1/P2Y12, GPVI, PAR1/PAR4, TP, and IP receptors and guanylate cyclase, respectively, in Factor Xa-inhibited (250 nM apixaban), diluted platelet-rich plasma that had been loaded with the calcium dye Fluo-4 NW. PAS of 10 healthy donors provided [Ca(t)]i data for training 10 neural networks (NN, 2-layer/12-node), one per donor. Trinary stimulations were then conducted at all 0.1× and 1×EC50 doses (160 conditions), as was a sampling of 45 higher-ordered combinations (four to six agonists). The NN-ensemble average was a calcium calculator that accurately predicted [Ca(t)]i beyond the single and binary training set for trinary stimulations (R = 0.924). The 160 trinary synergy scores, a normalized metric of signaling crosstalk, were also well predicted (R = 0.850), as were the calcium dynamics (R = 0.871) and high-dimensional synergy scores (R = 0.695) for the 45 higher-ordered conditions. The calculator even predicted sequential addition experiments (n = 54 conditions, R = 0.921). The NN ensemble is a fast calcium calculator, ideal for multiscale clotting simulations that include spatiotemporal concentrations of ADP, collagen, thrombin, thromboxane, prostacyclin, and nitric oxide.
Jakub Prokop
2011-09-01
Three new palaeopteran insects are described from the Middle Permian (Guadalupian) Salagou Formation in the Lodève Basin (South of France), viz. the diaphanopterodean Alexrasnitsyniidae fam. n., based on Alexrasnitsynia permiana gen. et sp. n., the Parelmoidae Permelmoa magnifica gen. et sp. n., and Lodevohymen lapeyriei gen. et sp. n. (in Megasecoptera or Diaphanopterodea, family undetermined). In addition, the first record of mayflies attributed to the family Syntonopteridae (Ephemeroptera) is reported. These new fossils clearly demonstrate that present knowledge of Permian insects remains very incomplete. They also confirm that the Lodève entomofauna was highly diverse, providing links to other Permian localities, and also rather unique, with several families still not recorded in other contemporaneous outcrops.
Impurity in Pairwise Entanglement of Heisenberg XX Open Chain
[Anonymous]
2007-01-01
We calculate the concurrence of all pairwise entanglement of the Heisenberg XX open chain with a single-system impurity in the three-qubit and four-qubit cases, and find that the impurity parameter Ji has a great effect on pairwise entanglement. By choosing the proper parameter Ji, we can obtain the maximal pairwise entanglement of the nearest qubits and make the non-nearest qubits entangled.
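The concurrence used as the entanglement measure above can be computed for any two-qubit density matrix via Wootters' formula; a minimal sketch, independent of the Heisenberg-chain setting:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix:
    C = max(0, l1 - l2 - l3 - l4), where the l_i are the square roots
    of the eigenvalues of rho @ rho_tilde in decreasing order and
    rho_tilde = (sy x sy) conj(rho) (sy x sy)."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    ev = np.linalg.eigvals(rho @ rho_tilde)
    l = np.sort(np.sqrt(np.abs(ev.real)))[::-1]
    return max(0.0, l[0] - l[1] - l[2] - l[3])
```

A Bell state gives concurrence 1 (maximal entanglement); a product state gives 0.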
Xi Xiao-Qiang; Liu Wu-Ming
2007-01-01
Based on the calculation of all pairwise entanglements in the n (n≤6)-qubit Heisenberg XX open chain with system impurity, we find an important result: pairwise entanglement can only be transferred by an entangled pair. Non-nearest pairwise entanglement can exist as long as there is an even number of qubits between the pair. This indicates that we can obtain longer-distance entanglement in a solid system.
A discriminative learning framework with pairwise constraints for video object classification.
Yan, Rong; Zhang, Jian; Yang, Jie; Hauptmann, Alexander G
2006-04-01
To deal with the problem of insufficient labeled data in video object classification, one solution is to utilize additional pairwise constraints that indicate the relationship between two examples, i.e., whether these examples belong to the same class or not. In this paper, we propose a discriminative learning approach which can incorporate pairwise constraints into a conventional margin-based learning framework. Different from previous work that usually attempts to learn better distance metrics or estimate the underlying data distribution, the proposed approach can directly model the decision boundary and, thus, require fewer model assumptions. Moreover, the proposed approach can handle both labeled data and pairwise constraints in a unified framework. In this work, we investigate two families of pairwise loss functions, namely, convex and nonconvex pairwise loss functions, and then derive three pairwise learning algorithms by plugging in the hinge loss and the logistic loss functions. The proposed learning algorithms were evaluated using a people identification task on two surveillance video data sets. The experiments demonstrated that the proposed pairwise learning algorithms considerably outperform the baseline classifiers using only labeled data and two other pairwise learning algorithms with the same amount of pairwise constraints.
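As a rough illustration of how pairwise constraints can enter a margin-based objective, the sketch below penalizes a hinge on the product of decision values for constrained pairs (one member of the nonconvex family mentioned above; the paper's exact losses and solvers differ, and all names are hypothetical):

```python
import numpy as np

def pairwise_hinge_loss(w, X, pairs, b=0.0):
    """Average pairwise loss over constraints (i, j, s), where s=+1
    marks a must-link pair and s=-1 a cannot-link pair. With the
    linear decision function f(x) = w.x + b, a must-link pair is
    penalized when f(xi) and f(xj) disagree in sign (and vice versa
    for cannot-link), via a hinge on s * f(xi) * f(xj)."""
    f = X @ w + b
    loss = 0.0
    for i, j, s in pairs:
        loss += max(0.0, 1.0 - s * f[i] * f[j])
    return loss / len(pairs)
```

Satisfied constraints with a margin contribute zero loss; violated ones contribute linearly, so the term can be dropped into a standard regularized objective.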
Combined effects of multiple linked loci on pairwise sibling tests.
Tamura, Tomonori; Osawa, Motoki; Kakimoto, Yu; Ochiai, Eriko; Suzuki, Takanori; Nakamura, Takashi
2017-01-01
The advanced multiplex STR system, PowerPlex Fusion, includes four linked locus pairs. The conventional Identifiler system has one pair of linked loci. Therefore, sibling tests conducted using the advanced system might be more affected by linkage than those conducted using the conventional system. This study simulated single and combined effects of the four linked locus pairs on pairwise sibling tests. Simulated genotypes of 100,000 pairs of full siblings and nonrelatives were constructed according to allele frequencies of the Japanese population. The single linkage effect was evaluated for simulated genotype data by calculating both the likelihood ratio accounting for the linkage between two loci and the likelihood ratio ignoring the linkage. The combined effect was obtained by multiplication of the respective single effects. Furthermore, we investigated the possibility that ignoring the linkage affects subject classification by introducing a scale of the likelihood ratio into sibling tests. When linkage was ignored, the single effect in the Identifiler analysis ranged from 0.645 to 1.746 times the linkage-aware value. Overestimations and underestimations were predictable from the identical-by-state status at two linked loci. The combined effect in the PowerPlex Fusion analysis ranged from 0.217 to 7.390 times. Ignoring the linkage rarely caused a false conclusive or inconclusive result, even in PowerPlex Fusion analysis. Application of the advanced system improved sibling tests considerably. The additional examined loci were more beneficial than the adverse effect of the linkage derived from the four linked locus pairs.
The Dependence of the Pairwise Velocity Dispersion on Galaxy Properties
Li, C; Kauffmann, G; Börner, G; White, S D M; Cheng, F Z; Li, Cheng; Kauffmann, Guinevere; Boerner, Gerhard; White, Simon D.M.
2006-01-01
(abridged) We present measurements of the pairwise velocity dispersion (PVD) for different classes of galaxies in the Sloan Digital Sky Survey. For a sample of about 200,000 galaxies, we study the dependence of the PVD on galaxy properties such as luminosity, stellar mass (M*), colour (g-r), 4000A break strength (D4000), concentration index (C), and stellar surface mass density (μ*). The luminosity dependence of the PVD is in good agreement with the results of Jing & Börner (2004) for the 2dFGRS catalog. The value of σ12 measured at k=1 h/Mpc decreases as a function of increasing galaxy luminosity for galaxies fainter than L*, before increasing again for the most luminous galaxies in our sample. This behaviour is not reproduced using standard halo occupation distribution (HOD) models. Each of the galaxy subsamples selected according to luminosity or stellar mass is divided into two further subsamples according to colour, D4000, C and μ*. We find that galaxies with redder colours and highe...
Multiple-instance learning with pairwise instance similarity
Yuan Liming
2014-09-01
Multiple-Instance Learning (MIL) has attracted much attention in the machine learning community in recent years, and many real-world applications have been successfully formulated as MIL problems. Over the past few years, several Instance Selection-based MIL (ISMIL) algorithms have been presented using the concept of the embedding space. Although they have delivered very promising performance, they often require long computation times for instance selection, leading to a low efficiency of the whole learning process. In this paper, we propose a simple and efficient ISMIL algorithm based on the similarity of pairwise instances within a bag. The basic idea is to select from every training bag a pair of the most similar instances as instance prototypes and then map training bags into the embedding space constructed from all the instance prototypes. Thus, the MIL problem can be solved with standard supervised learning techniques, such as support vector machines. Experiments show that the proposed algorithm is more efficient than its competitors and highly comparable with them in terms of classification accuracy. Moreover, a test of noise sensitivity demonstrates that our MIL algorithm is very robust to labeling noise.
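The instance-selection idea can be sketched as picking the most similar pair in each bag and embedding bags by their distances to the selected prototypes (a simplification under stated assumptions; names are illustrative, not the paper's algorithm):

```python
import numpy as np
from itertools import combinations

def select_prototype(bag):
    """Return the pair of instances in `bag` (rows of a 2-D array)
    with the smallest Euclidean distance between them."""
    i, j = min(combinations(range(len(bag)), 2),
               key=lambda ij: np.linalg.norm(bag[ij[0]] - bag[ij[1]]))
    return bag[i], bag[j]

def embed(bag, prototypes):
    """Embed a bag as the vector of its minimal distances to each
    prototype; the result feeds a standard classifier such as an SVM."""
    return np.array([min(np.linalg.norm(inst - p) for inst in bag)
                     for p in prototypes])
```

Because only one pair per bag is examined for selection, the instance-selection step stays cheap relative to exhaustive prototype search.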
Exploiting pair-wise constraints between parts for human tracking
Zhang, Jin; Shen, Xiaohui; Zhou, Jie; Rong, Gang
2007-11-01
Human tracking has attracted much attention from researchers in the fields of computer vision and pattern recognition. The problem is generally extremely challenging, partly because human bodies are articulated and versatile and partly because of background clutter, both of which demand a strong human model. However, there is usually a trade-off between the discriminative power and the complexity of a given model. This paper presents a simple yet distinctive appearance model for real-time human tracking that exploits the pairwise constraints between parts. The parts in our model are generated online by sampling the foreground of the scene into overlapping blocks and grouping them into appearance-coherent parts with the mean shift algorithm. Constraints between the resulting parts are defined and used to encode the structure of the human body. To tolerate possible human deformations and occlusions, the model is layered. With this model, we design an algorithm for human tracking and test its performance on real-world image sequences. Experimental results show that the proposed appearance model, although simple, has enough discriminative power to classify multiple humans even in the presence of occlusions, and the associated tracking method can run in real time.
Equating a Large-Scale Writing Assessment Using Pairwise Comparisons of Performances
Humphry, Stephen M.; McGrane, Joshua A.
2015-01-01
This paper presents a method for equating writing assessments using pairwise comparisons which does not depend upon conventional common-person or common-item equating designs. Pairwise comparisons have been successfully applied in the assessment of open-ended tasks in English and other areas such as visual art and philosophy. In this paper,…
GraphAlignment: Bayesian pairwise alignment of biological networks
Kolář Michal
2012-11-01
Background: With increased experimental availability and accuracy of bio-molecular networks, tools for their comparative and evolutionary analysis are needed. A key component of such studies is the alignment of networks. Results: We introduce the Bioconductor package GraphAlignment for pairwise alignment of bio-molecular networks. The alignment incorporates information from both network vertices and network edges and is based on an explicit evolutionary model, allowing inference of all scoring parameters directly from empirical data. We compare the performance of our algorithm to an alternative algorithm, Græmlin 2.0. On simulated data, GraphAlignment outperforms Græmlin 2.0 in several benchmarks except for computational complexity. When there is little or no noise in the data, GraphAlignment is slower than Græmlin 2.0. It is faster than Græmlin 2.0 when processing noisy data containing spurious vertex associations. Its typical-case complexity grows approximately as O(N^2.6). On empirical bacterial protein-protein interaction networks (PINs) and gene co-expression networks, GraphAlignment outperforms Græmlin 2.0 with respect to coverage and specificity, albeit by a small margin. On large eukaryotic PINs, Græmlin 2.0 outperforms GraphAlignment. Conclusions: The GraphAlignment algorithm is robust to spurious vertex associations, correctly resolves paralogs, and shows very good performance in identification of homologous vertices defined by high vertex and/or interaction similarity. The simplicity and generality of GraphAlignment edge scoring makes the algorithm an appropriate choice for global alignment of networks.
BETASCAN: probable beta-amyloids identified by pairwise probabilistic analysis.
Allen W Bryan
2009-03-01
Amyloids and prion proteins are clinically and biologically important beta-structures, whose supersecondary structures are difficult to determine by standard experimental or computational means. In addition, significant conformational heterogeneity is known or suspected to exist in many amyloid fibrils. Recent work has indicated the utility of pairwise probabilistic statistics in beta-structure prediction. We develop here a new strategy for beta-structure prediction, emphasizing the determination of beta-strands and pairs of beta-strands as fundamental units of beta-structure. Our program, BETASCAN, calculates likelihood scores for potential beta-strands and strand-pairs based on correlations observed in parallel beta-sheets. The program then determines the strands and pairs with the greatest local likelihood for all of the sequence's potential beta-structures. BETASCAN suggests multiple alternate folding patterns and assigns relative a priori probabilities based solely on amino acid sequence, probability tables, and pre-chosen parameters. The algorithm compares favorably with the results of previous algorithms (BETAPRO, PASTA, SALSA, TANGO, and Zyggregator) in beta-structure prediction and amyloid propensity prediction. Accurate prediction is demonstrated for experimentally determined amyloid beta-structures, for a set of known beta-aggregates, and for the parallel beta-strands of beta-helices, amyloid-like globular proteins. BETASCAN is able both to detect beta-strands with higher sensitivity and to detect the edges of beta-strands in a richly beta-like sequence. For two proteins (Abeta and Het-s), there exist multiple sets of experimental data implying contradictory structures; BETASCAN is able to detect each competing structure as a potential structure variant. The ability to correlate multiple alternate beta-structures to experiment opens the possibility of computational investigation of prion strains and structural heterogeneity of amyloid
LR characterization of chirotopes of finite planar families of pairwise disjoint convex bodies
Habert, Luc; Pocchiola, Michel
2011-01-01
We extend the classical LR characterization of chirotopes of finite planar families of points to chirotopes of finite planar families of pairwise disjoint convex bodies: a map χ on the set of 3-subsets of a finite set I is a chirotope of finite planar families of pairwise disjoint convex bodies if and only if for every 3-, 4-, and 5-subset J of I the restriction of χ to the set of 3-subsets of J is a chirotope of finite planar families of pairwise disjoint convex bodies. Our main to...
Sets of unit vectors with small pairwise sums
Swanepoel, Konrad J
2010-01-01
We study the sizes of delta-additive sets of unit vectors in a d-dimensional normed space: the sum of any two vectors has norm at most delta. One-additive sets originate in finding upper bounds of vertex degrees of Steiner Minimum Trees in finite dimensional smooth normed spaces (Z. Füredi, J. C. Lagarias, F. Morgan, 1991). We show that the maximum size of a delta-additive set over all normed spaces of dimension d grows exponentially in d for fixed delta>2/3, stays bounded for delta<2/3, and grows linearly at the threshold delta=2/3. Furthermore, the maximum size of a 2/3-additive set in d-dimensional normed space has the sharp upper bound of d, with the single exception of spaces isometric to three-dimensional l^1 space, where there exists a 2/3-additive set of four unit vectors.
Roelens, Baptiste; Schvarzstein, Mara; Villeneuve, Anne M
2015-12-01
Meiotic chromosome segregation requires pairwise association between homologs, stabilized by the synaptonemal complex (SC). Here, we investigate factors contributing to pairwise synapsis by investigating meiosis in polyploid worms. We devised a strategy, based on transient inhibition of cohesin function, to generate polyploid derivatives of virtually any Caenorhabditis elegans strain. We exploited this strategy to investigate the contribution of recombination to pairwise synapsis in tetraploid and triploid worms. In otherwise wild-type polyploids, chromosomes first sort into homolog groups, then multipartner interactions mature into exclusive pairwise associations. Pairwise synapsis associations still form in recombination-deficient tetraploids, confirming a propensity for synapsis to occur in a strictly pairwise manner. However, the transition from multipartner to pairwise association was perturbed in recombination-deficient triploids, implying a role for recombination in promoting this transition when three partners compete for synapsis. To evaluate the basis of synapsis partner preference, we generated polyploid worms heterozygous for normal sequence and rearranged chromosomes sharing the same pairing center (PC). Tetraploid worms had no detectable preference for identical partners, indicating that PC-adjacent homology drives partner choice in this context. In contrast, triploid worms exhibited a clear preference for identical partners, indicating that homology outside the PC region can influence partner choice. Together, our findings suggest a two-phase model for C. elegans synapsis: an early phase, in which initial synapsis interactions are driven primarily by recombination-independent assessment of homology near PCs and by a propensity for pairwise SC assembly, and a later phase in which mature synaptic interactions are promoted by recombination.
Combination of Length of Day (LOD) Values by Frequency Windows
Fernández, L. I.; Arias, E. F.; Gambis, D.
The concept of a combined solution rests on the fact that the different time series derived from the various space geodesy techniques are quite dissimilar. The main, easily detectable differences between the series are: different sampling intervals, temporal extent, and quality. The data cover a recent period of 27 months (July 1996 - October 1998). We used estimates of the length of day (LOD) produced by 10 operational centres of the IERS (International Earth Rotation Service) from the GPS (Global Positioning System) and SLR (Satellite Laser Ranging) techniques. The combined time series thus obtained was compared with the multi-technique combined EOP (Earth Orientation Parameters) solution derived by the IERS (C04). The noise behaviour in LOD for all techniques proved to be frequency dependent (Vondrak, 1998). For this reason, the data series were divided into frequency windows after biases and trends had been removed. Different weighting factors were then assigned to each window, discriminating by technique. Finally, these partially combined solutions were merged to obtain the final combined solution. We know that the best combined solution will have a precision lower than that of the data time series from which it originates. Even so, the importance of a reliable combined EOP series, i.e., one of acceptable precision and free of evident systematic errors, lies in the need for a reference EOP database for the study of the geophysical phenomena that drive variations in Earth rotation.
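The per-window combination step can be sketched as bias removal followed by a weighted mean across techniques (schematic only; the actual IERS processing, window splitting, and weighting are more involved, and all names are illustrative):

```python
import numpy as np

def combine_window(series, weights):
    """Combine several LOD series within one frequency window:
    remove each series' bias (its mean), then form the weighted mean
    across techniques, sample by sample."""
    s = np.asarray(series, dtype=float)
    w = np.asarray(weights, dtype=float)
    s = s - s.mean(axis=1, keepdims=True)   # per-series bias removal
    return (w[:, None] * s).sum(axis=0) / w.sum()
```

Running this once per frequency window and summing the window outputs would yield the final combined series.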
Using Parameters of Dynamic Pulse Function for 3d Modeling in LOD3 Based on Random Textures
Alizadehashrafi, B.
2015-12-01
The pulse function (PF) is a procedural preprocessing technique for generating a computerized virtual photo of a façade within a fixed-size square (Alizadehashrafi et al., 2009; Musliman et al., 2010). The dynamic pulse function (DPF) is an enhanced version of PF that creates the final photo proportional to the real geometry, which avoids distortion when the computerized photo is projected onto the generated 3D model (Alizadehashrafi and Rahman, 2013). Producing a 3D model in LoD3 rather than LoD2 is the challenging issue addressed, and achieved, in this paper. In the DPF-based technique, the geometries of the windows and doors are saved in an XML file schema that has no connection with the 3D model in LoD2 and CityGML format. In this research, the parameters of dynamic pulse functions are used via the Ruby programming language in Trimble SketchUp to generate the windows and doors (with exact position and depth) automatically in LoD3, based on the same concept as DPF. The advantage of this technique is the automatic generation of a large number of similar geometries, e.g. windows, by utilizing the parameters of DPF along with defined entities and window layers. When the SKP file is converted to CityGML via FME software or CityGML plugins, the 3D model contains the semantic database about the entities and window layers, which can connect the CityGML to MySQL (Alizadehashrafi and Baig, 2014). The concept behind DPF is to use logical operations to project the texture onto the background image, dynamically proportional to the real geometry. The projection is based on two dynamic pulses, vertical and horizontal, starting from the upper-left corner of the background wall and running in the down and right directions, respectively, in the image coordinate system. A logical one/zero at the intersections of the two pulses projects/does not project the texture onto the background image. It is possible to define
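The pulse-projection logic described above can be sketched as binary pulse trains whose logical AND marks the pixels where a texture is projected (a minimal illustration; the function and parameter names are invented):

```python
import numpy as np

def pulse_mask(length, starts, widths):
    """Binary pulse train along one image axis: True marks pixels where
    a window/door texture is projected onto the background wall."""
    mask = np.zeros(length, dtype=bool)
    for s, w in zip(starts, widths):
        mask[s:s + w] = True
    return mask

def projection_mask(h_pulse, v_pulse):
    """2D projection mask: the logical AND of a vertical and a horizontal
    pulse train, anchored at the upper-left corner of the wall image."""
    return np.logical_and.outer(v_pulse, h_pulse)
```

Pixels where the 2D mask is True would receive the window texture; all other pixels keep the background wall texture.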
Yi Long
2015-06-01
This paper describes a novel strategy for the visualization of hyperspectral imagery based on the analysis of image pixel pairwise distances. The goal of this approach is to generate a final color image with excellent interpretability and high contrast at the cost of distorting a few pairwise distances. Specifically, the principle of equal variance is introduced to divide all hyperspectral bands into three subgroups and to ensure the energy is distributed uniformly between them, as in natural color images. Then, after detecting both normal and outlier pixels, these three subgroups are mapped into the three color components of the output visualization using two different mapping (i.e., dimensionality reduction) schemes for the two types of pixels. The widely used multidimensional scaling (MDS) is used for normal pixels, and a new objective function, taking into account the weighting of pairwise distances, is presented for the outlier pixels. The pairwise distance weighting is designed such that small pairwise distances between the outliers and their respective neighbors are emphasized and large deviations are suppressed. This produces an image with high contrast and good interpretability while retaining the detailed information content. The proposed algorithm is compared with several state-of-the-art visualization techniques and evaluated on the well-known AVIRIS hyperspectral images. The effectiveness of the proposed strategy is substantiated both visually and quantitatively.
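The "principle of equal variance" band grouping can be sketched as follows, assuming contiguous band groups (the paper's exact grouping rule may differ):

```python
import numpy as np

def split_bands_equal_variance(cube):
    """Split hyperspectral bands into three contiguous groups carrying
    roughly equal total variance. cube: H x W x bands array. The
    contiguity of the groups is an assumption of this sketch."""
    variances = cube.reshape(-1, cube.shape[-1]).var(axis=0)
    cum = np.cumsum(variances)
    total = cum[-1]
    # cut where the cumulative variance passes 1/3 and 2/3 of the total
    i = int(np.searchsorted(cum, total / 3)) + 1
    j = int(np.searchsorted(cum, 2 * total / 3)) + 1
    return np.split(np.arange(cube.shape[-1]), [i, j])
```

The three index groups would then be reduced (e.g., by MDS) to the three color components of the output image.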
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis, contrary to ordinary non-spatial factor analysis, gives an objective discrimina...
Tarpine, Ryan; Lam, Fumei; Istrail, Sorin
We present results on two classes of problems. The first result addresses the long-standing open problem of finding unifying principles for linkage disequilibrium (LD) measures in population genetics (Lewontin 1964 [10], Hedrick 1987 [8], Devlin and Risch 1995 [5]). Two desirable properties have been proposed in the extensive literature on this topic, and the mutual consistency between these properties has remained at the heart of statistical and algorithmic difficulties with haplotype and genome-wide association study analysis. The first axiom is (1) the ability to extend LD measures to multiple loci as a conservative extension of pairwise LD. All widely used LD measures are pairwise measures. Despite significant attempts, it is not clear how to naturally extend these measures to multiple loci, leading to a "curse of the pairwise". The second axiom is (2) the interpretability of intermediate values. In this paper, we resolve this mutual consistency problem by introducing a new LD measure, directed informativeness $\overrightarrow{I}$ (the directed graph-theoretic counterpart of the informativeness measure introduced by Halldorsson et al. [6]), and show that it satisfies both of the above axioms. We also show that the maximum informative subset of tagging SNPs based on $\overrightarrow{I}$ can be computed exactly in polynomial time for realistic genome-wide data. Furthermore, we present polynomial-time algorithms for optimal genome-wide tagging SNP selection for a number of commonly used LD measures, under the bounded-neighborhood assumption for linked pairs of SNPs. One problem in the area is the search for a quality measure for tagging SNP selection that unifies the LD-based methods such as LD-select (implemented in Tagger, de Bakker et al. 2005 [4], Carlson et al. 2004 [3]) and the information-theoretic ones such as informativeness. We show that the objective function of the LD-select algorithm is the Minimal Dominating Set (MDS) on $r^2$-SNP graphs and show that we can
Tartakovsky, Alexandre M.; Panchenko, Alexander
2016-01-01
We present a novel formulation of the Pairwise Force Smoothed Particle Hydrodynamics Model (PF-SPH) and use it to simulate two- and three-phase flows in bounded domains. In the PF-SPH model, the Navier-Stokes equations are discretized with the Smoothed Particle Hydrodynamics (SPH) method and the Young-Laplace boundary condition at the fluid-fluid interface and the Young boundary condition at the fluid-fluid-solid interface are replaced with pairwise forces added into the Navier-Stokes equations. We derive a relationship between the parameters in the pairwise forces and the surface tension and static contact angle. Next, we demonstrate the accuracy of the model under static and dynamic conditions. Finally, to demonstrate the capabilities and robustness of the model we use it to simulate flow of three fluids in a porous material.
Pairwise Comparison and Distance Measure of Hesitant Fuzzy Linguistic Term Sets
Han-Chen Huang
2014-01-01
A hesitant fuzzy linguistic term set (HFLTS), which allows experts to use several possible linguistic terms to assess a qualitative linguistic variable, is very useful for expressing people's hesitancy in practical decision-making problems. Up to now, little research has been done on the comparison and distance measure of HFLTSs. In this paper, we present a comparison method for HFLTSs based on pairwise comparisons of each linguistic term in the two HFLTSs. Then, a distance measure method based on the pairwise comparison matrix of HFLTSs is proposed, and we prove that this distance is equal to the distance between the average values of the HFLTSs, which makes the distance measure much simpler. Finally, the pairwise comparison and distance measure methods are used to develop two multicriteria decision-making approaches for hesitant fuzzy linguistic environments. The results analysis shows that the methods proposed in this paper are more reasonable.
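The reduction of the HFLTS distance to a distance between average term values, as summarized in the abstract, suggests a very small sketch (representing an HFLTS as a list of linguistic-term indices is an assumption here):

```python
def hflts_distance(h1, h2):
    """Distance between two HFLTSs given as lists of linguistic-term
    indices. Per the result summarized in the abstract, the
    pairwise-comparison distance reduces to the distance between the
    average term indices; this sketch implements that reduced form."""
    avg1 = sum(h1) / len(h1)
    avg2 = sum(h2) / len(h2)
    return abs(avg1 - avg2)
```

For example, the HFLTSs {s2, s3} and {s3, s4} have average indices 2.5 and 3.5 and hence distance 1.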
On the gradual deployment of random pairwise key distribution schemes (Extended Version)
Yagan, Osman
2011-01-01
In the context of wireless sensor networks, the pairwise key distribution scheme of Chan et al. has several advantages over other key distribution schemes, including the original scheme of Eschenauer and Gligor. However, this offline pairwise key distribution mechanism requires that the network size be set in advance, and involves all sensor nodes simultaneously. Here, we address this issue by describing an implementation of the pairwise scheme that supports the gradual deployment of sensor nodes in several consecutive phases. We discuss the key ring size needed to maintain secure connectivity throughout all the deployment phases. In particular, we show that the number of keys at each sensor node can be taken to be $O(\log n)$ in order to achieve secure connectivity (with high probability).
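A rough sketch of the random pairwise predistribution idea with $O(\log n)$ key rings follows (the constant, the pairing rule, and all names are illustrative, not the scheme's exact parameters):

```python
import math
import random

def predistribute_pairwise_keys(n, c=1.5, seed=0):
    """Sketch of random pairwise key predistribution: each node is
    matched with about c*log(n) random partners and each matched pair
    shares a unique key, so key-ring sizes stay O(log n)."""
    rng = random.Random(seed)
    k = max(1, math.ceil(c * math.log(n)))
    rings = {i: set() for i in range(n)}
    key_id = 0
    for i in range(n):
        for j in rng.sample([x for x in range(n) if x != i], k):
            rings[i].add(key_id)   # each key is stored by exactly
            rings[j].add(key_id)   # the two nodes of the pair
            key_id += 1
    return rings
```

Two nodes can communicate securely exactly when their rings share a key identifier, which is what the gradual-deployment analysis must preserve across phases.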
Xia, Xuhua
2016-09-01
While pairwise sequence alignment (PSA) by dynamic programming is guaranteed to generate one of the optimal alignments, multiple sequence alignment (MSA) of highly divergent sequences often results in poorly aligned sequences, plaguing all subsequent phylogenetic analysis. One way to avoid this problem is to use only PSA to reconstruct phylogenetic trees, which can only be done with distance-based methods. I compared the accuracy of this new computational approach (named PhyPA, for phylogenetics by pairwise alignment) against the maximum likelihood method using MSA (the ML+MSA approach), based on nucleotide, amino acid and codon sequences simulated with different topologies and tree lengths. I present a surprising discovery that the fast PhyPA method consistently outperforms the slow ML+MSA approach for highly diverged sequences, even when all optimization options were turned on for the ML+MSA approach. Only when sequences are not highly diverged (i.e., when a reliable MSA can be obtained) does the ML+MSA approach outperform PhyPA. The true topologies are always recovered by ML with the true alignment from the simulation. However, with MSA derived from alignment programs such as MAFFT or MUSCLE, the recovered topology consistently has higher likelihood than that for the true topology. Thus, the failure to recover the true topology by ML+MSA is caused not by insufficient search of tree space, but by the distortion of phylogenetic signal by MSA methods. I have implemented PhyPA in DAMBE, together with two approaches that use multi-gene data sets to derive phylogenetic support for subtrees, equivalent to resampling techniques such as bootstrapping and jackknifing.
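A distance-based pipeline like the one described starts from model-corrected pairwise distances; the Jukes-Cantor distance below is the simplest such correction (a generic example, not DAMBE's implementation):

```python
import math

def jukes_cantor_distance(seq1, seq2):
    """ML evolutionary distance under the Jukes-Cantor model for one
    pairwise alignment: d = -(3/4) ln(1 - 4p/3), where p is the
    proportion of differing sites among gap-free columns."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != '-' and b != '-']
    p = sum(a != b for a, b in pairs) / len(pairs)
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)
```

A matrix of such distances over all sequence pairs can then be fed to any distance-based tree method (e.g., neighbor joining).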
SDT: a virus classification tool based on pairwise sequence alignment and identity calculation.
Brejnev Muhizi Muhire
The perpetually increasing rate at which viral full-genome sequences are being determined is creating a pressing demand for computational tools that will aid the objective classification of these genome sequences. Taxonomic classification approaches that are based on pairwise genetic identity measures are potentially highly automatable and are progressively gaining favour with the International Committee on Taxonomy of Viruses (ICTV). There are, however, various issues with the calculation of such measures that could potentially undermine the accuracy and consistency with which they can be applied to virus classification. Firstly, pairwise sequence identities computed based on multiple sequence alignments rather than on multiple independent pairwise alignments can lead to the deflation of identity scores with increasing dataset sizes. Also, when gap characters need to be introduced during sequence alignments to account for insertions and deletions, methodological variations in the way that these characters are introduced and handled during pairwise genetic identity calculations can cause high degrees of inconsistency in the way that different methods classify the same sets of sequences. Here we present Sequence Demarcation Tool (SDT), a free, user-friendly computer program that aims to provide a robust and highly reproducible means of objectively using pairwise genetic identity calculations to classify any set of nucleotide or amino acid sequences. SDT can produce publication-quality pairwise identity plots and colour-coded distance matrices to further aid the classification of sequences according to ICTV-approved taxonomic demarcation criteria. Besides a graphical interface version of the program for Windows computers, command-line versions of the program are available for a variety of different operating systems (including a parallel version for cluster computing platforms).
Pairwise FCM based feature weighting for improved classification of vertebral column disorders.
Unal, Yavuz; Polat, Kemal; Erdinc Kocer, H
2014-03-01
In this paper, an innovative data pre-processing method to improve the classification performance and to determine automatically the vertebral column disorders including disk hernia (DH), spondylolisthesis (SL) and normal (NO) groups has been proposed. In the classification of vertebral column disorders' dataset with three classes, a pairwise fuzzy C-means (FCM) based feature weighting method has been proposed. In this method, first of all, the vertebral column dataset has been grouped as pairwise (DH-SL, DH-NO, and SL-NO) and then these pairwise groups have been weighted using a FCM based feature set. These weighted groups have been classified using classifier algorithms including multilayer perceptron (MLP), k-nearest neighbor (k-NN), Naive Bayes, and support vector machine (SVM). The general classification performance has been obtained by averaging of classification accuracies obtained from pairwise classifier algorithms. To evaluate the performance of the proposed method, the classification accuracy, sensitivity, specificity, ROC curves, and f-measure have been used. Without the proposed feature weighting, the obtained f-measure values were 0.7738 for MLP classifier, 0.7021 for k-NN, 0.7263 for Naive Bayes, and 0.7298 for SVM classifier algorithms in the classification of vertebral column disorders' dataset with three classes. With the pairwise fuzzy C-means based feature weighting method, the obtained f-measure values were 0.9509 for MLP, 0.9313 for k-NN, 0.9603 for Naive Bayes, and 0.9468 for SVM classifier algorithms. The experimental results demonstrated that the proposed pairwise fuzzy C-means based feature weighting method is robust and effective in the classification of vertebral column disorders' dataset. In the future, this method could be used confidently for medical datasets with more classes.
Calculation of Pairwise Thermal Entanglement for Odd Qubit XX Chain
(author not listed)
2007-01-01
Using the Jordan-Wigner transformation and the finite Fourier transformation, the Hamiltonian of the XX chain can be transformed to diagonal form. If the total number of qubits N is odd, the internal energy (U) and the correlation function Gzz in thermal equilibrium can be calculated. Then the pairwise concurrence and the critical value βcr can be calculated. For XX chains with N ≥ 5, there are pairwise concurrences in thermal equilibrium in the antiferromagnetic and ferromagnetic cases when β > βcr.
Yukawa Masahiro
2006-01-01
In the stereophonic acoustic echo cancellation (SAEC) problem, fast and accurate tracking of the echo path is strongly required for stable echo cancellation. In this paper, we propose a class of efficient fast SAEC schemes with linear computational complexity (with respect to filter length). The proposed schemes are based on the pairwise optimal weight realization (POWER) technique, thus realizing a "best" strategy (in the sense of pairwise and worst-case optimization) to use multiple-state information obtained by preprocessing. Numerical examples demonstrate that the proposed schemes significantly improve the convergence behavior compared with conventional methods in terms of system mismatch as well as echo return loss enhancement (ERLE).
Marion, Zachary H; Fordyce, James A; Fitzpatrick, Benjamin M
2017-01-30
Beta diversity is an important metric in ecology quantifying differentiation or disparity in composition among communities, ecosystems, or phenotypes. To compare systems with different sizes (N, number of units within a system), beta diversity is often converted to related indices such as turnover or local/regional differentiation. Here we use simulations to demonstrate that these naive measures of dissimilarity depend on sample size and design. We show that when N is the number of sampled units (e.g., quadrats) rather than the "true" number of communities in the system (if such exists), these differentiation measures are biased estimators. We propose using average pairwise dissimilarity as an intuitive solution. That is, instead of attempting to estimate an N-community measure, we advocate estimating the expected dissimilarity between any random pairs of communities (or sampling units)-especially when the "true" N is unknown or undefined. Fortunately, measures of pairwise dissimilarity or overlap have been used in ecology for decades, and their properties are well known. Using the same simulations, we show that average pairwise metrics give consistent and unbiased estimates regardless of the number of survey units sampled. We advocate pairwise dissimilarity as a general standardization to ensure commensurability of different study systems. This article is protected by copyright. All rights reserved.
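The advocated estimator, the expected dissimilarity between random pairs of sampling units, can be sketched directly (Bray-Curtis is just one possible choice of pairwise measure):

```python
import numpy as np
from itertools import combinations

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.abs(a - b).sum() / (a + b).sum()

def mean_pairwise_dissimilarity(units):
    """Average dissimilarity over all pairs of sampled units: an
    estimate of the expected dissimilarity between two random units,
    whose value does not depend on how many units were sampled."""
    vals = [bray_curtis(units[i], units[j])
            for i, j in combinations(range(len(units)), 2)]
    return float(np.mean(vals))
```

Because it estimates a pairwise expectation rather than an N-community quantity, adding more sampling units changes only the precision of the estimate, not its target.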
Revisiting the classification of curtoviruses based on genome-wide pairwise identity
Varsani, Arvind
2014-01-25
Members of the genus Curtovirus (family Geminiviridae) are important pathogens of many wild and cultivated plant species. Until recently, relatively few full curtovirus genomes have been characterised. However, with the 19 full genome sequences now available in public databases, we revisit the proposed curtovirus species and strain classification criteria. Using pairwise identities coupled with phylogenetic evidence, revised species and strain demarcation guidelines have been instituted. Specifically, we have established 77% genome-wide pairwise identity as a species demarcation threshold and 94% genome-wide pairwise identity as a strain demarcation threshold. Hence, whereas curtovirus sequences with >77% genome-wide pairwise identity would be classified as belonging to the same species, those sharing >94% identity would be classified as belonging to the same strain. We provide step-by-step guidelines to facilitate the classification of newly discovered curtovirus full genome sequences and a set of defined criteria for naming new species and strains. The revision yields three curtovirus species: Beet curly top virus (BCTV), Spinach severe curly top virus (SpSCTV) and Horseradish curly top virus (HrCTV). © 2014 Springer-Verlag Wien.
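The two demarcation thresholds translate into a tiny classifier (a sketch; the full guidelines also require supporting phylogenetic evidence):

```python
def demarcate(pairwise_identity):
    """Apply the genome-wide identity thresholds from the abstract:
    >94% means same strain, >77% means same species. Phylogenetic
    evidence, required by the full guidelines, is omitted here."""
    if pairwise_identity > 0.94:
        return "same strain"
    if pairwise_identity > 0.77:
        return "same species, different strain"
    return "different species"
```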
On the calculation of x-ray scattering signals from pairwise radial distribution functions
Dohn, Asmus Ougaard; Biasin, Elisa; Haldrup, Kristoffer;
2015-01-01
We derive a formulation for evaluating (time-resolved) x-ray scattering signals of solvated chemical systems, based on pairwise radial distribution functions, intended to accompany molecular dynamics simulations. The derivation is described in detail to eliminate any possible ambiguities, and the result includes a modification to the atom-type formulation which, to our knowledge, is previously unaccounted for. The formulation is numerically implemented and validated.
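For orientation, the generic Debye-type relation between a radial distribution function and a scattering signal can be written down directly (this is the textbook form, not the paper's modified atom-type formulation):

```python
import numpy as np

def structure_factor_from_rdf(q, r, g, rho):
    """Pairwise contribution to a scattering signal from a radial
    distribution function via the generic Debye-type integral
    S(q) = 1 + 4*pi*rho * Int r^2 (g(r) - 1) sin(qr)/(qr) dr."""
    qr = np.outer(q, r)
    # np.sinc(x) = sin(pi*x)/(pi*x), hence sinc(qr/pi) = sin(qr)/(qr)
    integrand = r**2 * (g - 1.0) * np.sinc(qr / np.pi)
    # explicit trapezoidal rule along r
    avg = 0.5 * (integrand[:, 1:] + integrand[:, :-1])
    return 1.0 + 4.0 * np.pi * rho * (avg * np.diff(r)).sum(axis=1)
```

A structureless fluid (g(r) = 1 everywhere) gives S(q) = 1, the ideal-gas baseline, which makes a convenient sanity check for any implementation.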
The FOLDALIGN web server for pairwise structural RNA alignment and mutual motif search
Havgaard, Jakob Hull; Lyngsø, Rune B.; Gorodkin, Jan
2005-01-01
FOLDALIGN is a Sankoff-based algorithm for making structural alignments of RNA sequences. Here, we present a web server for making pairwise alignments between two RNA sequences, using the recently updated version of FOLDALIGN. The server can be used to scan two sequences for a common structural RNA motif.
Boetker, Johan P.; Koradia, Vishal; Rades, Thomas;
2012-01-01
was subjected to quench cooling thereby creating an amorphous form of the drug from both starting materials. The milled and quench cooled samples were, together with the crystalline starting materials, analyzed with X-ray powder diffraction (XRPD), Raman spectroscopy and atomic pair-wise distribution function...
Linear VSS and Distributed Commitments Based on Secret Sharing and Pairwise Checks
Fehr, Serge; Maurer, Ueli M.
2002-01-01
We present a general treatment of all non-cryptographic (i.e., information-theoretically secure) linear verifiable secret-sharing (VSS) and distributed-commitment (DC) schemes, based on an underlying secret sharing scheme, pairwise checks between players, complaints, and accusations of the dealer. ...
Fai, S.; Rafeiro, J.
2014-05-01
In 2011, Public Works and Government Services Canada (PWGSC) embarked on a comprehensive rehabilitation of the historically significant West Block of Canada's Parliament Hill. With over 17 thousand square meters of floor space, the West Block is one of the largest projects of its kind in the world. As part of the rehabilitation, PWGSC is working with the Carleton Immersive Media Studio (CIMS) to develop a building information model (BIM) that can serve as a maintenance and life-cycle management tool once construction is completed. The scale and complexity of the model have presented many challenges. One of these challenges is determining appropriate levels of detail (LoD). While still a matter of debate in the development of international BIM standards, LoD is further complicated in the context of heritage buildings because we must reconcile the LoD of the BIM with that used in the documentation process (terrestrial laser scan and photogrammetric survey data). In this paper, we will discuss our work to date on establishing appropriate LoD within the West Block BIM that will best serve the end use. To facilitate this, we have developed a single parametric model for gothic pointed arches that can be used for over seventy-five unique window types present in the West Block. Using the AEC (CAN) BIM as a reference, we have developed a workflow to test each of these window types at three distinct levels of detail. We have found that the parametric Gothic arch significantly reduces the amount of time necessary to develop scenarios to test appropriate LoD.
Classification between normal and tumor tissues based on the pair-wise gene expression ratio
Wong YC
2004-10-01
Background: Precise classification of cancer types is critically important for early cancer diagnosis and treatment. Numerous efforts have been made to use gene expression profiles to improve the precision of tumor classification. However, reliable cancer-related signals are generally lacking. Method: Using recent datasets on colon and prostate cancer, a data transformation procedure from single gene expression to pair-wise gene expression ratio is proposed. Making use of the internal consistency of each expression profiling dataset, this transformation improves the signal-to-noise ratio of the dataset and uncovers new relevant cancer-related signals (features). The efficiency of using the transformed dataset to perform normal/tumor classification was investigated using feature partitioning with informative features (gene annotation) as discriminating axes (single gene expression or pair-wise gene expression ratio). Classification results were compared to the original datasets for up to 10-feature model classifiers. Results: 82 and 262 genes that have high correlation to tissue phenotype were selected from the colon and prostate datasets, respectively. Remarkably, data transformation of the highly noisy expression data successfully lowered the coefficient of variation (CV) for the within-class samples and improved the correlation with tissue phenotypes. The transformed dataset exhibited lower CV when compared to that of single gene expression. In the colon cancer set, the minimum CV decreased from 45.3% to 16.5%. In prostate cancer, comparable CV was achieved with and without transformation. This improvement in CV, coupled with the improved correlation between the pair-wise gene expression ratio and tissue phenotypes, yielded higher classification efficiency, especially with the colon dataset (from 87.1% to 93.5%). Over 90% of the top ten discriminating axes in both datasets showed significant improvement after data transformation.
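The transformation from single-gene expression to pairwise ratio features can be sketched as follows (a minimal illustration; the eps guard against division by zero is an assumption of this sketch):

```python
import numpy as np

def pairwise_ratio_features(X, eps=1e-9):
    """Transform a samples-by-genes expression matrix into pairwise
    gene-expression ratios, one feature per gene pair (i, j), i < j.
    The resulting features are what the classifiers are trained on."""
    n_genes = X.shape[1]
    cols = [X[:, i] / (X[:, j] + eps)
            for i in range(n_genes) for j in range(i + 1, n_genes)]
    return np.column_stack(cols)
```

For g genes this yields g*(g-1)/2 ratio features per sample, so in practice the transformation is applied after pre-selecting genes correlated with the tissue phenotype, as in the abstract.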
Model reductions for inference: generality of pairwise, binary, and planar factor graphs.
Eaton, Frederik; Ghahramani, Zoubin
2013-05-01
We offer a solution to the problem of efficiently translating algorithms between different types of discrete statistical model. We investigate the expressive power of three classes of model: those with binary variables, with pairwise factors, and with planar topology, as well as their four intersections. We formalize a notion of "simple reduction" for the problem of inferring marginal probabilities and consider whether it is possible to "simply reduce" marginal inference from general discrete factor graphs to factor graphs in each of these seven subclasses. We characterize the reducibility of each class, showing in particular that the class of binary pairwise factor graphs is able to simply reduce only positive models. We also exhibit a continuous "spectral reduction" based on polynomial interpolation, which overcomes this limitation. Experiments assess the performance of standard approximate inference algorithms on the outputs of our reductions.
Pairwise-Svm for On-Board Urban Road LIDAR Classification
Shu, Zhen; Sun, Kai; Qiu, Kaijin; Ding, Kou
2016-06-01
A common method for LiDAR classification is Markov random fields (MRF). Spectral and directional features are extracted from on-board urban point clouds to construct the MRF energy function, which consists of unary and pairwise potentials. The unary terms are computed by SVM classification, with the initial labeling derived mainly from geometric shapes. The pairwise potential is estimated by naive Bayes, with the probability of adjacent objects computed from prior knowledge in the training data. The final labeling uses reweighted message passing to minimize the energy function. Because the MRF model has difficulty handling large-scale misclassification, we propose a super-voxel clustering method for over-segmentation and for grouping segments into large objects. Trees, poles, ground, and buildings are classified in this paper. The experimental results show that this method improves both classification accuracy and computation speed.
Pairwise adaptive thermostats for improved accuracy and stability in dissipative particle dynamics
Leimkuhler, Benedict; Shang, Xiaocheng
2016-01-01
We examine the formulation and numerical treatment of dissipative particle dynamics (DPD) and momentum-conserving molecular dynamics. We show that it is possible to improve both the accuracy and the stability of DPD by employing a pairwise adaptive Langevin thermostat that precisely matches the dynamical characteristics of DPD simulations (e.g., autocorrelation functions) while automatically correcting thermodynamic averages using a negative feedback loop. In the low friction regime, it is possible to replace DPD by a simpler momentum-conserving variant of the Nosé-Hoover-Langevin method based on thermostatting only pairwise interactions; we show that this method has an extra order of accuracy for an important class of observables (a superconvergence result), while also allowing larger timesteps than alternatives. All the methods mentioned in the article are easily implemented. Numerical experiments are performed in both equilibrium and nonequilibrium settings, using Lees-Edwards boundary conditions to induce shear flow.
Foo KaeY
2010-01-01
The task of localizing underwater assets involves the relative localization of each unit using only pairwise distance measurements, usually obtained from time-of-arrival or time-delay-of-arrival measurements. In the fluctuating underwater environment, a complete set of pairwise distance measurements can often be difficult to acquire, hindering a straightforward closed-form solution for the assets' relative coordinates. An iterative multidimensional scaling approach is presented, based on a weighted-majorization algorithm that tolerates missing or inaccurate distance measurements. Substantial modifications are proposed to optimize the algorithm, and the effects of refractive propagation paths are considered. A parametric study of the algorithm based on simulation results is shown. An acoustic field trial was then carried out, with field measurements highlighting the practical implementation of this algorithm.
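With a complete, noise-free distance matrix, relative coordinates follow in closed form from classical MDS; the iterative weighted-majorization method described above is needed precisely when this baseline breaks down. The sketch below is that closed-form baseline, not the proposed algorithm:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Closed-form relative localization from a complete pairwise
    distance matrix via classical multidimensional scaling. Returns
    coordinates up to rotation, reflection, and translation."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:dim]    # largest eigenvalues first
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))
```

For exact Euclidean distances the recovered configuration reproduces every pairwise distance; missing or noisy entries are what force the move to iterative majorization.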
Pairwise correlations via quantum discord and its geometric measure in a four-qubit spin chain
Abdel-Baset A. Mohamed
2013-04-01
The dynamics of pairwise correlations, including quantum entanglement (QE) and quantum discord (QD) with the geometric measure of quantum discord (GMQD), are studied in the four-qubit Heisenberg XX spin chain. The results show that the effect of the entanglement degree of the initial state on the pairwise correlations is stronger for alternate qubits than for nearest-neighbor qubits. This parameter results in sudden death for QE, but it cannot do so for QD and GMQD. With different values of this entanglement parameter of the initial state, QD and GMQD differ and are sensitive to any change in this parameter. It is found that GMQD is more robust than both QD and QE at describing correlations with nonzero values, which offers a valuable resource for quantum computation.
Pan, Dongbo; Lu, Xi; Liu, Juan; Deng, Yong
2014-01-01
Decision-making, as a way to discover preference rankings, has been used in various fields. However, owing to the uncertainty in group decision-making, how to rank alternatives from incomplete pairwise comparisons remains an open issue. In this paper, an improved method is proposed for ranking alternatives from incomplete pairwise comparisons using Dempster-Shafer evidence theory and information entropy. First, the comparison of alternatives within each group is addressed, taking into consideration the probability assignment of the chosen preference. Experiments verified that the information entropy of the data itself can objectively determine a different weight for each group's choices. Numerical examples in group decision-making environments are used to test the effectiveness of the proposed method. Moreover, the divergence of the ranking mechanism is analyzed briefly in the concluding section. PMID:25250393
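The entropy-weighting step can be illustrated with a small sketch. This is not the authors' Dempster-Shafer combination, only the information-entropy idea it relies on: groups whose preference scores are more decisive (lower entropy) receive a higher weight. All function names here are illustrative.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (base 2) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def entropy_weights(groups):
    """Weight each group inversely to the entropy of its normalized scores.

    Lower entropy = more decisive judgments = higher weight (a common
    convention; the paper's evidence-theory combination is more involved).
    """
    ents = []
    for scores in groups:
        total = sum(scores)
        ents.append(shannon_entropy([s / total for s in scores]))
    max_e = math.log2(len(groups[0]))  # entropy of the uniform distribution
    raw = [max_e - e for e in ents]    # "decisiveness" of each group
    s = sum(raw)
    return [r / s for r in raw]

def rank_alternatives(groups):
    """Combine per-group alternative scores into a single ranking."""
    w = entropy_weights(groups)
    n = len(groups[0])
    combined = [sum(w[g] * groups[g][i] for g in range(len(groups)))
                for i in range(n)]
    order = sorted(range(n), key=lambda i: -combined[i])  # best first
    return order, combined
```

Note that a group giving near-uniform scores contributes almost nothing to the combined ranking, which is the intended behaviour of the entropy weighting.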
Pairwise adaptive thermostats for improved accuracy and stability in dissipative particle dynamics
Leimkuhler, Benedict; Shang, Xiaocheng
2016-11-01
We examine the formulation and numerical treatment of dissipative particle dynamics (DPD) and momentum-conserving molecular dynamics. We show that it is possible to improve both the accuracy and the stability of DPD by employing a pairwise adaptive Langevin thermostat that precisely matches the dynamical characteristics of DPD simulations (e.g., autocorrelation functions) while automatically correcting thermodynamic averages using a negative feedback loop. In the low friction regime, it is possible to replace DPD by a simpler momentum-conserving variant of the Nosé-Hoover-Langevin method based on thermostatting only pairwise interactions; we show that this method has an extra order of accuracy for an important class of observables (a superconvergence result), while also allowing larger timesteps than alternatives. All the methods mentioned in the article are easily implemented. Numerical experiments are performed in both equilibrium and nonequilibrium settings, using Lees-Edwards boundary conditions to induce shear flow.
Kae Y. Foo
2010-01-01
The task of localizing underwater assets involves the relative localization of each unit using only pairwise distance measurements, usually obtained from time-of-arrival or time-delay-of-arrival measurements. In the fluctuating underwater environment, a complete set of pair-wise distance measurements can often be difficult to acquire, thus hindering a straightforward closed-form solution in deriving the assets' relative coordinates. An iterative multidimensional scaling approach is presented based upon a weighted-majorization algorithm that tolerates missing or inaccurate distance measurements. Substantial modifications are proposed to optimize the algorithm, while the effects of refractive propagation paths are considered. A parametric study of the algorithm based upon simulation results is shown. An acoustic field-trial was then carried out, presenting field measurements to highlight the practical implementation of this algorithm.
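A minimal stand-in for the weighted-MDS idea can be sketched as follows: zero weights mark missing or untrusted pairwise distances, and plain gradient descent on the weighted stress replaces the paper's weighted-majorization updates (which converge faster but are longer to write down). Function names are illustrative.

```python
import math
import random

def weighted_stress(X, D, W):
    """Weighted stress: sum of W[i][j]*(|x_i - x_j| - D[i][j])^2 over pairs."""
    s = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if W[i][j] > 0:
                s += W[i][j] * (math.dist(X[i], X[j]) - D[i][j]) ** 2
    return s

def mds_missing(D, W, dim=2, iters=3000, lr=0.01, seed=0):
    """Recover coordinates from a partially observed distance matrix D.

    W[i][j] = 0 marks a missing measurement (weight zero). Gradient descent
    on the weighted stress; a simpler stand-in for weighted majorization.
    """
    rng = random.Random(seed)
    n = len(D)
    X = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        grad = [[0.0] * dim for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i != j and W[i][j] > 0:
                    dij = math.dist(X[i], X[j]) or 1e-9  # avoid divide-by-zero
                    c = 2 * W[i][j] * (dij - D[i][j]) / dij
                    for k in range(dim):
                        grad[i][k] += c * (X[i][k] - X[j][k])
        for i in range(n):
            for k in range(dim):
                X[i][k] -= lr * grad[i][k]
    return X
```

The recovered configuration is only determined up to rotation, reflection and translation, which is why quality is judged by the residual stress rather than by comparing coordinates directly.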
DIALIGN P: Fast pair-wise and multiple sequence alignment using parallel processors
Kaufmann Michael; Nieselt Kay; Schmollinger Martin; Morgenstern Burkhard
2004-01-01
Abstract Background Parallel computing is frequently used to speed up computationally expensive tasks in Bioinformatics. Results Herein, a parallel version of the multi-alignment program DIALIGN is introduced. We propose two ways of dividing the program into independent sub-routines that can be run on different processors: (a) pair-wise sequence alignments that are used as a first step to multiple alignment account for most of the CPU time in DIALIGN. Since alignments of different sequence pa...
Carreno, Victor A.
2015-01-01
Pair-wise Trajectory Management (PTM) is a cockpit-based delegated-responsibility separation standard. When an air traffic service provider gives a PTM clearance to an aircraft and the flight crew accepts the clearance, the flight crew will maintain spacing and separation from a designated aircraft. A PTM along-track algorithm will receive state information from the designated aircraft and from the ownship to produce speed guidance for the flight crew to maintain spacing and separation.
Zhixin Yang
2013-01-01
A reliable fault diagnostic system for a gas turbine generator system (GTGS), which is complicated and inherently subject to many types of component faults, is essential to avoid interruption of the electricity supply. However, GTGS diagnosis faces challenges in terms of the existence of simultaneous faults and the high cost of acquiring the exponentially increasing number of simultaneous-fault vibration signals needed for constructing the diagnostic system. This research proposes a new diagnostic framework combining feature extraction, a pairwise-coupled probabilistic classifier, and decision threshold optimization. The feature extraction module adopts wavelet packet transform and time-domain statistical features to extract vibration signal features. Kernel principal component analysis is then applied to further reduce the redundant features. The features of single faults in a simultaneous-fault pattern are extracted and then detected using a probabilistic classifier, namely the pairwise-coupled relevance vector machine, which is trained with single-fault patterns only. Therefore, a training dataset of simultaneous-fault patterns is unnecessary. To optimize the decision threshold, this research proposes a grid-search method, which can ensure a global solution as compared with traditional computational intelligence techniques. Experimental results show that the proposed framework performs well for both single-fault and simultaneous-fault diagnosis and is superior to frameworks without feature extraction and pairwise coupling.
A Parallel Genetic Algorithm Based on Spark for Pairwise Test Suite Generation
Rong-Zhi Qi; Zhi-Jian Wang; Shui-Yan Li
2016-01-01
Pairwise testing is an effective test generation technique that requires all pairs of parameter values to be covered by at least one test case. It has been proven that generating a minimum test suite is an NP-complete problem. Genetic algorithms have been used for pairwise test suite generation by researchers. However, it is always a time-consuming process, which leads to significant limitations and obstacles for practical use of genetic algorithms on large-scale test problems. Parallelism is an effective way not only to enhance computational performance but also to improve the quality of the solutions. In this paper, we use Spark, a fast and general parallel computing platform, to parallelize the genetic algorithm to tackle the problem. We propose a two-phase parallelization algorithm including fitness evaluation parallelization and genetic operation parallelization. Experimental results show that our algorithm outperforms the sequential genetic algorithm and competes with other approaches in both test suite size and computational performance. As a result, our algorithm is a promising improvement of the genetic algorithm for pairwise test suite generation.
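The coverage requirement ("every pair of parameter values appears in at least one test") can be made concrete with a greedy baseline of the kind such genetic algorithms are designed to beat. This sketch enumerates all candidate tests, so it is only usable for small parameter spaces; names are illustrative.

```python
from itertools import combinations, product

def all_value_pairs(params):
    """Every ((param i, value), (param j, value)) pair, i < j, to be covered."""
    items = [(i, v) for i, vals in enumerate(params) for v in vals]
    return {(a, b) for a, b in combinations(items, 2) if a[0] != b[0]}

def pairs_of(test):
    """The value pairs covered by one concrete test case."""
    return set(combinations(enumerate(test), 2))

def greedy_pairwise_suite(params):
    """Greedy all-pairs suite: repeatedly add the candidate test covering the
    most still-uncovered pairs. Enumerating product(*params) is exponential,
    so this is a toy baseline, not the paper's scalable approach."""
    remaining = all_value_pairs(params)
    candidates = list(product(*params))
    suite = []
    while remaining:
        best = max(candidates, key=lambda t: len(pairs_of(t) & remaining))
        suite.append(best)
        remaining -= pairs_of(best)
    return suite
```

For three boolean parameters the full product has 8 tests, while an all-pairs suite needs only 4; the greedy heuristic typically lands at or near that minimum.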
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Collective behaviours in the stock market -- A maximum entropy approach
Bury, Thomas
2014-01-01
Scale invariance, collective behaviours and structural reorganization are crucial for portfolio management (portfolio composition, hedging, alternative definition of risk, etc.). This lack of any characteristic scale and such elaborated behaviours find their origin in the theory of complex systems. There are several mechanisms which generate scale invariance but maximum entropy models are able to explain both scale invariance and collective behaviours. The study of the structure and collective modes of financial markets attracts more and more attention. It has been shown that some agent based models are able to reproduce some stylized facts. Despite their partial success, there is still the problem of rules design. In this work, we used a statistical inverse approach to model the structure and co-movements in financial markets. Inverse models restrict the number of assumptions. We found that a pairwise maximum entropy model is consistent with the data and is able to describe the complex structure of financial...
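The pairwise maximum entropy model mentioned here is, for binary variables, the Ising-like distribution P(s) proportional to exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j). A sketch by exact enumeration (feasible only for a handful of assets; fitting h and J to data is the harder inverse problem the abstract refers to):

```python
import math
from itertools import product

def pairwise_maxent_prob(h, J):
    """Distribution of a pairwise maximum entropy (Ising-like) model over
    spin configurations s_i in {-1, +1}:
        P(s) ~ exp(sum_i h_i*s_i + sum_{i<j} J[i][j]*s_i*s_j).
    Exact enumeration over all 2^n states; fine for small n."""
    n = len(h)
    states = list(product((-1, 1), repeat=n))
    weights = []
    for s in states:
        e = sum(h[i] * s[i] for i in range(n))
        e += sum(J[i][j] * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        weights.append(math.exp(e))
    Z = sum(weights)  # partition function
    return {s: w / Z for s, w in zip(states, weights)}
```

With a positive coupling J between two spins, aligned configurations become more probable than anti-aligned ones, which is how such a model captures co-movements between assets.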
Wingender Edgar
2008-05-01
Abstract Background Currently, there is a gap between purely theoretical studies of the topology of large bioregulatory networks and the practical traditions and interests of experimentalists. While the theoretical approaches emphasize the global characterization of regulatory systems, the practical approaches focus on the role of distinct molecules and genes in regulation. To bridge the gap between these opposite approaches, one needs to combine 'general' with 'particular' properties and translate abstract topological features of large systems into testable functional characteristics of individual components. Here, we propose a new topological parameter - the pairwise disconnectivity index of a network's element - that is capable of such bridging. Results The pairwise disconnectivity index quantifies how crucial an individual element is for sustaining the communication ability between connected pairs of vertices in a network that is displayed as a directed graph. Such an element might be a vertex (i.e., molecules, genes), an edge (i.e., reactions, interactions), as well as a group of vertices and/or edges. The index can be viewed as a measure of topological redundancy of regulatory paths which connect different parts of a given network and as a measure of sensitivity (robustness) of this network to the presence (absence) of each individual element. Accordingly, we introduce the notion of a path-degree of a vertex in terms of its corresponding incoming, outgoing and mediated paths, respectively. The pairwise disconnectivity index has been applied to the analysis of several regulatory networks from various organisms. The importance of an individual vertex or edge for the coherence of the network is determined by the particular position of the given element in the whole network. Conclusion Our approach enables evaluation of the effect of removing each element (i.e., vertex, edge, or their combinations) from a network. The greatest potential value of
Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign
Sharma Gaurav
2007-04-01
Abstract Background Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for the purpose of establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction. Results The proposed technique eliminates manual parameter selection in Dynalign and provides significant computational time savings in comparison to prior constraints in Dynalign while simultaneously providing a small improvement in the structural prediction accuracy. Savings are also realized in memory. In experiments over a 5S RNA dataset with average sequence length of approximately 120 nucleotides, the method reduces computation by a factor of 2. The method performs favorably in comparison to other programs for pairwise RNA structure prediction: yielding better accuracy, on average, and requiring significantly fewer computational resources. Conclusion Probabilistic analysis can be utilized in order to automate the determination of alignment constraints for
Pairwise contact energy statistical potentials can help to find probability of point mutations.
Saravanan, K M; Suvaithenamudhan, S; Parthasarathy, S; Selvaraj, S
2017-01-01
To adopt a particular fold, a protein requires several interactions between its amino acid residues. The energetic contribution of these residue-residue interactions can be approximated by extracting statistical potentials from known high resolution structures. Several methods based on statistical potentials extracted from unrelated proteins are found to make a better prediction of the probability of point mutations. We postulate that statistical potentials extracted from known structures of similar folds with varying sequence identity can be a powerful tool to examine the probability of point mutation. With this in mind, we have derived pairwise residue and atomic contact energy potentials for the different functional families that adopt the (α/β)8 TIM-barrel fold. We carried out computational point mutations at various conserved residue positions in yeast triose phosphate isomerase, for which experimental results are already reported. We have also performed molecular dynamics simulations on a subset of point mutants to make a comparative study. The difference in pairwise residue and atomic contact energy between the wildtype and various point mutations reveals the probability of mutations at a particular position. Interestingly, we found that our computational prediction agrees with the experimental studies of Silverman et al. (Proc Natl Acad Sci 2001;98:3092-3097) and performs better than iMutant and the Cologne University Protein Stability Analysis Tool. The present work thus suggests that deriving pairwise contact energy potentials and molecular dynamics simulations of functionally important folds could help us predict the probability of point mutations, which may ultimately reduce the time and cost of mutation experiments. Proteins 2016; 85:54-64. © 2016 Wiley Periodicals, Inc.
Pairwise local structural alignment of RNA sequences with sequence similarity less than 40%
Havgaard, Jakob Hull; Lyngsø, Rune B.; Stormo, Gary D.
2005-01-01
Motivation: Searching for non-coding RNA (ncRNA) genes and structural RNA elements (eleRNA) are major challenges in gene finding today, as these are often conserved in structure rather than in sequence. Even though the number of available methods is growing, it is still of interest to pairwise… The structure prediction performance for a family is typically around 0.7 using the Matthews correlation coefficient. In case (2), the algorithm is successful at locating RNA families with an average sensitivity of 0.8 and a positive predictive value of 0.9 using a BLAST-like hit selection scheme. Availability…
Boetker, Johan P.; Koradia, Vishal; Rades, Thomas
2012-01-01
… was subjected to quench cooling, thereby creating an amorphous form of the drug from both starting materials. The milled and quench-cooled samples were, together with the crystalline starting materials, analyzed with X-ray powder diffraction (XRPD), Raman spectroscopy and atomic pair-wise distribution function (PDF) analysis of the XRPD pattern. When compared to XRPD and Raman spectroscopy, the PDF analysis was superior in displaying the difference between the amorphous samples prepared by the milling and quench cooling approaches from the two starting materials.
Li, Yujie; Dai, Yue; Shi, Yu
2017-02-01
Quantum entanglement is the characteristic quantum correlation. Here, we use this concept to analyze the quantum entanglement generated by Schwinger production of particle-antiparticle pairs in an electric field, as well as the change of entanglement as a consequence of the electric field effect on a pre-existing entangled pair of particles. The system is partitioned by using momentum modes. Various kinds of pairwise mode entanglement are calculated as functions of the electric field. Both constant and pulsed electric fields are considered. The use of entanglement exposes information beyond that in particle number distributions.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
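The maximum likelihood principle for an associative memory has a compact reading: given a query with erased positions, and assuming a uniform source with a symmetric error model, the ML estimate is simply the stored pattern with the fewest mismatches on the observed positions. A sketch (illustrative names; the paper analyses error rates, not this retrieval loop):

```python
def ml_retrieve(memory, query):
    """Maximum likelihood retrieval from an associative memory.

    `query` is a partial pattern with None marking erased symbols. Under a
    uniform source and symmetric errors, the ML estimate is the stored
    pattern minimizing mismatches on the observed positions (ties broken
    by storage order).
    """
    def mismatches(stored):
        return sum(1 for s, q in zip(stored, query)
                   if q is not None and s != q)
    return min(memory, key=mismatches)
```

With erasures only (no substitution errors), this reduces to returning any stored pattern consistent with the observed symbols, which is the error/erasure-resilience property described above.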
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Hardy, David J; Wolff, Matthew A; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D
2016-03-21
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
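The kernel-splitting idea can be made concrete for the first level. A standard multilevel summation construction splits 1/r into a short-range part that vanishes beyond a cutoff a and a smooth long-range part, using an even polynomial that matches 1/r and its first two derivatives at r = a (the C^2 softening; the function name is illustrative):

```python
def msm_split(r, a):
    """Split the Coulomb kernel 1/r at cutoff a into (short, long) with
    short + long == 1/r exactly.

    For r < a the smooth part uses the even-polynomial softening
    gamma(rho) = 15/8 - (5/4)*rho^2 + (3/8)*rho^4, rho = r/a, which matches
    1/rho and its first two derivatives at rho = 1, so the short-range part
    goes to zero C^2-smoothly at the cutoff.
    """
    rho = r / a
    if rho >= 1.0:
        smooth = 1.0 / r                       # beyond the cutoff: all smooth
    else:
        smooth = (15 / 8 - (5 / 4) * rho**2 + (3 / 8) * rho**4) / a
    return 1.0 / r - smooth, smooth
```

The short-range part is evaluated directly over nearby pairs, while the slowly varying smooth part is what gets interpolated from increasingly coarse grids at the higher levels.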
Walton, Jay R; Rivera-Rivera, Luis A; Lucchese, Robert R; Bevan, John W
2016-05-26
Force-based canonical approaches have recently given a unified but different viewpoint on the nature of bonding in pairwise interatomic interactions. Differing molecular categories (covalent, ionic, van der Waals, hydrogen, and halogen bonding) of representative interatomic interactions with binding energies ranging from 1.01 to 1072.03 kJ/mol have been modeled canonically giving a rigorous semiempirical verification to high accuracy. However, the fundamental physical basis expected to provide the inherent characteristics of these canonical transformations has not yet been elucidated. Subsequently, it was shown through direct numerical differentiation of these potentials that their associated force curves have canonical shapes. However, this approach to analyzing force results in inherent loss of accuracy coming from numerical differentiation of the potentials. We now show that this serious obstruction can be avoided by directly demonstrating the canonical nature of force distributions from the perspective of the Hellmann-Feynman theorem. This requires only differentiation of explicitly known Coulombic potentials, and we discuss how this approach to canonical forces can be used to further explain the nature of chemical bonding in pairwise interatomic interactions. All parameter values used in the canonical transformation are determined through explicit physical based algorithms, and it does not require direct consideration of electron correlation effects.
Wilkinson, Robert R; Sharkey, Kieran J
2016-01-01
We consider a generalised form of Karrer and Newman's (Phys. Rev. E 82, 016101, 2010) message passing representation of S(E)IR dynamics and show that this, and hence the original system of Karrer and Newman, has a unique feasible solution. The rigorous bounds on the stochastic dynamics, and exact results for trees, first obtained by Karrer and Newman, still hold in this more general setting. We also derive an expression which provides a rigorous lower bound on the variance of the number of susceptibles at any time for trees. By applying the message passing approach to stochastic SIR dynamics on symmetric graphs, we then obtain several key results. First, we obtain a low-dimensional message passing system comprising only four equations. From this system, by assuming that transmission processes are Poisson and independent of the recovery processes, we derive a non-Markovian pairwise model which gives exactly the same infectious time series as the message passing system. Thus, this pairwise model provides th...
Physicochemical property distributions for accurate and rapid pairwise protein homology detection
Oehmen Christopher S
2010-03-01
Abstract Background The challenge of remote homology detection is that many evolutionarily related sequences have very little similarity at the amino acid level. Kernel-based discriminative methods, such as support vector machines (SVMs), that use vector representations of sequences derived from sequence properties have been shown to have superior accuracy when compared to traditional approaches for the task of remote homology detection. Results We introduce a new method for feature vector representation based on the physicochemical properties of the primary protein sequence. A distribution of physicochemical property scores is assembled from 4-mers of the sequence and normalized based on the null distribution of the property over all possible 4-mers. With this approach there is little computational cost associated with the transformation of the protein into feature space, and overall performance in terms of remote homology detection is comparable with current state-of-the-art methods. We demonstrate that the features can be used for the task of pairwise remote homology detection with improved accuracy versus sequence-based methods such as BLAST and other feature-based methods of similar computational cost. Conclusions A protein feature method based on physicochemical properties is a viable approach for extracting features in a computationally inexpensive manner while retaining the sensitivity of SVM protein homology detection. Furthermore, identifying features that can be used for generic pairwise homology detection in lieu of family-based homology detection is important for applications such as large database searches and comparative genomics.
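The 4-mer feature construction can be sketched for a single property. This toy version histograms the mean Kyte-Doolittle hydropathy over all 4-mers of a sequence (only a subset of residues is tabulated here, and the bin edges are arbitrary choices; the paper uses several properties and normalizes against the null distribution over all possible 4-mers):

```python
# Kyte-Doolittle hydropathy values for a subset of residues (standard scale).
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'G': -0.4,
      'L': 3.8, 'K': -3.9, 'I': 4.5, 'V': 4.2, 'F': 2.8, 'S': -0.8}

def kmer_property_histogram(seq, k=4, bins=5, lo=-4.5, hi=4.5):
    """Histogram of mean property scores over all k-mers of a sequence,
    normalized to a probability vector (a simplified sketch of the
    feature-vector construction described in the abstract)."""
    counts = [0] * bins
    scores = [sum(KD[a] for a in seq[i:i + k]) / k
              for i in range(len(seq) - k + 1)]
    for s in scores:
        b = max(0, min(bins - 1, int((s - lo) / (hi - lo) * bins)))
        counts[b] += 1
    return [c / len(scores) for c in counts]
```

The resulting fixed-length probability vector is the kind of representation that can be fed to an SVM kernel regardless of the original sequence length.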
Statistical properties of pairwise distances between leaves on a random Yule tree.
Sheinman, Michael; Massip, Florian; Arndt, Peter F
2015-01-01
A Yule tree is the result of a branching process with constant birth and death rates. Such a process serves as an instructive null model of many empirical systems, for instance, the evolution of species leading to a phylogenetic tree. However, often in phylogeny the only available information is the pairwise distances between a small fraction of extant species representing the leaves of the tree. In this article we study statistical properties of the pairwise distances in a Yule tree. Using a method based on a recursion, we derive an exact, analytic and compact formula for the expected number of pairs separated by a certain time distance. This number turns out to follow an increasing exponential function. This property of a Yule tree can serve as a simple test for empirical data to be well described by a Yule process. We further use this recursive method to calculate the expected number of the n-most closely related pairs of leaves and the number of cherries separated by a certain time distance. To make our results more useful for realistic scenarios, we explicitly take into account that the leaves of a tree may be incompletely sampled and derive a criterion for poorly sampled phylogenies. We show that our result can account for empirical data, using two families of bird species.
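The quantity being studied, the distribution of pairwise leaf distances, is easy to generate empirically. The sketch below simulates a pure-birth (Yule) tree and computes the patristic distance 2*(t_now - t_MRCA) for every leaf pair; it simulates the process directly rather than using the paper's recursion, and all names are illustrative.

```python
import random
from itertools import combinations

class Node:
    __slots__ = ("parent", "time")
    def __init__(self, parent, time):
        self.parent, self.time = parent, time

def yule_tree(n_leaves, lam=1.0, seed=7):
    """Simulate a pure-birth (Yule) tree until it has n_leaves leaves.
    Returns the leaves and the time of the last split."""
    rng = random.Random(seed)
    leaves, t = [Node(None, 0.0)], 0.0
    while len(leaves) < n_leaves:
        t += rng.expovariate(lam * len(leaves))   # next branching event
        mom = leaves.pop(rng.randrange(len(leaves)))
        mom.time = t                              # mom becomes the split node
        leaves += [Node(mom, t), Node(mom, t)]
    return leaves, t

def mrca_time(a, b):
    """Split time of the most recent common ancestor of two leaves."""
    anc, x = set(), a
    while x is not None:
        anc.add(id(x))
        x = x.parent
    x = b
    while id(x) not in anc:
        x = x.parent
    return x.time

def pairwise_distances(leaves, t_now):
    """Patristic distance 2*(t_now - t_MRCA) for every leaf pair."""
    return [2 * (t_now - mrca_time(a, b))
            for a, b in combinations(leaves, 2)]
```

Histogramming these distances over many replicates is one way to check the increasing-exponential shape the formula predicts.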
Polansky, Leo; Wittemyer, George
2011-03-06
The study of collective or group-level movement patterns can provide insight regarding the socio-ecological interface, the evolution of self-organization and mechanisms of inter-individual information exchange. The suite of drivers influencing coordinated movement trajectories occur across scales, resulting from regular annual, seasonal and circadian stimuli and irregular intra- or interspecific interactions and environmental encounters acting on individuals. Here, we promote a conceptual framework with an associated statistical machinery to quantify the type and degree of synchrony, spanning absence to complete, in pairwise movements. The application of this framework offers a foundation for detailed understanding of collective movement patterns and causes. We emphasize the use of Fourier and wavelet approaches of measuring pairwise movement properties and illustrate them with simulations that contain different types of complexity in individual movement, correlation in movement stochasticity, and transience in movement relatedness. Application of this framework to movements of free-ranging African elephants (Loxodonta africana) provides unique insight on the separate roles of sociality and ecology in the fission-fusion society of these animals, quantitatively characterizing the types of bonding that occur at different levels of social relatedness in a movement context. We conclude with a discussion about expanding this framework to the context of larger (greater than three) groups towards understanding broader population and interspecific collective movement patterns and their mechanisms.
Pairwise Check Decoding for LDPC Coded Two-Way Relay Block Fading Channels
Liu, Jianquan; Xu, Youyun
2011-01-01
Partial decoding is known to have the potential to achieve a larger rate region than that of full decoding in two-way relay (TWR) channels. Existing partial decoding realizations are however designed for Gaussian channels and with a static physical layer network coding (PLNC) mapping. In this paper, we propose a new channel coding solution at the relay, called pairwise check decoding (PCD), for low-density parity-check (LDPC) coded TWR systems over block fading channels. The main idea is to form a check relationship table (check-relation-tab) for the superimposed LDPC coded packet pair in the multiple access (MA) phase in conjunction with an adaptive PLNC mapping in the broadcast (BC) phase. Using PCD, we then present a partial decoding method, two-stage closest-neighbor clustering with PCD (TS-CNC-PCD), with the aim of minimizing the worst pairwise error performance. Moreover, a kind of correlated-rows optimization, named the minimum correlation optimization (MCO), is proposed for selecting the bet...
MARS: computing three-dimensional alignments for multiple ligands using pairwise similarities.
Klabunde, Thomas; Giegerich, Clemens; Evers, Andreas
2012-08-27
The three-dimensional (3D) superimposition of molecules of one biological target reflecting their relative bioactive orientation is key for several ligand-based drug design studies (e.g., QSAR studies, pharmacophore modeling). However, with the lack of sufficient ligand-protein complex structures, an experimental alignment is difficult or often impossible to obtain. Several computational 3D alignment tools have been developed by academic or commercial groups to address this challenge. Here, we present a new approach, MARS (Multiple Alignments by ROCS-based Similarity), that is based on the pairwise alignment of all molecules within the data set using the tool ROCS (Rapid Overlay of Chemical Structures). Each pairwise alignment is scored, and the results are captured in a score matrix. The ideal superimposition of the compounds in the set is then identified by the analysis of the score matrix building stepwise a superimposition of all molecules. The algorithm exploits similarities among all molecules in the data set to compute an optimal 3D alignment. This alignment tool presented here can be used for several applications, including pharmacophore model generation, 3D QSAR modeling, 3D clustering, identification of structural outliers, and addition of compounds to an already existing alignment. Case studies are shown, validating the 3D alignments for six different data sets.
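The score-matrix analysis step can be sketched in miniature: given a symmetric pairwise similarity matrix (in MARS, ROCS overlay scores), pick a reference compound that maximizes total similarity and add the remaining compounds best-first. This is a simplified star-alignment reading of the stepwise build-up described above, not the tool's exact procedure.

```python
def star_alignment_order(score):
    """Choose a reference molecule and an ordering for building a multiple
    3D alignment from a pairwise similarity (score) matrix.

    Reference = row with the largest total similarity; remaining molecules
    are queued by decreasing similarity to the reference.
    """
    n = len(score)
    ref = max(range(n), key=lambda i: sum(score[i]))
    rest = sorted((j for j in range(n) if j != ref),
                  key=lambda j: -score[ref][j])
    return ref, rest
```

Each queued molecule would then be superimposed using its stored pairwise alignment onto the growing ensemble, which is where the actual 3D transformation (here omitted) comes in.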
Pairwise Operator Learning for Patch Based Single-image Super-resolution.
Tang, Yi; Shao, Ling
2016-12-14
Motivated by the fact that image patches can be inherently represented as matrices, single-image super-resolution is treated in this paper as a problem of learning regression operators in a matrix space. The regression operators that map low-resolution image patches to high-resolution image patches are defined by left and right multiplication operators. The pairwise operators are used to extract the row and column information, respectively, of low-resolution image patches for recovering high-resolution estimates. The patch-based regression algorithm possesses three favorable properties. First, the proposed super-resolution algorithm is efficient during both training and testing, because image patches are treated as matrices. Second, the data storage requirement of the optimal pairwise operator is far less than that of most popular single-image super-resolution algorithms, because only two small matrices need to be stored. Last, the super-resolution performance is competitive with most popular single-image super-resolution algorithms, because both row and column information of image patches is considered. Experimental results show the efficiency and effectiveness of the proposed patch-based single-image super-resolution algorithm.
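The left/right multiplication idea in this abstract can be sketched in a few lines. This is a hedged illustration, not the paper's learned operators: `apply_pairwise_operator`, the 2x2 patch, and the toy operators `L` and `R` are invented for demonstration.

```python
# Minimal sketch: a pairwise operator maps a low-resolution patch X (a
# matrix) to a high-resolution estimate via left and right multiplication,
# Y = L @ X @ R.  L mixes row information, R mixes column information.
# The operators here are illustrative (a scaled identity), not learned.

def matmul(A, B):
    """Plain-Python matrix multiply for small patches."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def apply_pairwise_operator(L, X, R):
    """High-resolution estimate Y = L @ X @ R."""
    return matmul(matmul(L, X), R)

X = [[1, 2],
     [3, 4]]                 # low-resolution 2x2 patch
L = [[2, 0], [0, 2]]         # left operator (acts on rows)
R = [[1, 0], [0, 1]]         # right operator (acts on columns)
Y = apply_pairwise_operator(L, X, R)
print(Y)  # [[2, 4], [6, 8]]
```

The storage claim follows directly: for p x p patches, storing L and R costs on the order of 2p^2 entries, versus p^4 for a single operator acting on vectorized patches.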
A water market simulator considering pair-wise trades between agents
Huskova, I.; Erfani, T.; Harou, J. J.
2012-04-01
In many basins in England, no further water abstraction licences are available. Trading water between water rights holders has been recognized as a potentially effective and economically efficient strategy to mitigate increasing scarcity. A screening tool that could assess the potential for trade through realistic simulation of individual water rights holders would help assess this solution's potential contribution to local water management. We propose an optimisation-driven water market simulator that predicts pair-wise trades in a catchment and represents their interaction with natural hydrology and engineered infrastructure. A model is used to emulate licence holders' willingness to engage in short-term trade transactions. In their simplest form, agents are represented using an economic benefit function. The working hypothesis is that trading behaviour can be partially predicted based on differences in marginal values of water over space and time and on estimates of the transaction costs of pair-wise trades. We discuss the further possibility of embedding rules, norms and preferences of the different water user sectors to more realistically represent the behaviours, motives and constraints of individual licence holders. The potential benefits and limitations of such a social simulation (agent-based) approach are contrasted with our simulator, where agents are driven by economic optimization. A case study based on the Dove River Basin (UK) demonstrates model inputs and outputs. The ability of the model to suggest impacts of water rights policy reforms on trading is discussed.
Pairwise Interaction Extended Point Particle (PIEP) Model for a Random Array of Spheres
Akiki, Georges; Jackson, Thomas; Balachandar, Sivaramakrishnan; CenterCompressible Multiphase Turbulence Team
2016-11-01
This study investigates flow past a random array of spherical particles. Understanding the governing forces within these arrays is crucial for obtaining accurate models for particle-laden simulations; such models must faithfully reflect the sub-grid interactions between the particles and the continuous phase. The models in use today assume an average force on all particles within the array based on the mean volume fraction and Reynolds number. Here, we develop a model that computes the drag and lateral forces on each particle by accounting for the precise locations of a few surrounding neighbors. A pairwise interaction is assumed, where the perturbation flow induced by each neighbor is considered separately and the effects of all neighbors are then linearly superposed to obtain the total perturbation. The Faxén correction is used to quantify the force perturbation due to the presence of the neighbors. The single-neighbor perturbations are mapped in the vicinity of a reference sphere and stored as libraries. We test the Pairwise Interaction Extended Point-Particle (PIEP) model for random arrays at two volume fractions, φ = 0.1 and 0.21, and Reynolds numbers starting from 16, against DNS performed using the immersed boundary method. We observe that the PIEP model prediction correlates much better with the DNS results than the classical mean drag model prediction.
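The superposition step described above is simple to sketch. This is a hedged stand-in: `mean_drag` and `perturbation_map` are illustrative placeholders, not the paper's Faxén-corrected libraries.

```python
# Hedged sketch of the PIEP idea: the force on a reference sphere is the
# mean-drag model prediction plus linearly superposed perturbations, one
# per nearby neighbor, looked up from a precomputed pairwise map.

def mean_drag():
    """Placeholder for the classical mean-drag model value."""
    return 1.0

def perturbation_map(dx, dy, dz):
    """Illustrative decaying force perturbation from one neighbor
    at relative position (dx, dy, dz)."""
    r2 = dx * dx + dy * dy + dz * dz
    return -0.1 / r2 if r2 > 0 else 0.0

def piep_force(neighbors):
    """Mean drag plus linear superposition of neighbor perturbations."""
    return mean_drag() + sum(perturbation_map(*d) for d in neighbors)

neighbors = [(2.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
print(piep_force(neighbors))
```

With no neighbors the model reduces to the mean-drag prediction, which is exactly the classical model the paper improves on.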
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategy in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
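The robustness argument rests on the shape of the correntropy measure itself, which can be shown in a few lines. This is a hedged sketch of the standard empirical correntropy with a Gaussian kernel; the function name and kernel width are illustrative choices, not the paper's exact formulation.

```python
import math

# Empirical correntropy: the mean Gaussian kernel of the residuals.
# Because the kernel decays with the error, a single wildly mislabeled
# sample contributes almost nothing, unlike a squared loss where it
# would dominate the objective.

def correntropy(y_true, y_pred, sigma=1.0):
    """Mean Gaussian-kernel similarity between targets and predictions."""
    return sum(math.exp(-(a - b) ** 2 / (2.0 * sigma ** 2))
               for a, b in zip(y_true, y_pred)) / len(y_true)

clean = correntropy([1.0, -1.0, 1.0], [0.9, -1.1, 0.8])
noisy = correntropy([1.0, -1.0, 1.0], [0.9, -1.1, 50.0])  # one outlier
print(clean > noisy)  # True: the outlier lowers, but cannot dominate, the score
```

Maximizing this quantity (plus a regularizer on the predictor parameters) is the MCC objective described above.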
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer alone but worse than that of the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
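The "classic MaxEnt" step that both abstracts above build on can be made concrete for a mean-value (energy-style) constraint. This is a hedged sketch under invented data: the state energies, the target mean, and the bisection bounds are all illustrative; the Gibbs form of the solution is standard.

```python
import math

# Classic (point-value) MaxEnt with one moment constraint: over states
# with energies E_i, the entropy-maximizing distribution with a fixed
# mean energy is the Gibbs form p_i ∝ exp(-lam * E_i).  We solve for
# lam by bisection, exploiting that the mean energy is decreasing in lam.

def gibbs(energies, lam):
    w = [math.exp(-lam * e) for e in energies]
    z = sum(w)
    return [x / z for x in w]

def maxent_mean_energy(energies, target, lo=-50.0, hi=50.0):
    """Return the MaxEnt distribution whose mean energy equals `target`."""
    def mean(lam):
        p = gibbs(energies, lam)
        return sum(pi * e for pi, e in zip(p, energies))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean(mid) > target:   # mean too high -> need larger lam
            lo = mid
        else:
            hi = mid
    return gibbs(energies, 0.5 * (lo + hi))

p = maxent_mean_energy([0.0, 1.0, 2.0], target=1.0)
# Symmetric energies with the target at the midpoint give the uniform law.
print([round(x, 6) for x in p])  # [0.333333, 0.333333, 0.333333]
```

The generalized approach of the abstract would then place a density on `target` itself and push it through this map, yielding a density over MaxEnt distributions rather than a single point.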
Optimized Design for Dynamic LOD Virtual Terrain Using Quad Tree
邹承明; 李引; 陆苑; 陈金锐
2009-01-01
In this paper, we present a novel approach to the optimized design of dynamic LOD virtual terrain based on a quad tree. In building the quad tree on the basis of a multi-resolution terrain model, we first optimize the quad-tree mesh according to three criteria: bounding-volume culling, back-face culling, and screen-projection error analysis. Then, as the viewpoint moves while roaming the LOD terrain and different degrees of detail need to be shown, we set up a suitable node evaluation function. To remove unreasonable quad-tree partitions, we propose an appropriate crack-elimination algorithm based on the rules of node segmentation and rendering in the LOD simplification process. Finally, after the nodes are partitioned optimally, the goal of optimizing the quad-tree mesh is achieved.
Hou, Fujun
2016-01-01
This paper describes how market competitiveness evaluations concerning mechanical equipment can be made in multi-criteria decision environments. It is assumed that, when evaluating market competitiveness, there is a limited number of candidates with some required qualifications, and that the alternatives are pairwise compared on a ratio scale. The qualifications are depicted as criteria in a hierarchical structure. A hierarchical decision model called PCbHDM was used in this study, based on an analysis of its desirable traits. Illustration and comparison show that PCbHDM provides a convenient and effective tool for evaluating the market competitiveness of mechanical equipment. Researchers and practitioners might use the findings of this paper in applications of PCbHDM.
Fast pairwise structural RNA alignments by pruning of the dynamical programming matrix
Havgaard, Jakob Hull; Torarinsson, Elfar; Gorodkin, Jan
2007-01-01
genomes. One main problem with these methods is their computational complexity, and heuristics are therefore employed. Two heuristics are currently very popular: pre-folding and pre-aligning. However, these heuristics are not ideal, as pre-aligning is dependent on sequence similarity that may ... the advantage of providing the constraints dynamically. This has been included in a new implementation of the FOLDALIGN algorithm for pairwise local or global structural alignment of RNA sequences. It is shown that time and memory requirements are dramatically lowered while overall performance is maintained. ... Furthermore, a new divide-and-conquer method is introduced to limit the memory requirement during global alignment and backtracking of local alignments. All branch points in the computed RNA structure are found and used to divide the structure into smaller unbranched segments. Each segment is then realigned ...
General parity between trio and pairwise breeding of laboratory mice in static caging.
Kedl, Ross M; Wysocki, Lawrence J; Janssen, William J; Born, Willi K; Rosenbaum, Matthew D; Granowski, Julia; Kench, Jennifer A; Fong, Derek L; Switzer, Lisa A; Cruse, Margaret; Huang, Hua; Jakubzick, Claudia V; Kosmider, Beata; Takeda, Katsuyuki; Stranova, Thomas J; Klumm, Randal C; Delgado, Christine; Tummala, Saigiridhar; De Langhe, Stijn; Cambier, John; Haskins, Katherine; Lenz, Laurel L; Curran-Everett, Douglas
2014-11-15
Changes made in the 8th edition of the Guide for the Care and Use of Laboratory Animals included new recommendations for the amount of space for breeding female mice. Adopting the new recommendations required, in essence, the elimination of trio breeding practices for all institutions. Neither public opinion nor published data readily supported the new recommendations. In response, the National Jewish Health Institutional Animal Care and Use Committee established a program to directly compare the effects of breeding format on mouse pup survival and growth. Our study showed overall parity between trio and pairwise breeding formats in the survival and growth of the litters, suggesting that the housing recommendations for breeding female mice as stated in the current Guide for the Care and Use of Laboratory Animals should be reconsidered.
Pickering, William; Lim, Chjan
2017-07-01
We investigate a family of urn models that correspond to one-dimensional random walks with quadratic transition probabilities and that have highly diverse applications. Well-known instances of these two-urn models are the Ehrenfest model of molecular diffusion, the voter model of social influence, and the Moran model of population genetics. We also provide a generating function method for diagonalizing the corresponding transition matrix, valid if and only if the underlying mean density satisfies a linear differential equation, and express the eigenvector components in terms of ordinary hypergeometric functions. The nature of the models leads to a natural extension to interaction between agents in a general network topology. We analyze the dynamics on uncorrelated heterogeneous degree-sequence networks and relate the convergence times to the moments of the degree sequences for various pairwise interaction mechanisms.
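One well-known member of this two-urn family, the Ehrenfest model, makes the structure concrete. The sketch below is illustrative, not the paper's generating-function method: it computes the stationary distribution by detailed balance, using the textbook fact that a uniformly chosen ball is moved to the other urn.

```python
from math import comb

# Ehrenfest model with N balls: if urn A holds k balls, the chain moves
# k -> k+1 with probability (N - k)/N and k -> k-1 with probability k/N.
# Detailed balance, pi[k+1]/pi[k] = ((N - k)/N) / ((k + 1)/N),
# gives the stationary law, which is Binomial(N, 1/2).

def ehrenfest_stationary(N):
    """Stationary distribution of the Ehrenfest urn via detailed balance."""
    pi = [1.0]
    for k in range(N):
        pi.append(pi[-1] * (N - k) / (k + 1))
    s = sum(pi)
    return [x / s for x in pi]

N = 4
pi = ehrenfest_stationary(N)
expected = [comb(N, k) / 2 ** N for k in range(N + 1)]
print(all(abs(a - b) < 1e-12 for a, b in zip(pi, expected)))  # True
```

The unnormalized weights are exactly the binomial coefficients, so the equilibrium concentrates around an even split of the balls between the urns.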
Modeling the pairwise key distribution scheme in the presence of unreliable links
Yagan, Osman
2011-01-01
We investigate the secure connectivity of wireless sensor networks under the pairwise key distribution scheme of Chan et al. Unlike recent work carried out under the assumption of full visibility, here we assume a (simplified) communication model where unreliable wireless links are represented as on/off channels. We present conditions on how to scale the model parameters so that the network (i) has no secure node that is isolated and (ii) is securely connected, both with high probability as the number of sensor nodes becomes large. The results are given in the form of zero-one laws and exhibit significant differences from corresponding results in the full-visibility case. Through simulations, these zero-one laws are shown to be valid also under a more realistic communication model, i.e., the disk model.
Matrix multiplication operations using pair-wise load and splat operations
Eichenberger, Alexandre E.; Gschwind, Michael K.; Gunnels, John A.; Salapura, Valentina
2017-03-21
Mechanisms for performing a matrix multiplication operation are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation to a first target vector register. A pair-wise load and splat operation is performed to load a pair of scalar values of a second vector operand and replicate the pair of scalar values within a second target vector register. An operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product is accumulated with other partial products and a resulting accumulated partial product is stored. This operation may be repeated for a second pair of scalar values of the second vector operand.
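The data movement described in this record can be imitated in scalar code. This is a hedged, lane-by-lane illustration of the load-and-splat pattern for a 2x2 matrix product; the register width and tiling are invented for the example, not the patented instruction set.

```python
# Pair-wise load-and-splat, simulated with Python lists as "registers":
# a pair of scalars from the second operand is replicated ("splatted")
# across a vector register so an elementwise multiply-accumulate yields
# a partial product of C = A @ B.

def splat_pair(pair, width):
    """Replicate a pair of scalars to fill a register of `width` lanes."""
    return [pair[i % 2] for i in range(width)]

def fma(acc, a, b):
    """Elementwise multiply-accumulate across lanes: acc + a * b."""
    return [x + y * z for x, y, z in zip(acc, a, b)]

# C = A @ B for 2x2 matrices; C is laid out row-major in one register.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
acc = [0, 0, 0, 0]
for k in range(2):
    a_vec = [A[0][k], A[0][k], A[1][k], A[1][k]]  # broadcast A's column entries
    b_vec = splat_pair(B[k], 4)                   # pair-wise load and splat of B's row
    acc = fma(acc, a_vec, b_vec)                  # accumulate partial products
print(acc)  # [19, 22, 43, 50]
```

Each loop iteration produces one rank-1 partial product, and the accumulation across `k` mirrors the "accumulated with other partial products" step in the abstract.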
On the non-convergence of energy intensities. Evidence from a pair-wise econometric approach
Le Pen, Yannick [Universite de Nantes LEMNA, Nantes (France); Sevi, Benoit [Universite d' Angers GRANEM, Faculte de Droit, Economie et Gestion, Universite d' Angers, 13 allee Francois Mitterrand, BP 13633, 49036 Angers cedex 01 (France)
2010-01-15
This paper evaluates the convergence of energy intensities for a group of 97 countries over the period 1971-2003. Convergence is tested using a recent method proposed by Pesaran (2007) [Pesaran, M.H., 2007. A pair-wise approach to testing for output and growth convergence. Journal of Econometrics 138, 312-355] based on the stochastic convergence criterion. An advantage of this method is that the results do not depend on a benchmark against which convergence is assessed, which makes them more robust. Applications of several unit-root tests as well as a stationarity test uniformly reject the global convergence hypothesis. Locally, for the Middle East, OECD and Europe sub-groups, non-convergence is less strongly rejected. The introduction of possible structural breaks in the analysis provides only marginally more support for the convergence hypothesis.
PairWise Neighbours database: overlaps and spacers among prokaryote genomes
Garcia-Vallvé Santiago
2009-06-01
Background: Although prokaryotes live in a variety of habitats and possess different metabolic and genomic complexity, they share several genomic architectural features. Overlapping genes are a common feature of prokaryote genomes. Overlap lengths tend to be short because, as overlaps become longer, they carry a higher risk of deleterious mutations. The spacers between genes also tend to be short because of the tendency to reduce non-coding DNA among prokaryotes, yet they must be long enough to maintain essential regulatory signals such as the Shine-Dalgarno (SD) sequence, which is responsible for efficient translation. Description: PairWise Neighbours is an interactive and intuitive database for retrieving information about the spacers and overlapping genes in bacterial and archaeal genomes. It contains 1,956,294 gene pairs from 678 fully sequenced prokaryote genomes and is freely available at http://genomes.urv.cat/pwneigh. This database provides information about the overlaps and their conservation across species. Furthermore, it allows wide analysis of the intergenic regions, providing useful information such as the location and strength of the SD sequence. Conclusion: Some experiments and bioinformatic analyses rely on correct annotation of the initiation site. Therefore, a database that studies the overlaps and spacers among prokaryotes is desirable. The PairWise Neighbours database permits reliability analysis of overlapping structures and the study of SD presence and location among adjacent genes, which may help to check the annotation of initiation sites.
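The central quantity the database catalogues, the spacer (or overlap) between adjacent genes, reduces to a one-line computation. This is a hedged sketch with invented coordinates, assuming 1-based inclusive gene coordinates on the same strand; the sign convention (negative spacer = overlap) is the natural one, not necessarily the database's.

```python
# Spacer between two adjacent genes on the same strand: the number of
# bases between the end of the upstream gene and the start of the
# downstream gene.  A negative value means the genes overlap.

def spacer_length(end_upstream, start_downstream):
    """Intergenic spacer in bp; negative values indicate overlapping genes."""
    return start_downstream - end_upstream - 1

print(spacer_length(1000, 1012))  # 11  -> an 11 bp spacer
print(spacer_length(1000, 997))   # -4  -> a 4 bp overlap
```

Short positive spacers are where regulatory signals such as the SD sequence must fit, which is why the database reports their location and strength alongside the spacer length.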
The pairwise velocity difference of over 2000 BHB stars in the Milky Way halo
Xiang-Xiang Xue; Hans-Walter Rix; Gang Zhao
2009-01-01
Models of hierarchical galaxy formation predict that the extended stellar halos of galaxies like our Milky Way show a great deal of sub-structure, arising from disrupted satellites. Spatial sub-structure is directly observed, and has been quantified, in the Milky Way's stellar halo. Phase-space conservation implies that there should also be sub-structure in position-velocity space. Here, we aim to quantify such position-velocity sub-structure, using a state-of-the-art data set of over 2000 blue horizontal branch (BHB) stars with photometry and spectroscopy from SDSS. For stars in dynamically cold ("young") streams, we expect that pairs of objects that are physically close also have similar velocities. Therefore, we apply the well-established pairwise velocity difference (PVD) statistic, the mean velocity difference of star pairs as a function of their separation Δr, which we expect to drop for small separations. We calculate the PVD for the SDSS BHB sample and find it approximately constant, i.e., no such signal. By making mock observations of the simulations by Bullock & Johnston and applying the same statistic, we show that for individual, dynamically young streams, or assemblages of such streams, the PVD drops for small separations Δr, as qualitatively expected. However, for a realistic complete set of halo streams, the pairwise velocity difference shows no signal, as the simulated halos are dominated by "dynamically old" phase-mixed streams. Our findings imply that the sparse sampling and the sample sizes in SDSS DR6 are still insufficient to use position-velocity sub-structure for a stringent quantitative data-model comparison. Therefore, alternative statistics must be explored, and much more densely sampled surveys dedicated to the structure of the Milky Way, such as LAMOST, are needed.
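The PVD statistic itself is easy to sketch. This is a hedged, one-dimensional toy version with invented positions and velocities (real data would use 3D separations and line-of-sight velocities); the binning scheme is an illustrative choice.

```python
from itertools import combinations

# Pairwise velocity difference (PVD): for every pair of stars, bin the
# spatial separation and average the absolute velocity difference in
# each bin.  A cold stream drives the small-separation bins down.

def pvd(positions, velocities, bin_edges):
    sums = [0.0] * (len(bin_edges) - 1)
    counts = [0] * (len(bin_edges) - 1)
    for i, j in combinations(range(len(positions)), 2):
        dr = abs(positions[i] - positions[j])
        dv = abs(velocities[i] - velocities[j])
        for b in range(len(bin_edges) - 1):
            if bin_edges[b] <= dr < bin_edges[b + 1]:
                sums[b] += dv
                counts[b] += 1
                break
    return [s / c if c else None for s, c in zip(sums, counts)]

# Two tight "stream" stars (close in position and velocity) plus two
# well-separated field stars with discrepant velocities.
pos = [0.0, 0.1, 5.0, 9.0]
vel = [10.0, 10.5, -80.0, 120.0]
print(pvd(pos, vel, bin_edges=[0.0, 1.0, 10.0]))  # [0.5, 120.0]
```

The small-separation bin is dominated by the stream pair and sits far below the wide-separation bin, which is exactly the drop the abstract looks for (and fails to find) in the real BHB sample.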
Are ionic liquids pairwise in gas phase? A cluster approach and in situ IR study.
Dong, Kun; Zhao, Lidong; Wang, Qian; Song, Yuting; Zhang, Suojiang
2013-04-28
In this work, we discuss the vaporization and gas-phase species of ionic liquids (ILs), a controversial issue to date, using a cluster approach of quantum statistical thermodynamics proposed by R. Ludwig (Phys. Chem. Chem. Phys., 10, 4333). Based on different-sized clusters (2-12 ion pairs) of the condensed phase, the molar enthalpies of vaporization (ΔvapH, 298.15 K, 1 bar) of four representative ILs were calculated: 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([Emim][NTf2]), 1-ethyl-2,3-dimethylimidazolium bis(trifluoromethylsulfonyl)imide ([Emmim][NTf2]), 1-ethyl-3-methylimidazolium chloride ([Emim]Cl), and ethylammonium nitrate ([EtAm][NO3]). The predicted ΔvapH increased remarkably, with even the values for [EtAm][NO3] exceeding 700 kJ mol(-1), when isolated charged ions were assumed to be the gas species. However, the ΔvapH were close to experimental measurements when the gas species were assumed to be anion-cation pairs, indicating that different conformational ion pairs can coexist in the gas phase when the IL is evaporated. In particular, for the protic IL [EtAm][NO3], even neutral precursor molecules formed by proton transfer can occur in the gas phase. In addition, by comparing the ΔvapH of [Emim][NTf2] with that of [Emmim][NTf2], it is found that the effect of hydrogen bonds on vaporization cannot be neglected. In situ and calculated IR spectra provide further evidence that the ions are pairwise in the gas phase.
Adaptive crack-free terrain rendering based on LOD
郭虎奇; 费向东; 刘小玲
2013-01-01
A new kind of triangle cluster is proposed as the primitive rendering unit for the GPU; combined with LOD technology, it realizes adaptive crack-free terrain rendering. The triangle cluster, called an N-cluster, has eight base types, and terrain mesh blocks of different sizes and locations can be obtained from these base types by scaling and translation. A binary tree is used to organize the N-clusters; each node corresponds to one N-cluster and stores its type, scale, and translation. An octagon error metric is used to construct the LOD of the scene, which avoids T-junctions between different LOD levels. Because the elevation and texture data of large-scale terrain are too large to load into memory at once, a quad tree is used to organize them in blocks, and data blocks are loaded dynamically at run time. Experimental results show that N-clusters improve the rendering efficiency of the terrain triangle mesh, and the whole algorithm adaptively renders terrain without cracks while meeting the real-time rendering requirements of large-scale terrain scenes.
Gene ontology analysis of pairwise genetic associations in two genome-wide studies of sporadic ALS
Kim Nora
2012-07-01
Background: It is increasingly clear that common human diseases have a complex genetic architecture characterized by both additive and nonadditive genetic effects. The goal of the present study was to determine whether patterns of both additive and nonadditive genetic associations aggregate in specific functional groups as defined by the Gene Ontology (GO). Results: We first estimated all pairwise additive and nonadditive genetic effects using the multifactor dimensionality reduction (MDR) method, which makes few assumptions about the underlying genetic model. Statistical significance was evaluated using permutation testing in two genome-wide association studies of ALS. The detection data consisted of 276 subjects with ALS and 271 healthy controls, while the replication data consisted of 221 subjects with ALS and 211 healthy controls. Both studies included genotypes from approximately 550,000 single-nucleotide polymorphisms (SNPs). Each SNP was mapped to a gene if it was within 500 kb of the start or end. Each SNP was assigned a p-value based on its strongest joint effect with the other SNPs. We then used the Exploratory Visual Analysis (EVA) method and software to assign a p-value to each gene based on the overabundance of significant SNPs (at the α = 0.05 level) in the gene. We also used EVA to assign p-values to each GO group based on the overabundance of significant genes at the α = 0.05 level. A GO category was determined to replicate if it was significant at the α = 0.05 level in both studies. We found two GO categories that replicated in both studies. The first, 'Regulation of Cellular Component Organization and Biogenesis', a GO Biological Process, had p-values of 0.010 and 0.014 in the detection and replication studies, respectively. The second, 'Actin Cytoskeleton', a GO Cellular Component, had p-values of 0.040 and 0.046 in the detection and replication studies, respectively. Conclusions: Pathway
Lin, Nan Xuan; Henley, William Edward
2016-12-10
Observational studies provide a rich source of information for assessing effectiveness of treatment interventions in many situations where it is not ethical or practical to perform randomized controlled trials. However, such studies are prone to bias from hidden (unmeasured) confounding. A promising approach to identifying and reducing the impact of unmeasured confounding is prior event rate ratio (PERR) adjustment, a quasi-experimental analytic method proposed in the context of electronic medical record database studies. In this paper, we present a statistical framework for using a pairwise approach to PERR adjustment that removes bias inherent in the original PERR method. A flexible pairwise Cox likelihood function is derived and used to demonstrate the consistency of the simple and convenient alternative PERR (PERR-ALT) estimator. We show how to estimate standard errors and confidence intervals for treatment effect estimates based on the observed information and provide R code to illustrate how to implement the method. Assumptions required for the pairwise approach (as well as PERR) are clarified, and the consequences of model misspecification are explored. Our results confirm the need for researchers to consider carefully the suitability of the method in the context of each problem. Extensions of the pairwise likelihood to more complex designs involving time-varying covariates or more than two periods are considered. We illustrate the application of the method using data from a longitudinal cohort study of enzyme replacement therapy for lysosomal storage disorders. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
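The basic PERR idea that the pairwise method above refines can be shown with a two-line calculation. This is a hedged numeric sketch with invented rate ratios, not the paper's pairwise Cox likelihood: the point is only that confounding shared by the prior and post periods cancels in the ratio.

```python
# Prior event rate ratio (PERR) adjustment, in its simplest form:
# divide the treatment-period rate ratio by the prior-period rate
# ratio, so that time-invariant unmeasured confounding common to both
# periods cancels out of the effect estimate.

def perr_adjusted_ratio(rr_post, rr_prior):
    """PERR-adjusted effect estimate = post-period RR / prior-period RR."""
    return rr_post / rr_prior

# The treated group looks 3.0x worse after treatment starts, but it
# already looked 1.5x worse before treatment (hidden confounding):
print(perr_adjusted_ratio(3.0, 1.5))  # 2.0
```

The paper's contribution is a pairwise Cox likelihood that delivers this kind of adjustment consistently (PERR-ALT) with proper standard errors, rather than the naive ratio shown here.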
Pair-Wise Trajectory Management-Oceanic (PTM-O): Concept of Operations, Version 3.9
Jones, Kenneth M.
2014-01-01
This document describes the Pair-wise Trajectory Management-Oceanic (PTM-O) Concept of Operations (ConOps). Pair-wise Trajectory Management (PTM) is a concept that includes airborne and ground-based capabilities designed to enable, and to benefit from, an airborne pair-wise distance-monitoring capability. PTM includes the capabilities needed for the controller to issue a PTM clearance that resolves a conflict for a specific pair of aircraft. PTM avionics include the capabilities needed for the flight crew to manage their trajectory relative to specific designated aircraft. PTM-Oceanic (PTM-O) is a region-specific application of the PTM concept. PTM is sponsored by the National Aeronautics and Space Administration (NASA) Concept and Technology Development Project (part of NASA's Airspace Systems Program). The goal of PTM is to use enhanced and distributed communications and surveillance along with airborne tools to permit reduced separation standards for given aircraft pairs, thereby increasing the capacity and efficiency of aircraft operations at a given altitude or volume of airspace.
Pairwise selection assembly for sequence-independent construction of long-length DNA.
Blake, William J; Chapman, Brad A; Zindal, Anuradha; Lee, Michael E; Lippow, Shaun M; Baynes, Brian M
2010-05-01
The engineering of biological components has been facilitated by de novo synthesis of gene-length DNA. Biological engineering at the level of pathways and genomes, however, requires a scalable and cost-effective assembly of DNA molecules that are longer than approximately 10 kb, and this remains a challenge. Here we present the development of pairwise selection assembly (PSA), a process that involves hierarchical construction of long-length DNA through the use of a standard set of components and operations. In PSA, activation tags at the termini of assembly sub-fragments are reused throughout the assembly process to activate vector-encoded selectable markers. Marker activation enables stringent selection for a correctly assembled product in vivo, often obviating the need for clonal isolation. Importantly, construction via PSA is sequence-independent, and does not require primary sequence modification (e.g. the addition or removal of restriction sites). The utility of PSA is demonstrated in the construction of a completely synthetic 91-kb chromosome arm from Saccharomyces cerevisiae.
DIALIGN P: Fast pair-wise and multiple sequence alignment using parallel processors
Kaufmann Michael
2004-09-01
Background: Parallel computing is frequently used to speed up computationally expensive tasks in bioinformatics. Results: Herein, a parallel version of the multi-alignment program DIALIGN is introduced. We propose two ways of dividing the program into independent sub-routines that can be run on different processors: (a) pair-wise sequence alignments, which are used as a first step to multiple alignment, account for most of the CPU time in DIALIGN; since alignments of different sequence pairs are completely independent of each other, they can be distributed to multiple processors without any effect on the resulting output alignments. (b) For alignments of large genomic sequences, we use a heuristic that splits sequences into sub-sequences based on a previously introduced anchored alignment procedure. For our test sequences, this combined approach reduces the running time of DIALIGN by up to 97%. Conclusions: By distributing sub-routines to multiple processors, the running time of DIALIGN can be crucially improved. With these improvements, it is possible to apply the program in large-scale genomics and proteomics projects that were previously beyond its scope.
EpiGPU: exhaustive pairwise epistasis scans parallelized on consumer level graphics cards.
Hemani, Gibran; Theocharidis, Athanasios; Wei, Wenhua; Haley, Chris
2011-06-01
Hundreds of genome-wide association studies have been performed over the last decade, but as single nucleotide polymorphism (SNP) chip density has increased, so has the computational burden of searching for epistasis [for n SNPs the computational time resource is O(n(n-1)/2)]. While the theoretical contribution of epistasis toward phenotypes of medical and economic importance is widely discussed, empirical evidence is conspicuously absent because its analysis is often computationally prohibitive. To facilitate resolution in this field, tools must be made available that can render the search for epistasis universally viable in terms of hardware availability, cost and computational time. By partitioning the 2D search grid across the multicore architecture of a modern consumer graphics processing unit (GPU), we report a 92× increase in the speed of an exhaustive pairwise epistasis scan for a quantitative phenotype, and we expect the speed to increase as graphics cards continue to improve. To achieve a comparable computational improvement without a graphics card would require a large compute cluster, an option that is often financially non-viable. The implementation presented uses OpenCL, an open standard designed to run on any commercially available GPU and on any operating system. The software is free, open-source, platform-independent and GPU-vendor independent. It can be downloaded from http://sourceforge.net/projects/epigpu/.
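As a rough illustration of the scale involved, the O(n(n-1)/2) pair grid that such a scan must cover can be enumerated and partitioned in a few lines; the following is a sequential sketch with an arbitrary worker count, not the paper's OpenCL implementation, which maps chunks of this grid to GPU threads.

```python
from itertools import combinations

def pair_count(n):
    """Number of SNP pairs in an exhaustive 2D scan: n(n-1)/2."""
    return n * (n - 1) // 2

def partition_pairs(n, workers):
    """Split the upper-triangular pair grid into roughly equal chunks,
    one per worker (sequential here; a GPU would process chunks in parallel)."""
    pairs = list(combinations(range(n), 2))
    chunk = -(-len(pairs) // workers)  # ceiling division
    return [pairs[i:i + chunk] for i in range(0, len(pairs), chunk)]

parts = partition_pairs(1000, 8)
assert sum(len(p) for p in parts) == pair_count(1000)  # 499,500 pairs for n = 1000
```

Even at a thousand SNPs the grid holds half a million tests, which is why the quadratic growth quickly becomes prohibitive on a single core.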
Ferrar, Katia; Maher, Carol; Petkov, John; Olds, Tim
2015-03-01
To date, most health-related time-use research has investigated behaviors in isolation; more recently, however, researchers have begun to conceptualize behaviors in the form of multidimensional patterns or clusters. The study employed two techniques, radar graphs and centroid-based measures (vector length, vector angle, and centroid distance), to quantify pairwise time-use cluster similarities among adolescents living in Australia (N = 1853) and in New Zealand (N = 679). Based on radar graph shape, two pairs of clusters were similar for both boys and girls. Using vector angles (VA), vector lengths (VL) and centroid distances (CD), one pair for each sex was considered most similar (boys: VA = 63°, VL = 44 and 50 units, CD = 48 units; girls: VA = 23°, VL = 65 and 85 units, CD = 36 units). Both methods employed to determine similarity had strengths and weaknesses. The description and quantification of cluster similarity is an important step in the research process. An ability to track and compare clusters may provide greater understanding of complex multidimensional relationships and, in relation to health behavior clusters, present opportunities to monitor and to intervene.
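The centroid-based measures the study reports are standard vector geometry; the sketch below computes the angle between, and distance between, two cluster centroids for toy 2-D points (hypothetical values, not the study's data or dimensionality).

```python
import math

def vector_angle(u, v):
    """Angle in degrees between two cluster-centroid vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (norm_u * norm_v)))

def centroid_distance(u, v):
    """Euclidean distance between two centroids."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# toy 2-D centroids
a, b = (3.0, 4.0), (4.0, 3.0)
print(round(vector_angle(a, b)))           # about 16 degrees
print(round(centroid_distance(a, b), 2))   # about 1.41
```

A small angle with a small centroid distance indicates clusters that point the same way and sit close together, which is how the study judged the most similar pairs.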
Pairwise energies for polypeptide coarse-grained models derived from atomic force fields
Betancourt, Marcos R.; Omovie, Sheyore J.
2009-05-01
The energy parametrization of geometrically simplified versions of polypeptides, better known as polypeptide or protein coarse-grained models, is obtained from molecular dynamics and statistical methods. Residue pairwise interactions are derived by performing atomic-level simulations in explicit water for all 210 pairs of amino acids, where the amino acids are modified to more closely match their structure and charges in polypeptides. Radial density functions are computed from equilibrium simulations for each pair of residues, from which statistical energies are extracted using the Boltzmann inversion method. The resulting models are compared to similar potentials obtained by knowledge-based methods and to hydrophobic scales, resulting in significant similarities in spite of the model's simplicity. However, it was found that glutamine, asparagine, lysine, and arginine are more attractive to other residues than anticipated, in part due to their amphiphilic nature. In addition, equally charged residues appear more repulsive than expected. Difficulties in the calculation of knowledge-based potentials and hydrophobicity scales for these cases, as well as sensitivity of the force field to polarization effects, are suspected to cause this discrepancy. It is also shown that the coarse-grained model can identify native structures in decoy databases nearly as well as more elaborate knowledge-based methods, in spite of its resolution limitations. In a test conducted with several proteins and corresponding decoys, the coarse-grained potential was able to identify the native state structure, whereas the original atomic force field was not.
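Boltzmann inversion, the step that turns radial density functions into statistical energies, is the relation E(r) = -kB·T·ln g(r). A minimal sketch, with hypothetical g(r) values rather than the paper's simulation output:

```python
import math

K_B = 0.0019872  # Boltzmann constant in kcal/(mol*K)

def boltzmann_inversion(g_r, temperature=300.0):
    """Statistical pair potential from a radial distribution function:
    E(r) = -kB*T*ln g(r). g(r) = 1 gives zero energy; g(r) > 1 (enriched
    separations) gives attraction; g(r) < 1 gives repulsion."""
    return [-K_B * temperature * math.log(g) if g > 0 else float("inf")
            for g in g_r]

# hypothetical g(r) samples at three separations: depleted, bulk, enriched
energies = boltzmann_inversion([0.2, 1.0, 1.8])
print([round(e, 3) for e in energies])
```

The sign convention makes the method easy to sanity-check: wherever the simulation shows residues together more often than in bulk, the inverted potential is negative (attractive).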
The pair-wise velocity dispersion of galaxies: effects of non-radial motions
Popolo, A D
2001-01-01
I discuss the effect of non-radial motions on the small-scale pairwise peculiar velocity dispersions of galaxies (PVD) in a CDM model. I calculate the PVD for the SCDM model by means of the refined cosmic virial theorem (CVT) (Suto & Jing 1997), taking account of non-radial motions by means of the Del Popolo & Gambera (1998) model. I compare the results of the present model with the data from Davis & Peebles (1983), the IRAS value at 1 h^{-1} Mpc of Fisher et al. (1993) and Marzke et al. (1995). I show that while the SCDM model disagrees with the observed values, as pointed out by several authors (Peebles 1976, 1980; Davis & Peebles 1983; Mo et al. 1993; Jing et al. 1998), taking account of non-radial motions produces smaller values for the PVD. At r <= 1 h^{-1} Mpc the result is in agreement with Bartlett & Blanchard (1996) (hereafter BB96). In the light of this last paper, the result may also be read as a strong dependence of the CVT prediction on the model chosen to describe the mass dis...
The distribution of pairwise genetic distances: a tool for investigating disease transmission.
Worby, Colin J; Chang, Hsiao-Han; Hanage, William P; Lipsitch, Marc
2014-12-01
Whole-genome sequencing of pathogens has recently been used to investigate disease outbreaks and is likely to play a growing role in real-time epidemiological studies. Methods to analyze high-resolution genomic data in this context are still lacking, and inferring transmission dynamics from such data typically requires many assumptions. While recent studies have proposed methods to infer who infected whom based on genetic distance between isolates from different individuals, the link between epidemiological relationship and genetic distance is still not well understood. In this study, we investigated the distribution of pairwise genetic distances between samples taken from infected hosts during an outbreak. We proposed an analytically tractable approximation to this distribution, which provides a framework to evaluate the likelihood of particular transmission routes. Our method accounts for the transmission of a genetically diverse inoculum, a possibility overlooked in most analyses. We demonstrated that our approximation can provide a robust estimation of the posterior probability of transmission routes in an outbreak and may be used to rule out transmission events at a particular probability threshold. We applied our method to data collected during an outbreak of methicillin-resistant Staphylococcus aureus, ruling out several potential transmission links. Our study sheds light on the accumulation of mutations in a pathogen during an epidemic and provides tools to investigate transmission dynamics, avoiding the intensive computation necessary in many existing methods.
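The basic object of that study, the distribution of pairwise genetic distances across all sampled isolates, is simple to compute for toy data. The sketch below uses Hamming distance on aligned sequences of equal length, with made-up isolates standing in for outbreak samples.

```python
from itertools import combinations
from collections import Counter

def hamming(a, b):
    """SNP differences between two aligned sequences of equal length."""
    return sum(x != y for x, y in zip(a, b))

def pairwise_distance_distribution(seqs):
    """Counts of pairwise genetic distances over all sample pairs."""
    return Counter(hamming(a, b) for a, b in combinations(seqs, 2))

# toy aligned isolates (hypothetical data)
isolates = ["ACGTAC", "ACGTAT", "ACCTAT", "ACGTAC"]
dist = pairwise_distance_distribution(isolates)
print(sorted(dist.items()))  # → [(0, 1), (1, 3), (2, 2)]
```

Comparing an observed distribution like this against an analytic approximation is what lets the authors assign probabilities to, or rule out, particular transmission links.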
Visualization of pairwise and multilocus linkage disequilibrium structure using latent forests.
Raphaël Mourad
Linkage disequilibrium study represents a major issue in statistical genetics as it plays a fundamental role in gene mapping and helps us to learn more about human history. The complex structure of linkage disequilibrium makes its exploratory data analysis essential yet challenging. Visualization methods, such as the triangular heat map implemented in Haploview, provide simple and useful tools to help understand complex genetic patterns, but remain insufficient to fully describe them. Probabilistic graphical models have been widely recognized as a powerful formalism allowing a concise and accurate modeling of dependences between variables. In this paper, we propose a method for short-range, long-range and chromosome-wide linkage disequilibrium visualization using forests of hierarchical latent class models. Thanks to its hierarchical nature, our method is shown to provide a compact view of both pairwise and multilocus linkage disequilibrium spatial structures for the geneticist. Besides, a multilocus linkage disequilibrium measure has been designed to evaluate linkage disequilibrium in hierarchy clusters. To learn the proposed model, a new scalable algorithm is presented. It constrains the dependence scope, relying on physical positions, and is able to deal with more than one hundred thousand single nucleotide polymorphisms. The proposed algorithm is fast and does not require phased genotype data.
Benefits of Using Pairwise Trajectory Management in the Central East Pacific
Chartrand, Ryan; Ballard, Kathryn
2016-01-01
Pairwise Trajectory Management (PTM) is a concept that utilizes airborne and ground-based capabilities to enable airborne spacing operations in oceanic regions. The goal of PTM is to use enhanced surveillance, along with airborne tools, to manage the spacing between aircraft. Due to the enhanced airborne surveillance of Automatic Dependent Surveillance-Broadcast (ADS-B) information and reduced communication, the PTM minimum spacing distance will be less than distances currently required of an air traffic controller. Reduced minimum distance will increase the capacity of aircraft operations at a given altitude or volume of airspace, thereby increasing time on desired trajectory and overall flight efficiency. PTM is designed to allow a flight crew to resolve a specific traffic conflict (or conflicts), identified by the air traffic controller, while maintaining the flight crew's desired altitude. The air traffic controller issues a PTM clearance to a flight crew authorized to conduct PTM operations in order to resolve a conflict for the pair (or pairs) of aircraft (i.e., the PTM aircraft and a designated target aircraft). This clearance requires the flight crew of the PTM aircraft to use their ADS-B-enabled onboard equipment to manage their spacing relative to the designated target aircraft to ensure spacing distances that are no closer than the PTM minimum distance. When the air traffic controller determines that PTM is no longer required, the controller issues a clearance to cancel the PTM operation.
Pezeshk Hamid
2010-01-01
Background: Considering an energy function to detect a correct protein fold among incorrect ones is very important for protein structure prediction and protein folding. Knowledge-based mean-force potentials are certainly the most popular type of interaction function for protein threading. They are derived from statistical analyses of interacting groups in experimentally determined protein structures. These potentials are developed at the atom or the amino acid level. Based on orientation-dependent contact area, a new type of knowledge-based mean-force potential has been developed. Results: We developed a new approach to calculate a knowledge-based potential of mean force, using pairwise residue contact area. To test the performance of our approach, we applied it to several decoy sets to measure its ability to discriminate native structure from decoys. This potential was able to distinguish native structures from the decoys in most cases. Further, the calculated Z-scores were quite high for all protein datasets. Conclusions: This knowledge-based potential of mean force can be used in protein structure prediction, fold recognition, comparative modelling and molecular recognition. The program is available at http://www.bioinf.cs.ipm.ac.ir/softwares/surfield
Fast pairwise structural RNA alignments by pruning of the dynamical programming matrix.
Jakob H Havgaard
2007-10-01
It has become clear that noncoding RNAs (ncRNAs) play important roles in cells, and emerging studies indicate that there might be a large number of unknown ncRNAs in mammalian genomes. There exist computational methods that can be used to search for ncRNAs by comparing sequences from different genomes. One main problem with these methods is their computational complexity, and heuristics are therefore employed. Two heuristics are currently very popular: pre-folding and pre-aligning. However, these heuristics are not ideal, as pre-aligning is dependent on sequence similarity that may not be present and pre-folding ignores the comparative information. Here, pruning of the dynamical programming matrix is presented as an alternative novel heuristic constraint. All subalignments that do not exceed a length-dependent minimum score are discarded as the matrix is filled out, thus giving the advantage of providing the constraints dynamically. This has been included in a new implementation of the FOLDALIGN algorithm for pairwise local or global structural alignment of RNA sequences. It is shown that time and memory requirements are dramatically lowered while overall performance is maintained. Furthermore, a new divide and conquer method is introduced to limit the memory requirement during global alignment and backtrack of local alignment. All branch points in the computed RNA structure are found and used to divide the structure into smaller unbranched segments. Each segment is then realigned and backtracked in a normal fashion. Finally, the FOLDALIGN algorithm has also been updated with a better memory implementation and an improved energy model. With these improvements, the FOLDALIGN software package provides the molecular biologist with an efficient and user-friendly tool for searching for new ncRNAs. The software package is available for download at http://foldalign.ku.dk.
Masunov, Artëm E., E-mail: amasunov@ucf.edu [NanoScience Technology Center, Department of Chemistry, and Department of Physics, University of Central Florida, Orlando, FL 32826 (United States); Photochemistry Center RAS, ul. Novatorov 7a, Moscow 119421 (Russian Federation); Gangopadhyay, Shruba [Department of Physics, University of California, Davis, CA 95616 (United States); IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120 (United States)
2015-12-15
A new method to eliminate spin contamination in broken symmetry density functional theory (BS DFT) calculations is introduced. Unlike conventional spin-purification correction, this method is based on canonical Natural Orbitals (NO) for each high/low spin coupled electron pair. We derive an expression to extract the energy of the pure singlet state given in terms of the energy of the BS DFT solution, the occupation number of the bonding NO, and the energy of the higher spin state built on these bonding and antibonding NOs (not self-consistent Kohn–Sham orbitals of the high spin state). Compared to the other spin-contamination correction schemes, spin-correction is applied to each correlated electron pair individually. We investigate two binuclear Mn(IV) molecular magnets using this pairwise correction. While one of the molecules is described by magnetic orbitals strongly localized on the metal centers, and the spin gap is accurately predicted by the Noodleman and Yamaguchi schemes, for the other one the gap is predicted poorly by these schemes due to strong delocalization of the magnetic orbitals onto the ligands. We show our new correction to yield more accurate results in both cases. - Highlights: • Magnetic orbitals obtained for high and low spin states are not related. • Spin-purification correction becomes inaccurate for delocalized magnetic orbitals. • We use the natural orbitals of the broken symmetry state to build the high spin state. • This new correction is made separately for each electron pair. • Our spin-purification correction is more accurate for delocalized magnetic orbitals.
Galland, Nicolas; Kone, Soleymane; Le Questel, Jean-Yves
2012-10-01
A quantitative analysis of the interaction sites of the anti-Alzheimer drug galanthamine with molecular probes (water and benzene molecules) representative of its surroundings in the binding site of acetylcholinesterase (AChE) has been realized through pairwise potential calculations and quantum chemistry. This strategy allows a full and accurate exploration of the galanthamine potential energy surface of interaction. Significantly different results are obtained according to the distances of approach between the various molecular fragments and the conformation of the galanthamine N-methyl substituent. The geometry of the most relevant complexes has then been fully optimized through MPWB1K/6-31+G(d,p) calculations, final energies being recomputed at the LMP2/aug-cc-pVTZ(-f) level of theory. Unexpectedly, galanthamine is found to interact mainly from its hydrogen-bond donor groups. Among those, CH groups in the vicinity of the ammonium group are prominent. The trends obtained provide rationales for the predilection of the equatorial orientation of the galanthamine N-methyl substituent for binding to AChE. The analysis of the interaction energies pointed out the independence between the various interaction sites and the rigid character of galanthamine. The comparison between the cluster calculations and the crystallographic observations in galanthamine-AChE co-crystals allows the validation of the theoretical methodology. In particular, the positions of several water molecules appearing as strongly conserved in galanthamine-AChE co-crystals are predicted by the calculations. Moreover, the experimental position and orientation of lateral chains of functionally important amino acid residues are in close agreement with the ones predicted theoretically. Our study provides relevant information for a rational drug design of galanthamine-based AChE inhibitors.
Mechanical cell-matrix feedback explains pairwise and collective endothelial cell behavior in vitro.
René F M van Oers
2014-08-01
In vitro cultures of endothelial cells are a widely used model system of the collective behavior of endothelial cells during vasculogenesis and angiogenesis. When seeded in an extracellular matrix, endothelial cells can form blood vessel-like structures, including vascular networks and sprouts. Endothelial morphogenesis depends on a large number of chemical and mechanical factors, including the compliancy of the extracellular matrix, the available growth factors, the adhesion of cells to the extracellular matrix, cell-cell signaling, etc. Although various computational models have been proposed to explain the role of each of these biochemical and biomechanical effects, the understanding of the mechanisms underlying in vitro angiogenesis is still incomplete. Most explanations focus on predicting the whole vascular network or sprout from the underlying cell behavior, and do not check if the same model also correctly captures the intermediate scale: the pairwise cell-cell interactions or single cell responses to ECM mechanics. Here we show, using a hybrid cellular Potts and finite element computational model, that a single set of biologically plausible rules describing (a) the contractile forces that endothelial cells exert on the ECM, (b) the resulting strains in the extracellular matrix, and (c) the cellular response to the strains, suffices for reproducing the behavior of individual endothelial cells and the interactions of endothelial cell pairs in compliant matrices. With the same set of rules, the model also reproduces network formation from scattered cells, and sprouting from endothelial spheroids. Combining the present mechanical model with aspects of previously proposed mechanical and chemical models may lead to a more complete understanding of in vitro angiogenesis.
Zhao, Jiangsan; Bodner, Gernot; Rewald, Boris
2016-01-01
Phenotyping local crop cultivars is becoming more and more important, as they are an important genetic source for breeding - especially in regard to inherent root system architectures. Machine learning algorithms are promising tools to assist in the analysis of complex data sets; novel approaches are needed to apply them to root phenotyping data of mature plants. A greenhouse experiment was conducted in large, sand-filled columns to differentiate 16 European Pisum sativum cultivars based on 36 manually derived root traits. Through combining random forest and support vector machine models, machine learning algorithms were successfully used for unbiased identification of the most distinguishing root traits and subsequent pairwise cultivar differentiation. Up to 86% of pea cultivar pairs could be distinguished based on the top five important root traits (Timp5) - Timp5 differed widely between cultivar pairs. Selecting top important root traits (Timp) provided a significantly improved classification compared to using all available traits or randomly selected trait sets. The most frequent Timp of mature pea cultivars was total surface area of lateral roots originating from tap root segments at 0-5 cm depth. The high classification rate implies that culturing did not lead to a major loss of variability in root system architecture in the studied pea cultivars. Our results illustrate the potential of machine learning approaches for unbiased (root) trait selection and cultivar classification based on rather small, complex phenotypic data sets derived from pot experiments. Powerful statistical approaches are essential to make use of the increasing amount of (root) phenotyping information, integrating the complex trait sets describing crop cultivars.
Application of LOD Technology in Groundwater Finite Element Post-processing
毕振波; 郑爱勤; 崔振东
2011-01-01
The large volume of data in the groundwater finite element post-processing stage makes model reproduction, rapid network transmission, and real-time visualization of calculated results difficult. To address this, the main problems in groundwater finite element post-processing and Level of Detail (LOD) technology are analyzed, and the vertex element deletion method is identified as an effective data-model simplification technique suited to finite element post-processing in groundwater. The main data structures for vertex deletion and restoration are designed, and the DT method is used for local triangulation of the resulting "hole" area. A real implemented example proves the effectiveness of the vertex element deletion algorithm applied to finite element post-processing in groundwater.
Effects of Anisotropy on Pair-wise Entanglement of a Four-Qubit Heisenberg XXZ Chain
CAO Min; ZHU Shi-Qun
2006-01-01
The pair-wise thermal entanglement in a four-qubit Heisenberg XXZ chain is investigated to study the role of anisotropy when an external magnetic field is included. It is found that pair-wise entanglement is absent between nearest- and next-nearest-neighbouring qubits with anisotropic parameter Δ ≤ -1. For two nearest-neighbouring qubits, increasing the parameter can not only induce the entanglement, but also extend the entanglement region in terms of magnetic field B and temperature T. For two next-nearest-neighbouring qubits, increasing the anisotropic parameter can shift the location of the entanglement and control the extent of the entanglement in terms of magnetic field at a finite temperature.
Wu, Shuonan; Xu, Jinchao
2017-08-01
In this paper, the mathematical properties and numerical discretizations of multiphase models that simulate the phase separation of an N-component mixture are studied. For the general choice of phase variables, the unisolvent property of the coefficient matrix involved in the N-phase models based on the pairwise surface tensions is established. Moreover, the symmetric positive-definite property of the coefficient matrix on an (N - 1)-dimensional hyperplane - which is of fundamental importance to the well-posedness of the models - can be proved equivalent to some physical condition for pairwise surface tensions. The N-phase Allen-Cahn and N-phase Cahn-Hilliard equations can then be derived from the free-energy functional. A natural property is that the resulting dynamics of concentrations are independent of phase variables chosen. Finite element discretizations for N-phase models can be obtained as a natural extension of the existing discretizations for the two-phase model. The discrete energy law of the numerical schemes can be proved and numerically observed under some restrictions pertaining to time step size. Numerical experiments including the spinodal decomposition and the evolution of triple junctions are described in order to investigate the effect of pairwise surface tensions.
De Bernardis, F.; Aiola, S.; Vavagiakis, E. M.; Battaglia, N.; Niemack, M. D.; Beall, J.; Becker, D. T.; Bond, J. R.; Calabrese, E.; Cho, H.; Coughlin, K.; Datta, R.; Devlin, M.; Dunkley, J.; Dunner, R.; Ferraro, S.; Fox, A.; Gallardo, P. A.; Halpern, M.; Hand, N.; Hasselfield, M.; Henderson, S. W.; Hill, J. C.; Hilton, G. C.; Hilton, M.; Hincks, A. D.; Hlozek, R.; Hubmayr, J.; Huffenberger, K.; Hughes, J. P.; Irwin, K. D.; Koopman, B. J.; Kosowsky, A.; Li, D.; Louis, T.; Lungu, M.; Madhavacheril, M. S.; Maurin, L.; McMahon, J.; Moodley, K.; Naess, S.; Nati, F.; Newburgh, L.; Nibarger, J. P.; Page, L. A.; Partridge, B.; Schaan, E.; Schmitt, B. L.; Sehgal, N.; Sievers, J.; Simon, S. M.; Spergel, D. N.; Staggs, S. T.; Stevens, J. R.; Thornton, R. J.; van Engelen, A.; Van Lanen, J.; Wollack, E. J.
2017-03-01
We present a new measurement of the kinematic Sunyaev-Zel'dovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fits an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation, using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-to-noise values, and consider the impact of several possible systematic errors. Estimates of the optical depth from the average thermal Sunyaev-Zel'dovich signal at the sample galaxy positions are broadly consistent with those obtained from the mean pairwise momentum signal.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Stimulus-dependent maximum entropy models of neural population codes.
Granot-Atedgi, Einat; Tkačik, Gašper; Segev, Ronen; Schneidman, Elad
2013-01-01
Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
Exact Protein Structure Classification Using the Maximum Contact Map Overlap Metric
Wohlers, Inken; Le Boudic-Jamin, Mathilde; Djidjev, Hristo; Klau, Gunnar; Andonov, Rumen
2014-01-01
In this work we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of protein representations. Having a metric in that space allows us to avoid pairwise comparisons on the entire database and thus to significantly accelerate exploring the protein space compared to non-metric spaces. We...
Kaur Parminder
2012-08-01
spectrometry data from “bottom up” proteomics methods, functionally related protein/peptide pairs exhibiting coordinated changes in expression profile are discovered, which represent a signature for patients progressing to various disease conditions. The method has been tested against clinical data from patients progressing to idiopathic pneumonia syndrome (IPS) following a bone marrow transplant. The data indicate that patients with improper regulation of the concentration of specific acute phase response proteins at the time of bone marrow transplant are highly likely to develop IPS within a few weeks. The results lead to a specific set of protein pairs that can be efficiently verified by investigating the pairwise abundance change in independent cohorts using ELISA or targeted mass spectrometry techniques. This generalized classifier can be extended to other clinical problems in a variety of contexts.
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
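The numerical value of the conjectured bound $F_{max} = c^4/4G$ is easy to check. The short sketch below, which is illustrative and not taken from the abstract, evaluates it with CODATA constants:

```python
# Numerical value of the conjectured maximum tension (force) F_max = c^4 / (4G).
# Constants are CODATA values; this is an illustrative check only.
c = 2.99792458e8   # speed of light, m/s
G = 6.67430e-11    # gravitational constant, m^3 kg^-1 s^-2

F_max = c**4 / (4 * G)
print(f"F_max = {F_max:.3e} N")  # on the order of 3e43 newtons
```

The result, roughly 3 x 10^43 N, is the scale against which the "abolished" limits in the abstract are measured.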
Taneda Akito
2008-12-01
Full Text Available Abstract Background Aligning RNA sequences with low sequence identity has been a challenging problem since such a computation essentially needs an algorithm with high complexities for taking structural conservation into account. Although many sophisticated algorithms for the purpose have been proposed to date, further improvement in efficiency is necessary to accelerate its large-scale applications including non-coding RNA (ncRNA) discovery. Results We developed a new genetic algorithm, Cofolga2, for simultaneously computing pairwise RNA sequence alignment and consensus folding, and benchmarked it using BRAliBase 2.1. The benchmark results showed that our new algorithm is accurate and efficient in both time and memory usage. Then, combining with the originally trained SVM, we applied the new algorithm to novel ncRNA discovery where we compared the S. cerevisiae genome with six related genomes in a pairwise manner. By focusing our search on the relatively short regions (50 bp to 2,000 bp) sandwiched by conserved sequences, we successfully predicted 714 intergenic and 1,311 sense or antisense ncRNA candidates, which were found in the pairwise alignments with stable consensus secondary structure and low sequence identity (≤ 50%). By comparing with the previous predictions, we found that > 92% of the candidates are novel. The estimated rate of false positives in the predicted candidates is 51%. Twenty-five percent of the intergenic candidates have support for expression in the cell, i.e., their genomic positions overlap those of the experimentally determined transcripts in the literature. By manual inspection of the results, moreover, we obtained four multiple alignments with low sequence identity which reveal consensus structures shared by three species/sequences. Conclusion The present method gives an efficient tool complementary to sequence-alignment-based ncRNA finders.
Nese Güler
2009-04-01
Full Text Available The Aim and Significance of the Research: The characteristics that lecturers wish students applying for post-graduate programs to possess are determined in this paper quantitatively through pairwise comparisons according to the lecturers' responses. The fact that resources and studies concerning the issue of scaling are scarcely available has been the most significant driving force for the researchers to conduct research on this issue. It is believed that this research will make contributions to the field of scaling, which has a limited number of studies. Since this research is a work of scaling, which is rarely seen in the field of education, it is thought to be significant. Method of Research: The research was conducted on 129 lecturers working in the different departments of Hacettepe University in the fall and spring semesters of the 2006-2007 academic year. At the stage of preparing the measurement tool, the 7 characteristics that students should possess for selection to post-graduate education programs were determined, and a measurement tool through which pairwise comparisons would be made was designed. Consequently, the scale value for each characteristic was marked on the number line. Findings and Comments: According to the pairwise comparison, academic achievement score is in the first order. This is followed by the score gained in the interview, the purpose in entering the department, the level of English proficiency, ALES score, whether or not they are originally students of the department, and whether or not they have a letter of reference, respectively. According to the results, when students are selected for post-graduate education programs, it is suggested that the weighting of the required student characteristics be made by considering this order. In addition, it is thought that studying with different samples and different scaling methods would provide important
Chella, Federico; Pizzella, Vittorio; Zappasodi, Filippo; Nolte, Guido; Marzetti, Laura
2016-05-01
Brain cognitive functions arise through the coordinated activity of several brain regions, which actually form complex dynamical systems operating at multiple frequencies. These systems often consist of interacting subsystems, whose characterization is of importance for a complete understanding of the brain interaction processes. To address this issue, we present a technique, namely the bispectral pairwise interacting source analysis (biPISA), for analyzing systems of cross-frequency interacting brain sources when multichannel electroencephalographic (EEG) or magnetoencephalographic (MEG) data are available. Specifically, the biPISA makes it possible to identify one or many subsystems of cross-frequency interacting sources by decomposing the antisymmetric components of the cross-bispectra between EEG or MEG signals, based on the assumption that interactions are pairwise. Thanks to the properties of the antisymmetric components of the cross-bispectra, biPISA is also robust to spurious interactions arising from mixing artifacts, i.e., volume conduction or field spread, which always affect EEG or MEG functional connectivity estimates. This method is an extension of the pairwise interacting source analysis (PISA), which was originally introduced for investigating interactions at the same frequency, to the study of cross-frequency interactions. The effectiveness of this approach is demonstrated in simulations for up to three interacting source pairs and for real MEG recordings of spontaneous brain activity. Simulations show that the performances of biPISA in estimating the phase difference between the interacting sources are affected by the increasing level of noise rather than by the number of the interacting subsystems. The analysis of real MEG data reveals an interaction between two pairs of sources of central mu and beta rhythms, localizing in the proximity of the left and right central sulci.
Koizumi, Itsuro; Yamamoto, Shoichiro; Maekawa, Koji
2006-10-01
Isolation by distance is usually tested by the correlation of genetic and geographic distances separating all pairwise combinations of populations. However, this method can be significantly biased by only a few highly diverged populations and loses the information of individual populations. To detect outlier populations and investigate the relative strengths of gene flow and genetic drift for each population, we propose a decomposed pairwise regression analysis. This analysis was applied to the well-described one-dimensional stepping-stone system of stream-dwelling Dolly Varden charr (Salvelinus malma). When genetic and geographic distances were plotted for all pairs of 17 tributary populations, the correlation was significant but weak (r(2) = 0.184). Seven outlier populations were determined based on the systematic bias of the regression residuals, followed by Akaike's information criteria. The best model, 10 populations included, showed a strong pattern of isolation by distance (r(2) = 0.758), suggesting equilibrium between gene flow and genetic drift in these populations. Each outlier population was also analysed by plotting pairwise genetic and geographic distances against the 10 nonoutlier populations, and categorized into one of three patterns: strong genetic drift, genetic drift with a limited gene flow, and a high level of gene flow. These classifications were generally consistent with a priori predictions for each population (physical barrier, population size, anthropogenic impacts). Combining the genetic analysis with field observations, Dolly Varden in this river appeared to form a mainland-island or source-sink metapopulation structure. The generality of the method will merit many types of spatial genetic analyses.
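The decomposed pairwise regression idea described above can be sketched as follows. The simulated data, names, and outlier threshold are illustrative assumptions, not the authors' implementation: fit a global isolation-by-distance regression over all population pairs, then flag populations whose pairwise residuals are systematically biased.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Symmetric matrices of pairwise geographic and genetic distances (toy data).
geo = np.abs(rng.normal(0, 10, (n, n))); geo = (geo + geo.T) / 2
np.fill_diagonal(geo, 0)
gen = 0.01 * geo + rng.normal(0, 0.02, (n, n)); gen = (gen + gen.T) / 2
gen[7, :] += 0.5; gen[:, 7] += 0.5  # population 7 is strongly drifted
np.fill_diagonal(gen, 0)

# Global IBD regression over all unordered pairs.
iu = np.triu_indices(n, k=1)
slope, intercept = np.polyfit(geo[iu], gen[iu], 1)

# Decompose: per-population mean residual, flag systematically biased ones.
resid = gen - (slope * geo + intercept)
np.fill_diagonal(resid, 0)
mean_resid = resid.sum(axis=0) / (n - 1)
outliers = np.where(np.abs(mean_resid) > 2 * mean_resid.std())[0]
```

On this toy dataset the drifted population is recovered as the single outlier, mirroring how the paper separates outlier populations before refitting the isolation-by-distance model.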
Wang, Hailong; Cao, Leiming; Jing, Jietai
2017-01-10
We theoretically characterize the performance of the pairwise correlations (PCs) from multiple quantum correlated beams based on the cascaded four-wave mixing (FWM) processes. The presence of the PCs with quantum correlation in these systems can be verified by calculating the degree of intensity difference squeezing for any pair of all the output fields. The quantum correlation characteristics of all the PCs under different cascaded schemes are also discussed in detail and the repulsion effect between PCs in these cascaded FWM processes is theoretically predicted. Our results open the way for the classification and application of quantum states generated from the cascaded FWM processes.
刘建; 王琪洁; 张昊
2013-01-01
Aiming to resolve the edge effect in predicting length of day (LOD) with the least squares and autoregressive (LS+AR) model, we employed a time series analysis method to extrapolate the LOD series and produce a new series. Then, we used the new series to solve for the coefficients of the LS model. Finally, we used the LS+AR model to predict the original LOD series again. By comparing the accuracy of LOD prediction by the edge-effect corrected LS+AR model with that of the standard LS+AR model, we conclude that the edge-effect correction can improve prediction accuracy, especially for medium-term and long-term predictions.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with the property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given compact metric set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
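The extreme-value reasoning above can be made concrete with a standard textbook construction (an illustrative sketch, not the authors' model): if magnitudes follow a doubly truncated Gutenberg-Richter law, the largest of N independent events has CDF F(m)^N, which concentrates near the upper truncation point M only for very large N.

```python
import math

def gr_cdf(m, m_min=4.0, m_max=9.0, b=1.0):
    """CDF of a doubly truncated Gutenberg-Richter magnitude distribution."""
    beta = b * math.log(10)
    num = 1 - math.exp(-beta * (m - m_min))
    den = 1 - math.exp(-beta * (m_max - m_min))
    return num / den

def max_mag_cdf(m, n_events, **kw):
    """CDF of the largest magnitude among n_events independent events."""
    return gr_cdf(m, **kw) ** n_events

# Probability that the largest of 1000 events stays below magnitude 7,
# given a catalog truncated at m_max = 9 (all parameters illustrative):
p = max_mag_cdf(7.0, 1000)  # roughly 0.37
```

Even with 1000 events, the distribution of the observed maximum is broad, which is one way to see why testing a reported M estimate against rare extremes is so difficult.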
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED, because compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from two views, and then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of the AMVMED, and comparisons with MVMED are also reported.
Local image statistics: maximum-entropy constructions and perceptual salience.
Victor, Jonathan D; Conte, Mary M
2012-07-01
The space of visual signals is high-dimensional and natural visual images have a highly complex statistical structure. While many studies suggest that only a limited number of image statistics are used for perceptual judgments, a full understanding of visual function requires analysis not only of the impact of individual image statistics, but also, how they interact. In natural images, these statistical elements (luminance distributions, correlations of low and high order, edges, occlusions, etc.) are intermixed, and their effects are difficult to disentangle. Thus, there is a need for construction of stimuli in which one or more statistical elements are introduced in a controlled fashion, so that their individual and joint contributions can be analyzed. With this as motivation, we present algorithms to construct synthetic images in which local image statistics--including luminance distributions, pair-wise correlations, and higher-order correlations--are explicitly specified and all other statistics are determined implicitly by maximum-entropy. We then apply this approach to measure the sensitivity of the human visual system to local image statistics and to sample their interactions.
Kauko, Otto; Laajala, Teemu Daniel; Jumppanen, Mikael; Hintsanen, Petteri; Suni, Veronika; Haapaniemi, Pekka; Corthals, Garry; Aittokallio, Tero; Westermarck, Jukka; Imanishi, Susumu Y
2015-08-17
Hyperactivated RAS drives progression of many human malignancies. However, oncogenic activity of RAS is dependent on simultaneous inactivation of protein phosphatase 2A (PP2A) activity. Although PP2A is known to regulate some of the RAS effector pathways, it has not been systematically assessed how these proteins functionally interact. Here we have analyzed phosphoproteomes regulated by either RAS or PP2A, by phosphopeptide enrichment followed by mass-spectrometry-based label-free quantification. To allow data normalization in situations where depletion of RAS or PP2A inhibitor CIP2A causes a large uni-directional change in the phosphopeptide abundance, we developed a novel normalization strategy, named pairwise normalization. This normalization is based on adjusting phosphopeptide abundances measured before and after the enrichment. The superior performance of the pairwise normalization was verified by various independent methods. Additionally, we demonstrate how the selected normalization method influences the downstream analyses and interpretation of pathway activities. Consequently, bioinformatics analysis of RAS and CIP2A regulated phosphoproteomes revealed a significant overlap in their functional pathways. This is most likely biologically meaningful as we observed a synergistic survival effect between CIP2A and RAS expression as well as KRAS activating mutations in TCGA pan-cancer data set, and synergistic relationship between CIP2A and KRAS depletion in colony growth assays.
De Bernardis, F; Vavagiakis, E M; Niemack, M D; Battaglia, N; Beall, J; Becker, D T; Bond, J R; Calabrese, E; Cho, H; Coughlin, K; Datta, R; Devlin, M; Dunkley, J; Dunner, R; Ferraro, S; Fox, A; Gallardo, P A; Halpern, M; Hand, N; Hasselfield, M; Henderson, S W; Hill, J C; Hilton, G C; Hilton, M; Hincks, A D; Hlozek, R; Hubmayr, J; Huffenberger, K; Hughes, J P; Irwin, K D; Koopman, B J; Kosowsky, A; Li, D; Louis, T; Lungu, M; Madhavacheril, M S; Maurin, L; McMahon, J; Moodley, K; Naess, S; Nati, F; Newburgh, L; Nibarger, J P; Page, L A; Partridge, B; Schaan, E; Schmitt, B L; Sehgal, N; Sievers, J; Simon, S M; Spergel, D N; Staggs, S T; Stevens, J R; Thornton, R J; van Engelen, A; Van Lanen, J; Wollack, E J
2016-01-01
We present a new measurement of the kinematic Sunyaev-Zeldovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fits an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation, using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-...
3D-4D Interlinkage Of qqq Wave Functions Under 3D Support For Pairwise Bethe-Salpeter Kernels
Mitra, A N
1998-01-01
Using the method of Green's functions within a Bethe-Salpeter framework characterized by a pairwise qq interaction with a Lorentz-covariant 3D support to its kernel, the 4D BS wave function for a system of 3 identical relativistic spinless quarks is reconstructed from the corresponding 3D form which satisfies a fully connected 3D BSE. This result is a 3-body generalization of a similar 2-body result found earlier under identical conditions of a 3D support to the corresponding qq-bar BS kernel under Covariant Instantaneity (CIA for short). (The generalization from spinless to fermion quarks is straightforward.) To set the CIA with 3D BS kernel support ansatz in the context of contemporary approaches to the qqq baryon problem, a model scalar 4D qqq BSE with pairwise contact interactions to simulate the NJL-Faddeev equations is worked out fully, and a comparison of both vertex functions shows that the CIA vertex reduces exactly to the NJL form in the limit of zero spatial range. This consistency check on the CIA ve...
Suto, Y; Suto, Yasushi; Jing, Yi-Peng
1996-01-01
We discuss the effect of the finite size of galaxies on estimating small-scale relative pairwise peculiar velocity dispersions from the cosmic virial theorem (CVT). Specifically we evaluate the effect by incorporating the finite core radius $r_c$ in the two-point correlation function of mass, i.e. softening $r_s$ on small scales. We analytically obtain the lowest-order correction term for $\gamma 2$. Compared with the idealistic point-mass approximation ($r_s=r_c=0$), the finite size effect can significantly reduce the small-scale velocity dispersions of galaxies at scales much larger than $r_s$ and $r_c$. Even without considering the finite size of galaxies, nonzero values for $r_c$ are generally expected, for instance, for cold dark matter (CDM) models with a scale-invariant primordial spectrum. For these CDM models, a reasonable force softening $r_s \le 100\,h^{-1}\,\mathrm{kpc}$ would have a rather tiny effect. We present the CVT predictions for the small-scale pairwise velocity dispersion in the CDM models normalized by t...
Bond, Stephen D.
2014-01-01
The availability of efficient algorithms for long-range pairwise interactions is central to the success of numerous applications, ranging in scale from atomic-level modeling of materials to astrophysics. This report focuses on the implementation and analysis of the multilevel summation method for approximating long-range pairwise interactions. The computational cost of the multilevel summation method is proportional to the number of particles, N, which is an improvement over FFT-based methods whose cost is asymptotically proportional to N log N. In addition to approximating electrostatic forces, the multilevel summation method can be used to efficiently approximate convolutions with long-range kernels. As an application, we apply the multilevel summation method to a discretized integral equation formulation of the regularized generalized Poisson equation. Numerical results are presented using an implementation of the multilevel summation method in the LAMMPS software package. Preliminary results show that the computational cost of the method scales as expected, but there is still a need for further optimization.
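For reference, the direct O(N^2) evaluation that multilevel summation (and FFT-based methods) approximate can be sketched as follows; the Coulomb-like 1/r kernel, names, and toy configuration are illustrative, not from the report:

```python
import numpy as np

def direct_pairwise_energy(pos, q):
    """Sum q_i * q_j / |r_i - r_j| over all pairs i < j (quadratic cost)."""
    n = len(q)
    e = 0.0
    for i in range(n):
        d = pos[i + 1:] - pos[i]            # vectors to all later particles
        r = np.sqrt((d * d).sum(axis=1))    # pairwise distances
        e += (q[i] * q[i + 1:] / r).sum()
    return e

# Three unit charges on a right triangle with legs of length 1.
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
q = np.array([1.0, -1.0, 1.0])
energy = direct_pairwise_energy(pos, q)
```

Every pair is touched once, which is what makes the cost quadratic; the multilevel summation method trades this exactness for a hierarchy of smoothed kernels evaluated on grids, bringing the cost down to O(N).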
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randi\'c. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
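The Kirchhoff index itself is straightforward to compute from the Laplacian spectrum: $Kf(G) = n \sum 1/\mu_i$ over the nonzero Laplacian eigenvalues, which equals the sum of resistance distances over all vertex pairs. A minimal sketch (not from the paper), checked on the 4-cycle $C_4$:

```python
import numpy as np

def kirchhoff_index(adj):
    """Kf(G) = n * sum of reciprocals of nonzero Laplacian eigenvalues."""
    n = adj.shape[0]
    lap = np.diag(adj.sum(axis=1)) - adj          # graph Laplacian L = D - A
    eig = np.linalg.eigvalsh(lap)                 # eigenvalues, ascending
    return n * sum(1.0 / x for x in eig if x > 1e-9)

# Adjacency matrix of the 4-cycle C4 (vertices 0-1-2-3-0).
c4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
kf = kirchhoff_index(c4)
```

For $C_4$ the resistance distances are 3/4 for the four adjacent pairs and 1 for the two opposite pairs, so $Kf(C_4) = 4 \cdot 3/4 + 2 \cdot 1 = 5$, matching the spectral formula.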
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices is not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Leibfarth, Sara; Moennich, David; Thorwarth, Daniela [University Hospital Tuebingen, Section for Biomedical Physics, Department of Radiation Oncology, Tuebingen (Germany); Simoncic, Urban [University Hospital Tuebingen, Section for Biomedical Physics, Department of Radiation Oncology, Tuebingen (Germany); University of Ljubljana, Faculty of Mathematics and Physics, Ljubljana (Slovenia); Jozef Stefan Institute, Ljubljana (Slovenia); Welz, Stefan; Zips, Daniel [University Hospital Tuebingen, Department of Radiation Oncology, Tuebingen (Germany); Schmidt, Holger; Schwenzer, Nina [University Hospital Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany)
2016-07-15
The aim of this pilot study was to explore simultaneous functional PET/MR for biological characterization of tumors and potential future treatment adaptations. To investigate the extent of complementarity between different PET/MR-based functional datasets, a pairwise correlation analysis was performed. Functional datasets of N=15 head and neck (HN) cancer patients were evaluated. For patients of group A (N=7), combined PET/MR datasets including FDG-PET and ADC maps were available. Patients of group B (N=8) had FMISO-PET, DCE-MRI and ADC maps from combined PET/MRI, an additional dynamic FMISO-PET/CT acquired directly after FMISO tracer injection as well as an FDG-PET/CT acquired a few days earlier. From DCE-MR, parameter maps K^trans, v_e and v_p were obtained with the extended Tofts model. Moreover, parameter maps of mean DCE enhancement, ΔS_DCE, and mean FMISO signal 0-4 min p.i., Ā_FMISO, were derived. Pairwise correlations were quantified using the Spearman correlation coefficient (r) on both a voxel and a regional level within the gross tumor volume. Between some pairs of functional imaging modalities, moderate correlations were observed with respect to the median over all patient datasets, whereas distinct correlations were only present on an individual basis. The highest inter-modality median correlations on the voxel level were obtained for FDG/FMISO (r = 0.56), FDG/Ā_FMISO (r = 0.55), Ā_FMISO/ΔS_DCE (r = 0.46), and FDG/ADC (r = -0.39). Correlations on the regional level showed comparable results. The results of this study suggest that the examined functional datasets provide complementary information. However, only pairwise correlations were examined, and correlations could still exist between combinations of three or more datasets. These results might contribute to the future design of individually adapted treatment approaches based on multiparametric functional imaging.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
R3D Align: global pairwise alignment of RNA 3D structures using local superpositions
Rahrig, Ryan R.; Leontis, Neocles B.; Zirbel, Craig L.
2010-01-01
Motivation: Comparing 3D structures of homologous RNA molecules yields information about sequence and structural variability. To compare large RNA 3D structures, accurate automatic comparison tools are needed. In this article, we introduce a new algorithm and web server to align large homologous RNA structures nucleotide by nucleotide using local superpositions that accommodate the flexibility of RNA molecules. Local alignments are merged to form a global alignment by employing a maximum clique algorithm on a specially defined graph that we call the ‘local alignment’ graph. Results: The algorithm is implemented in a program suite and web server called ‘R3D Align’. The R3D Align alignment of homologous 3D structures of 5S, 16S and 23S rRNA was compared to a high-quality hand alignment. A full comparison of the 16S alignment with the other state-of-the-art methods is also provided. The R3D Align program suite includes new diagnostic tools for the structural evaluation of RNA alignments. The R3D Align alignments were compared to those produced by other programs and were found to be the most accurate, in comparison with a high quality hand-crafted alignment and in conjunction with a series of other diagnostics presented. The number of aligned base pairs as well as measures of geometric similarity are used to evaluate the accuracy of the alignments. Availability: R3D Align is freely available through a web server http://rna.bgsu.edu/R3DAlign. The MATLAB source code of the program suite is also freely available for download at that location. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: r-rahrig@onu.edu PMID:20929913
Jiaxu Chen
2013-01-01
Human mobility modeling has increasingly drawn the attention of researchers working on wireless mobile networks such as delay tolerant networks (DTNs) in the last few years. So far, a number of human mobility models have been proposed to reproduce people's social relationships, which strongly affect people's daily movement behaviors. However, most of them are based on the granularity of community. This paper presents the interest-oriented human contacts (IHC) mobility model, which can reproduce social relationships at a pairwise granularity. IHC also provides two methods to generate input parameters (interest vectors) based on the social interaction matrix of target scenarios. By comparing synthetic data generated by IHC with three different real traces, we validate our model as a good approximation of human mobility. Exhaustive experiments are also conducted to show that IHC can accurately predict the performance of routing protocols.
Abildtrup, Jens; Audsley, E.; Fekete-Farkas, M.;
2006-01-01
Assessment of the vulnerability of agriculture to climate change is strongly dependent on concurrent changes in socio-economic development pathways. This paper presents an integrated approach to the construction of socio-economic scenarios required for the analysis of climate change impacts on European agricultural land use. The scenarios are interpreted from the storylines described in the intergovernmental panel on climate change (IPCC) special report on emission scenarios (SRES), which ensures internal consistency between the evolution of socio-economics and climate change. A stepwise … socio-economic scenarios that are consistent with climate change scenarios used in climate impact studies. Furthermore, the pairwise comparison approach developed by Saaty [Saaty, T.L., 1980. The Analytic Hierarchy Process. McGraw Hill, New York] provides a useful tool for the quantification from narrative storylines …
Kato Mikio
2003-01-01
Satellite DNA sequences are known to be highly variable and to have been subjected to concerted evolution that homogenizes member sequences within species. We have analyzed the mode of evolution of satellite DNA sequences in four fishes from the genus Diplodus by calculating the nucleotide frequency of the sequence array and the phylogenetic distances between member sequences. Calculation of nucleotide frequency and pairwise sequence comparison enabled us to characterize the divergence among member sequences in this satellite DNA family. The results suggest that the evolutionary rate of satellite DNA in D. bellottii is about two-fold greater than the average of the other three fishes, and that the sequence homogenization event occurred in D. puntazzo more recently than in the others. The procedures described here are effective for characterizing the mode of evolution of satellite DNA.
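The pairwise sequence comparison step described above can be illustrated with a minimal sketch (the sequences below are hypothetical toy repeats, not data from the paper): the mean pairwise p-distance within a repeat array is a simple proxy for how strongly concerted evolution has homogenized it.

```python
from itertools import combinations

def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def mean_pairwise_distance(seqs):
    """Average p-distance over all pairs of member sequences: a simple
    measure of array homogenization (smaller = more uniform)."""
    pairs = list(combinations(seqs, 2))
    return sum(p_distance(a, b) for a, b in pairs) / len(pairs)

# a well-homogenized array versus a diverged one (toy data)
homogenized = ["ACGTACGT", "ACGTACGT", "ACGAACGT"]
diverged    = ["ACGTACGT", "TCGAAGGT", "ACTTACCA"]
```

Here `mean_pairwise_distance(homogenized)` is about 0.08 versus 0.5 for the diverged array, mirroring the kind of contrast the authors quantify between species.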
Raikov, A. A.; Orlov, V. V.; Gerasim, R. V.
2014-06-01
A pairwise distance technique developed by the authors is used to identify signs of fractal structure in sets of extragalactic supernovae (822 type Ia supernovae in the regions 300° ≤ α ≤ 360° and 0° ≤ α ≤ 60°, −5° ≤ δ ≤ 5°). Since the region of space occupied by the objects in the sample is highly oblate, we use Mandelbrot's codimensionality theorem. Three cosmological models are examined: a model with a Euclidean metric, a "tired light" model, and the standard ΛCDM model. Estimates of a fractal dimensionality of D ≅ 2.69 are obtained for the first two models and D ≅ 2.64 for the ΛCDM model.
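The abstract does not spell out the authors' pairwise distance technique, but the general idea of reading a fractal dimension off pairwise distances can be sketched with the classic correlation-integral (Grassberger-Procaccia) estimator; the point set and scale range below are illustrative assumptions, not the supernova data.

```python
import math
import random

def correlation_dimension(points, r_values):
    """Estimate the correlation dimension D2 from pairwise distances.
    C(r) = fraction of point pairs closer than r; for a fractal set
    C(r) ~ r^D2, so D2 is the slope of log C(r) versus log r."""
    n = len(points)
    pd = [math.dist(points[i], points[j])
          for i in range(n) for j in range(i + 1, n)]
    npairs = len(pd)
    xs, ys = [], []
    for r in r_values:
        c = sum(1 for d in pd if d < r) / npairs
        xs.append(math.log(r))
        ys.append(math.log(c))
    # least-squares slope of the log-log relation
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(1)
# points filling a plane embedded in 3D: expected D2 close to 2
pts = [(random.random(), random.random(), 0.0) for _ in range(600)]
D2 = correlation_dimension(pts, [0.05 * 1.3 ** k for k in range(8)])
```

For a sample filling a highly oblate (nearly planar) region, as in the paper, the estimate sits near 2; edge effects at the largest scales pull it slightly below.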
Muh, Hon Cheng; Tong, Joo Chuan; Tammi, Martti T
2009-06-10
Allergy is a major health problem in industrialized countries. The number of transgenic food crops is growing rapidly, creating the need for allergenicity assessment before they are introduced into the human food chain. While existing bioinformatic methods have achieved good accuracies for highly conserved sequences, the discrimination of allergens and non-allergens from allergen-like non-allergen sequences remains difficult. We describe AllerHunter, a web-based computational system for the assessment of potential allergenicity and allergic cross-reactivity in proteins. It combines an iterative pairwise sequence similarity encoding scheme with an SVM as the discriminating engine. The pairwise vectorization framework allows the system to model essential features in allergens that are involved in cross-reactivity, but not limited to distinct sets of physicochemical properties. The system was rigorously trained and tested using 1,356 known allergen and 13,449 putative non-allergen sequences. Extensive testing was performed for validation of the prediction models. The system is effective for distinguishing allergens and non-allergens from allergen-like non-allergen sequences. Testing results showed that AllerHunter, with a sensitivity of 83.4% and specificity of 96.4% (accuracy = 95.3%, area under the receiver operating characteristic curve AROC = 0.928 ± 0.004, and Matthews correlation coefficient MCC = 0.738), performs significantly better than a number of existing methods on an independent dataset of 1,443 protein sequences. AllerHunter is available at http://tiger.dbs.nus.edu.sg/AllerHunter.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
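As a concrete illustration of the self-regulation discussed above, here is a minimal sketch of the classic two-daisy model (the standard textbook formulation with the commonly quoted Watson & Lovelock constants; the forward-Euler loop and step count are simplifying choices, not a claim about how the cited studies integrate the model):

```python
import math

SIGMA = 5.67e-8            # Stefan-Boltzmann constant (W m^-2 K^-4)
S = 917.0                  # solar flux density used by Watson & Lovelock (1983)
Q = 2.06e9                 # local-to-planetary temperature coupling (K^4)
A_BARE, A_WHITE, A_BLACK = 0.5, 0.75, 0.25   # albedos
GAMMA = 0.3                # daisy death rate

def daisyworld(L, steps=2000, dt=0.01):
    """Forward-Euler integration of the two-daisy model at luminosity L.
    Returns (white fraction, black fraction, planetary temperature in K)."""
    aw = ab = 0.01
    for _ in range(steps):
        x = max(1.0 - aw - ab, 0.0)                  # bare ground fraction
        A = x * A_BARE + aw * A_WHITE + ab * A_BLACK
        Te4 = S * L * (1.0 - A) / SIGMA              # planetary temperature^4
        Tw = (Q * (A - A_WHITE) + Te4) ** 0.25       # local daisy temperatures
        Tb = (Q * (A - A_BLACK) + Te4) ** 0.25
        bw = max(1.0 - 0.003265 * (295.5 - Tw) ** 2, 0.0)   # growth rates
        bb = max(1.0 - 0.003265 * (295.5 - Tb) ** 2, 0.0)
        aw = max(aw + dt * aw * (x * bw - GAMMA), 0.001)    # 0.001 = seed stock
        ab = max(ab + dt * ab * (x * bb - GAMMA), 0.001)
    A = max(1.0 - aw - ab, 0.0) * A_BARE + aw * A_WHITE + ab * A_BLACK
    return aw, ab, (S * L * (1.0 - A) / SIGMA) ** 0.25

aw, ab, T = daisyworld(L=1.0)
```

At L = 1 both daisy types coexist and the planetary temperature settles near the daisies' optimum of 295.5 K, the hallmark self-regulating behavior of the model.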
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem till date runs in $O(m \sqrt{n})$ time due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
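The idea of sampling matchings with a fugacity-biased chain can be sketched as follows. This is not the paper's algorithm (which carries a running-time analysis the sketch makes no attempt at); it is just a Metropolis/Glauber-style chain over matchings in which a fugacity λ > 1 favors larger matchings, with the largest matching seen along the way recorded. Graph, λ, and step count are illustrative.

```python
import random

def glauber_matching(edges, n_vertices, lam=4.0, steps=20000, seed=0):
    """Metropolis/Glauber-style chain on the matchings of a graph.
    Each step picks a uniform edge: a matched edge is removed with
    probability 1/lam; an unmatched edge with both endpoints free is
    added (acceptance min(1, lam) = 1 for lam >= 1). Returns the largest
    matching encountered."""
    rng = random.Random(seed)
    matched = [None] * n_vertices        # vertex -> partner, or None
    in_m, best = set(), set()
    for _ in range(steps):
        u, v = edges[rng.randrange(len(edges))]
        e = (min(u, v), max(u, v))
        if e in in_m:
            if rng.random() < 1.0 / lam:             # propose removal
                in_m.discard(e)
                matched[u] = matched[v] = None
        elif matched[u] is None and matched[v] is None:
            in_m.add(e)                              # addition always accepted
            matched[u], matched[v] = v, u
        if len(in_m) > len(best):
            best = set(in_m)
    return best

# cycle on 6 vertices: the maximum matching has 3 edges
edges = [(i, (i + 1) % 6) for i in range(6)]
M = glauber_matching(edges, 6)
```

On this toy cycle the chain escapes size-2 maximal matchings via removals and quickly visits a perfect matching of size 3.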
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
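The background-only versus background-plus-source comparison at the heart of the tool can be illustrated with a deliberately simplified sketch: one source region, one background region, a flat background, no PSF, and Poisson counts. This is an illustration of the likelihood-ratio idea only, not the Sherpa/MLE implementation.

```python
import math

def detection_statistic(n_src, area_src, n_bkg, area_bkg):
    """Likelihood-ratio statistic 2*ln(L1/L0) for a candidate source.
    H0: a single flat background rate explains both regions.
    H1: background rate from the background region plus a source excess."""
    # H0: common background rate fitted to the combined counts
    b0 = (n_src + n_bkg) / (area_src + area_bkg)
    # H1: background from the background region, source excess on top
    b1 = n_bkg / area_bkg
    s = max(n_src / area_src - b1, 0.0)

    def loglike(n, mu):
        # Poisson log-likelihood up to the n!-term, which cancels in the ratio
        return n * math.log(mu) - mu

    l0 = loglike(n_src, b0 * area_src) + loglike(n_bkg, b0 * area_bkg)
    l1 = loglike(n_src, (b1 + s) * area_src) + loglike(n_bkg, b1 * area_bkg)
    return 2.0 * (l1 - l0)

# a strong excess in the source region yields a large statistic,
# a marginal excess yields a statistic near zero (counts are toy values)
strong = detection_statistic(n_src=40, area_src=1.0, n_bkg=100, area_bkg=10.0)
weak = detection_statistic(n_src=11, area_src=1.0, n_bkg=100, area_bkg=10.0)
```

Thresholding such a statistic is how a candidate detection is kept or rejected; the real tool fits spatial models per observation and per PSF rather than simple region counts.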
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary process. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational …
Farajzadeh, Leila; Hornshøj, Henrik; Momeni, Jamal; Thomsen, Bo; Larsen, Knud; Hedegaard, Jakob; Bendixen, Christian; Madsen, Lone Bruhn, E-mail: LoneB.Madsen@agrsci.dk
2013-08-23
Highlights: •Transcriptome sequencing yielded 223 million porcine RNA-seq reads and 59,000 transcribed locations. •Establishment of unique transcription profiles for ten porcine tissues, including four brain tissues. •Comparison of transcription profiles at gene, isoform, promoter, and transcription start site level. •Highlights a high level of regulation of neuro-related genes at the gene, isoform, and TSS levels. •Our results emphasize the pig as a valuable animal model with respect to human biological issues. -- Abstract: The transcriptome is the absolute set of transcripts in a tissue or cell at the time of sampling. In this study RNA-Seq is employed to enable the differential analysis of the transcriptome profile for ten porcine tissues in order to evaluate differences between the tissues at the gene and isoform expression level, together with an analysis of variation in transcription start sites, promoter usage, and splicing. In total, 223 million RNA fragments were sequenced, leading to the identification of 59,930 transcribed gene locations and 290,936 transcript variants using Cufflinks, with similarity to approximately 13,899 annotated human genes. Pairwise analysis of tissues for differential expression at the gene level showed that the smallest differences were between tissues originating from the porcine brain. Interestingly, the relative level of differential expression at the isoform level did generally not vary between tissue contrasts. Furthermore, analysis of differential promoter usage between tissues revealed a proportionally higher variation between cerebellum (CBE) versus frontal cortex and cerebellum versus hypothalamus (HYP) than in the remaining comparisons. In addition, the comparison of differential transcription start sites showed that the number of these sites is generally increased in comparisons including hypothalamus in contrast to other pairwise assessments. A comprehensive analysis of one of the tissue contrasts, i…
Kernel principal component and maximum autocorrelation factor analyses for change detection
Nielsen, Allan Aasbjerg; Canty, Morton John
2009-01-01
Principal component analysis (PCA) has often been used to detect change over time in remotely sensed images. A commonly used technique consists of finding the projections along the eigenvectors for data consisting of pair-wise (perhaps generalized) differences between corresponding spectral bands covering the same geographical region acquired at two different time points. In this paper kernel versions of the principal component and maximum autocorrelation factor (MAF) transformations are used to carry out the analysis. An example is based on bi-temporal Landsat-5 TM imagery over irrigation fields in Nevada acquired on successive passes of the Landsat-5 satellite in August-September 1991. The six-band images (the thermal band is omitted) with 1,000 by 1,000 28.5 m pixels were first processed with the iteratively re-weighted MAD (IR-MAD) algorithm in order to discriminate change. Then the MAD image …
Reinisch, Elena C.; Cardiff, Michael; Feigl, Kurt L.
2016-07-01
Graph theory is useful for analyzing time-dependent model parameters estimated from interferometric synthetic aperture radar (InSAR) data in the temporal domain. Plotting acquisition dates (epochs) as vertices and pair-wise interferometric combinations as edges defines an incidence graph. The edge-vertex incidence matrix and the normalized edge Laplacian matrix are factors in the covariance matrix for the pair-wise data. Using empirical measures of residual scatter in the pair-wise observations, we estimate the relative variance at each epoch by inverting the covariance of the pair-wise data. We evaluate the rank deficiency of the corresponding least-squares problem via the edge-vertex incidence matrix. We implement our method in a MATLAB software package called GraphTreeTA available on GitHub (https://github.com/feigl/gipht). We apply temporal adjustment to the data set described in Lu et al. (Geophys Res Solid Earth 110, 2005) at Okmok volcano, Alaska, which erupted most recently in 1997 and 2008. The data set contains 44 differential volumetric changes and uncertainties estimated from interferograms between 1997 and 2004. Estimates show that approximately half of the magma volume lost during the 1997 eruption was recovered by the summer of 2003. Between June 2002 and September 2003, the estimated rate of volumetric increase is (6.2 ± 0.6) × 10^6 m^3/year. Our preferred model provides a reasonable fit that is compatible with viscoelastic relaxation in the five years following the 1997 eruption. Although we demonstrate the approach using volumetric rates of change, our formulation in terms of incidence graphs applies to any quantity derived from pair-wise differences, such as range change, range gradient, or atmospheric delay.
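The core of temporal adjustment — building the edge-vertex incidence matrix from pairwise combinations and solving a least-squares problem whose rank deficiency is removed by fixing a datum epoch — can be sketched in a few lines (toy epoch values, unit weights; the published package handles weighting by the data covariance):

```python
import numpy as np

def temporal_adjustment(n_epochs, pairs, observations):
    """Least-squares adjustment of pairwise differences on an incidence graph.
    pairs: list of (i, j) epoch indices; observations: measured value[j] - value[i].
    For a connected graph on n vertices the incidence matrix has rank n - 1,
    so epoch 0 is fixed to 0 as the datum."""
    B = np.zeros((len(pairs), n_epochs))
    for k, (i, j) in enumerate(pairs):
        B[k, i], B[k, j] = -1.0, 1.0      # edge-vertex incidence matrix
    x_rest, *_ = np.linalg.lstsq(B[:, 1:], np.asarray(observations, float),
                                 rcond=None)
    return np.concatenate([[0.0], x_rest])

# four epochs, five pairwise combinations (an over-determined graph)
true = np.array([0.0, 2.0, 3.5, 5.0])
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
obs = [true[j] - true[i] for i, j in pairs]
est = temporal_adjustment(4, pairs, obs)
```

With consistent observations the epoch values are recovered exactly; with noisy interferometric data the same normal equations distribute the misfit across the graph.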
Feature selection for multi-class problems by using pairwise-class and all-class techniques
You, Mingyu; Li, Guo-Zheng
2011-05-01
Feature selection has been a key technology in massive data processing, e.g. in microarray data analysis with few samples but high-dimensional genes. One common problem in multi-class microarray data analysis is unbalanced recognition or prediction accuracy among classes, which usually leads to poor system performance. One of the main reasons is a feature (gene) selection method that treats the classes unfairly. In this paper, a novel feature selection framework using pairwise-class and all-class techniques (namely FrPA) is proposed to balance the performance among classes and improve the average accuracy. The feature (gene) rank list on all classes and the lists on each pair of classes are all taken into consideration during feature selection. A round-robin strategy is embedded into the framework to select the final features from the different rank lists. Experimental results on several microarray data sets show that FrPA helps to achieve higher classification accuracy and balance the performance among classes.
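The round-robin merge of rank lists described above can be sketched as follows. The abstract does not give the exact mechanics, so the cycling order and duplicate handling below are illustrative assumptions, as are the toy gene names:

```python
def round_robin_select(rank_lists, k):
    """Pick k features by cycling through several rank lists (e.g. one
    all-class list plus one list per class pair), taking each list's
    highest-ranked feature not yet chosen."""
    selected, seen = [], set()
    pointers = [0] * len(rank_lists)
    while len(selected) < k:
        progressed = False
        for li, lst in enumerate(rank_lists):
            if len(selected) == k:
                break
            # skip features already selected from another list
            while pointers[li] < len(lst) and lst[pointers[li]] in seen:
                pointers[li] += 1
            if pointers[li] < len(lst):
                f = lst[pointers[li]]
                selected.append(f)
                seen.add(f)
                pointers[li] += 1
                progressed = True
        if not progressed:
            break                          # all lists exhausted
    return selected

# hypothetical rank lists: one over all classes, two over class pairs
all_class = ["g5", "g1", "g7", "g2"]
pair_ab   = ["g1", "g3", "g5"]
pair_ac   = ["g4", "g1", "g6"]
feats = round_robin_select([all_class, pair_ab, pair_ac], k=5)
```

Because every list contributes in turn, features that matter only for one hard-to-separate class pair still make the final set, which is the balancing effect the framework is after.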
Makuch, Karol; Heinen, Marco; Abade, Gustavo Coelho; Nägele, Gerhard
To the present day, the Beenakker-Mazur (BM) method is the most comprehensive statistical physics approach to the calculation of short-time transport properties of colloidal suspensions. A revised version of the BM method with an improved treatment of hydrodynamic interactions is presented and evaluated regarding the rotational short-time self-diffusion coefficient, $D^r$, of suspensions of charged particles interacting by a hard-sphere plus screened Coulomb (Yukawa) pair potential. To assess the accuracy of the method, elaborate simulations of $D^r$ have been performed, covering a broad range of interaction parameters and particle concentrations. The revised BM method is compared in addition with results by a simplifying pairwise additivity (PA) method in which the hydrodynamic interactions are treated on a two-body level. The static pair correlation functions required as input to both theoretical methods are calculated using the Rogers-Young integral equation scheme. While the revised BM method reproduces the general trends of the simulation results, it systematically and significantly underestimates the rotational diffusion coefficient. The PA method agrees well with the simulation data at lower volume fractions, but at higher concentrations $D^r$ is likewise underestimated. For a fixed value of the pair potential at mean particle distance comparable to the thermal energy, $D^r$ increases strongly with increasing Yukawa potential screening parameter.
Li, Jicun; Wang, Feng
2017-02-01
A pairwise additive atomistic potential was developed for modeling liquid water on graphene. The graphene-water interaction terms were fit to map the PAW-PBE-D3 potential energy surface using the adaptive force matching method. Through condensed phase force matching, the potential developed implicitly considers the many-body effects of water. With this potential, the graphene-water contact angle was determined to be 86° in good agreement with a recent experimental measurement of 85° ± 5° on fully suspended graphene. Furthermore, the PAW-PBE-D3 based model was used to study contact line hysteresis. It was found that the advancing and receding contact angles of water do agree on pristine graphene; however, a long simulation time was required to reach the equilibrium contact angle. For water on suspended graphene, sharp peaks in the water density profile disappear when the flexibility of graphene is explicitly considered. The water droplet induces graphene to wrap around it, leading to a slightly concave contact interface.
Nadine Steckling
2017-01-01
In artisanal small-scale gold mining, mercury is used for gold extraction, putting miners and nearby residents at risk of chronic metallic mercury vapor intoxication (CMMVI). Burden of disease (BoD) analyses allow the estimation of the public health relevance of CMMVI, but until now there have been no specific CMMVI disability weights (DWs). The objective is to derive DWs for moderate and severe CMMVI. Disease-specific and generic health state descriptions of 18 diseases were used in a pairwise comparison survey. Mercury and BoD experts were invited to participate in an online survey. Data were analyzed using probit regression. Local regression was used to make the DWs comparable to the Global Burden of Disease (GBD) study. Alternative survey (visual analogue scale) and data analysis (linear interpolation) approaches were evaluated in scenario analyses. A total of 105 participants completed the questionnaire. DWs for moderate and severe CMMVI were 0.368 (0.261–0.484) and 0.588 (0.193–0.907), respectively. Scenario analyses resulted in higher mean values. The results are limited by the sample size, the group of interviewees, the questionnaire extent, and the lack of generally accepted health state descriptions. DWs were derived to improve the data basis of mercury-related BoD estimates, providing useful information for policy-making. Integration of the results into the GBD DWs enhances comparability.
Xin Yi Ng
2015-01-01
This study concerns an attempt to establish a new method for predicting antimicrobial peptides (AMPs), which are important to the immune system. Recently, researchers have become interested in designing alternative drugs based on AMPs because a large number of bacterial strains have become resistant to available antibiotics. However, researchers have encountered obstacles in the AMP design process, as experiments to extract AMPs from protein sequences are costly and require a long set-up time. Therefore, a computational tool for AMP prediction is needed to resolve this problem. In this study, an integrated algorithm is newly introduced to predict AMPs by integrating sequence alignment and a support vector machine (SVM)-LZ complexity pairwise algorithm. It was observed that, when all sequences in the training set are used, the sensitivity of the proposed algorithm is 95.28% in the jackknife test and 87.59% in the independent test, while the sensitivity obtained for the jackknife test and the independent test is 88.74% and 78.70%, respectively, when only the sequences that have less than 70% similarity are used. Applying the proposed algorithm may allow researchers to effectively predict AMPs from unknown protein peptide sequences with higher sensitivity.
The atom-surface interaction potential for He-NaCl: A model based on pairwise additivity
Hutson, Jeremy M.; Fowler, P. W.
1986-08-01
The recently developed semi-empirical model of Fowler and Hutson is applied to the He-NaCl atom-surface interaction potential. Ab initio self-consistent field calculations of the repulsive interactions between He atoms and in-crystal Cl− and Na+ ions are performed. Dispersion coefficients involving in-crystal ions are also calculated. The atom-surface potential is constructed using a model based on pairwise additivity of atom-ion forces. With a small adjustment of the repulsive part, this potential gives good agreement with the experimental bound state energies obtained from selective adsorption resonances in low-energy atom scattering experiments. Close-coupling calculations of the resonant scattering are performed, and good agreement with the experimental peak positions and intensity patterns is obtained. It is concluded that there are no bound states deeper than those observed in the selective adsorption experiments, and that the well depth of the He-NaCl potential is 6.0 ± 0.2 meV.
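The pairwise-additivity construction described here can be sketched generically: the atom-surface potential is a lattice sum of atom-ion pair potentials over the ions of the surface. A minimal sketch under assumed Born-Mayer-plus-dispersion pair terms; the parameters and the NaCl(001) geometry are illustrative placeholders, not the fitted He-NaCl values:

```python
import numpy as np

def pair_potential(r, A, alpha, C6):
    """Illustrative atom-ion pair potential: Born-Mayer repulsion plus -C6/r^6 dispersion."""
    return A * np.exp(-alpha * r) - C6 / r**6

def surface_potential(z, lattice_const=3.96, n_cells=8):
    """Pairwise-additive atom-surface potential V(z) above an on-top ion site:
    the sum of pair potentials over the ions of the top layer of a square
    rocksalt (001) lattice. All parameter values are placeholders."""
    params = {"Na": (50.0, 3.5, 1.0), "Cl": (120.0, 3.2, 4.0)}  # (A, alpha, C6), illustrative
    a = lattice_const / 2.0  # nearest-neighbor ion spacing in the (001) plane
    V = 0.0
    for i in range(-n_cells, n_cells + 1):
        for j in range(-n_cells, n_cells + 1):
            r = np.sqrt((i * a)**2 + (j * a)**2 + z**2)
            ion = "Na" if (i + j) % 2 == 0 else "Cl"  # alternating ion types
            V += pair_potential(r, *params[ion])
    return V
```

The same lattice sum, evaluated on a grid of lateral positions, yields the corrugation of the potential across the surface.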
Pfeiffenberger, Erik; Chaleil, Raphael A G; Moal, Iain H; Bates, Paul A
2017-03-01
Reliable identification of near-native poses of docked protein-protein complexes is still an unsolved problem. The intrinsic heterogeneity of protein-protein interactions is challenging for traditional biophysical or knowledge based potentials and the identification of many false positive binding sites is not unusual. Often, ranking protocols are based on initial clustering of docked poses followed by the application of an energy function to rank each cluster according to its lowest energy member. Here, we present an approach of cluster ranking based not only on one molecular descriptor (e.g., an energy function) but also employing a large number of descriptors that are integrated in a machine learning model, whereby, an extremely randomized tree classifier based on 109 molecular descriptors is trained. The protocol is based on first locally enriching clusters with additional poses, the clusters are then characterized using features describing the distribution of molecular descriptors within the cluster, which are combined into a pairwise cluster comparison model to discriminate near-native from incorrect clusters. The results show that our approach is able to identify clusters containing near-native protein-protein complexes. In addition, we present an analysis of the descriptors with respect to their power to discriminate near native from incorrect clusters and how data transformations and recursive feature elimination can improve the ranking performance. Proteins 2017; 85:528-543. © 2016 Wiley Periodicals, Inc.
Ramli, Rohaini; Kasim, Maznah Mat; Ramli, Razamin; Kayat, Kalsom; Razak, Rafidah Abd
2014-12-01
The Ministry of Tourism and Culture Malaysia has long introduced homestay programs across the country to enhance the quality of life of people, especially those living in rural areas. This type of program is classified as community-based tourism (CBT), as it is expected to economically improve livelihoods through cultural and community-associated activities. It is the aspiration of the ministry to see the income imbalance between people in rural and urban areas reduced, which would contribute towards creating more developed states of Malaysia. Since the 1970s, 154 homestay programs have been registered with the ministry. However, the performance and sustainability of the programs are still not satisfactory; only a few homestay programs perform well and are able to sustain themselves. Thus, the aim of this paper is to identify relevant criteria contributing to the sustainability of a homestay program. The criteria are evaluated for their levels of importance via a modified pairwise method and analyzed for other potentials. The findings will help homestay operators to focus on the necessary criteria and thus perform effectively as a CBT business initiative.
Hongyan Zhang
2014-01-01
In efforts to discover disease mechanisms and improve clinical diagnosis of tumors, it is useful to mine profiles for informative genes with definite biological meanings and to build robust classifiers with high precision. In this study, we developed a new method for tumor-gene selection, the Chi-square test-based integrated rank gene and direct classifier (χ2-IRG-DC). First, we obtained the weighted integrated rank of gene importance from chi-square tests of single and pairwise gene interactions. Then, we sequentially introduced the ranked genes and removed redundant genes by using leave-one-out cross-validation of the chi-square test-based Direct Classifier (χ2-DC) within the training set to obtain informative genes. Finally, we determined the accuracy of independent test data by utilizing the genes obtained above with χ2-DC. Furthermore, we analyzed the robustness of χ2-IRG-DC by comparing the generalization performance of different models, the efficiency of different feature-selection methods, and the accuracy of different classifiers. An independent test on ten multiclass tumor gene-expression datasets showed that χ2-IRG-DC could efficiently control overfitting and had higher generalization performance. The informative genes selected by χ2-IRG-DC could dramatically improve the independent test precision of other classifiers; meanwhile, the informative genes selected by other feature-selection methods also performed well in χ2-DC.
Fačkovec, Boris; Vondrášek, Jiří
2012-10-25
Although a contact is an essential measurement for the topology as well as strength of non-covalent interactions in biomolecules and their complexes, there is no general agreement in the definition of this feature. Most of the definitions work with simple geometric criteria which do not fully reflect the energy content or ability of the biomolecular building blocks to arrange their environment. We offer a reasonable solution to this problem by distinguishing between "productive" and "non-productive" contacts based on their interaction energy strength and properties. We have proposed a method which converts the protein topology into a contact map that represents interactions with statistically significant high interaction energies. We do not prove that these contacts are exclusively stabilizing, but they represent a gateway to thermodynamically important rather than geometry-based contacts. The process is based on protein fragmentation and calculation of interaction energies using the OPLS force field and relies on pairwise additivity of amino acid interactions. Our approach integrates the treatment of different types of interactions, avoiding the problems resulting from different contributions to the overall stability and the different effect of the environment. The first applications on a set of homologous proteins have shown the usefulness of this classification for a sound estimate of protein stability.
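The central idea of this abstract, replacing geometric contact criteria with an energy-based one, can be sketched as a simple thresholding of a pairwise interaction-energy matrix: flag as "productive" only those pairs whose interaction energy is statistically significantly strong. This is a generic sketch of the principle, not the authors' OPLS-based fragmentation pipeline:

```python
import numpy as np

def energy_contact_map(E, z_cut=2.0):
    """Convert a symmetric (n, n) pairwise interaction-energy matrix into a map of
    'productive' contacts: pairs whose energy lies far below the mean of all pair
    energies (strongly attractive outliers). z_cut is an illustrative threshold."""
    iu = np.triu_indices_from(E, k=1)
    vals = E[iu]                      # unique residue pairs
    mu, sigma = vals.mean(), vals.std()
    z = (E - mu) / sigma              # z-score each pair energy
    contacts = z < -z_cut             # significantly strong (attractive) pairs only
    np.fill_diagonal(contacts, False)
    return contacts
```

In practice the threshold would be calibrated against a statistical model of the energy distribution rather than a fixed z-score.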
Daud, Shahidah Md; Ramli, Razamin; Kasim, Maznah Mat; Kayat, Kalsom; Razak, Rafidah Abd
2014-12-01
The tourism industry has become a highlighted sector that has greatly increased the national income level. Despite the tourism industry being one of the highest income-generating sectors, the Homestay Programme as a Community-Based Tourism (CBT) product in Malaysia has not absorbed much of the incoming wealth. A Homestay Programme refers to a programme in a community where a tourist stays with a host family and experiences the everyday way of life of the family in both a direct and indirect manner. There are over 100 Homestay Programmes currently registered with the Ministry of Culture and Tourism Malaysia, most of which are located in rural areas, but only a few excel and enjoy the fruits of the booming industry. Hence, this article seeks to identify the critical success factors for a community-based rural Homestay Programme in Malaysia. A modified pairwise method is utilized to further evaluate the identified success factors in a more meaningful way. The findings will help Homestay Programmes function as a community development tool that manages tourism resources, and thus help the community in improving the local economy and creating job opportunities.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to derive the iterative formula of the error-prediction filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
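The Levinson step behind the error-prediction filter can be sketched as follows. This is a generic Levinson-Durbin solver for the Toeplitz normal equations, not the authors' receiver-function code; note that stability corresponds to every reflection coefficient having magnitude below 1, as the abstract states:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson recursion for the Toeplitz normal equations of a prediction-error
    filter, as used in maximum-entropy spectral methods.
    r: autocorrelation lags r[0..order].
    Returns (filter coefficients a, reflection coefficients k)."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = float(r[0])          # prediction-error power
    ks = []
    for m in range(1, order + 1):
        # correlation of the current filter with the next lag
        acc = r[m] + sum(a[i] * r[m - i] for i in range(1, m))
        k = -acc / err          # reflection coefficient; |k| < 1 keeps recursion stable
        ks.append(k)
        a_prev = a.copy()
        for i in range(1, m + 1):   # Levinson update of the filter
            a[i] = a_prev[i] + k * a_prev[m - i]
        err *= (1.0 - k * k)
    return a, ks
```

For an AR(1) autocorrelation r[m] = ρ^m the recursion recovers the one-tap predictor a = [1, -ρ, 0, ...], with all higher reflection coefficients zero.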
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each quantity (voltage at maximum power, current at maximum power, and maximum power) is plotted as a function of the time of day.
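The differentiation step described above can be sketched numerically: for an ideal single-diode cell model, set the derivative of power to zero and solve for the voltage at maximum power. The model and all parameter values below are illustrative placeholders, not taken from the article:

```python
import math

def max_power_point(I_L=5.0, I_0=1e-9, V_T=0.026):
    """Locate the maximum-power point of an ideal single-diode solar-cell model
    I(V) = I_L - I_0*(exp(V/V_T) - 1) by solving dP/dV = 0 with bisection.
    I_L: photocurrent (A), I_0: saturation current (A), V_T: thermal voltage (V)."""
    def current(v):
        return I_L - I_0 * (math.exp(v / V_T) - 1.0)
    def dPdV(v):
        # P = v * I(v), so dP/dV = I(v) + v * dI/dV
        return current(v) - v * (I_0 / V_T) * math.exp(v / V_T)
    # bracket between short circuit (V=0) and open circuit (I=0)
    lo, hi = 0.0, V_T * math.log(I_L / I_0 + 1.0)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if dPdV(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    v_mp = 0.5 * (lo + hi)
    return v_mp, current(v_mp), v_mp * current(v_mp)
```

Repeating the calculation with the photocurrent varied over the day reproduces the kind of time-of-day curves the project describes.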
Makuch, Karol; Heinen, Marco; Abade, Gustavo Coelho; Nägele, Gerhard
2015-07-14
We present a comprehensive joint theory-simulation study of rotational self-diffusion in suspensions of charged particles whose interactions are modeled by the generic hard-sphere plus repulsive Yukawa (HSY) pair potential. Elaborate, high-precision simulation results for the short-time rotational self-diffusion coefficient, D(r), are discussed covering a broad range of fluid-phase state points in the HSY model phase diagram. The salient trends in the behavior of D(r) as a function of reduced potential strength and range, and particle concentration, are systematically explored and physically explained. The simulation results are further used to assess the performance of two semi-analytic theoretical methods for calculating D(r). The first theoretical method is a revised version of the classical Beenakker-Mazur method (BM) adapted to rotational diffusion which includes a highly improved treatment of the salient many-particle hydrodynamic interactions. The second method is an easy-to-implement pairwise additivity (PA) method in which the hydrodynamic interactions are treated on a full two-body level with lubrication corrections included. The static pair correlation functions required as the only input to both theoretical methods are calculated using the accurate Rogers-Young integral equation scheme. While the revised BM method reproduces the general trends of the simulation results, it significantly underestimates D(r). In contrast, the PA method agrees well with the simulation results for D(r) even for intermediately concentrated systems. A simple improvement of the PA method is presented which is applicable for large concentrations.
Bøtker, Johan P; Karmwar, Pranav; Strachan, Clare J; Cornett, Claus; Tian, Fang; Zujovic, Zoran; Rantanen, Jukka; Rades, Thomas
2011-09-30
The aim of this study was to investigate the usefulness of the atomic pair-wise distribution function (PDF) to detect the extension of disorder/amorphousness induced into a crystalline drug using a cryo-milling technique, and to determine the optimal milling times to achieve amorphisation. The PDF analysis was performed on samples of indomethacin obtained by cryogenic ball milling (cryo-milling) for different periods of time. X-ray powder diffraction (XRPD), differential scanning calorimetry (DSC), polarised light microscopy (PLM) and solid state nuclear magnetic resonances (ss-NMR) were also used to analyse the cryo-milled samples. The high similarity between the γ-indomethacin cryogenic ball milled samples and the crude γ-indomethacin indicated that milled samples retained residual order of the γ-form. The PDF analysis encompassed the capability of achieving a correlation with the physical properties determined from DSC, ss-NMR and stability experiments. Multivariate data analysis (MVDA) was used to visualize the differences in the PDF and XRPD data. The MVDA approach revealed that PDF is more efficient in assessing the introduced degree of disorder in γ-indomethacin after cryo-milling than MVDA of the corresponding XRPD diffractograms. The PDF analysis was able to determine the optimal cryo-milling time that facilitated the highest degree of disorder in the samples. Therefore, it is concluded that the PDF technique may be used as a complementary tool to other solid state methods and that further investigations are warranted to elucidate the capabilities of this technique.
de Lara-Castells, María Pilar; Fernández-Perea, Ricardo; Madzharova, Fani; Voloshina, Elena
2016-06-01
The adsorption of noble gases on metallic surfaces represents a paradigmatic case of van-der-Waals (vdW) interaction due to the role of screening effects on the corrugation of the interaction potential [J. L. F. Da Silva et al., Phys. Rev. Lett. 90, 066104 (2003)]. The extremely small adsorption energy of He atoms on the Mg(0001) surface (below 3 meV) and the delocalized nature and mobility of the surface electrons make the He/Mg(0001) system particularly challenging, even for state-of-the-art vdW-corrected density functional-based (vdW-DFT) approaches [M. P. de Lara-Castells et al., J. Chem. Phys. 143, 194701 (2015)]. In this work, we meet this challenge by applying two different procedures. First, the dispersion-corrected second-order Möller-Plesset perturbation theory (MP2C) approach is adopted, using bare metal clusters of increasing size. Second, the method of increments [H. Stoll, J. Chem. Phys. 97, 8449 (1992)] is applied at coupled cluster singles and doubles and perturbative triples level, using embedded cluster models of the metal surface. Both approaches provide clear evidences of the anti-corrugation of the interaction potential: the He atom prefers on-top sites, instead of the expected hollow sites. This is interpreted as a signature of the screening of the He atom by the metal for the on-top configuration. The strong screening in the metal is clearly reflected in the relative contribution of successively deeper surface layers to the main dispersion contribution. Aimed to assist future dynamical simulations, a pairwise potential model for the He/surface interaction as a sum of effective He-Mg pair potentials is also presented, as an improvement of the approximation using isolated He-Mg pairs.
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results of the samples, the whole length of the CCs used in the design of an SFCL can be determined.
Lewis, Mark L; Cucurull-Sanchez, Lourdes
2009-02-01
Data mining by pairwise comparison of over 150,000 human liver microsome (HLM) intrinsic clearance values stored within the internal Pfizer database has been performed by an automated tool. Systematic probability tables of the effect of specific structural changes on the intrinsic clearance of phenyl derivatives have been generated. From these data two new parameters, the Pfizer Metabolism Index (PMI) and Metabolism-Lipophilicity Efficiency (MLE), are introduced for each fragment. The findings are applied to a Topliss-style analysis that focuses on metabolic stability.
吴永锋
2015-01-01
In this paper, the author studies the Marcinkiewicz-Zygmund inequality for pairwise negative quadrant dependent (NQD) random variables and its applications. By using the truncation method, the author obtains the Marcinkiewicz-Zygmund inequality with exponent p (1≤p<2) for pairwise NQD random variables. As applications, the author obtains simpler proofs of two Lr convergence results for pairwise NQD random variables, which improve the corresponding work of Chen [10] and Sung [20], respectively.
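For orientation, the inequality in question has the standard Marcinkiewicz-Zygmund form: for mean-zero pairwise NQD random variables X_1, ..., X_n and 1 ≤ p < 2, there is a constant C_p depending only on p such that

```latex
E\left|\sum_{i=1}^{n} X_i\right|^{p} \;\le\; C_p \sum_{i=1}^{n} E|X_i|^{p}, \qquad 1 \le p < 2 .
```

This is the generic form of the bound as stated in the literature for this setting, not a quotation of the paper's theorem.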
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
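In the Boltzmann-Gibbs-Shannon case, the separation of a divergence into cross minus diagonal entropy mentioned above takes the familiar Kullback-Leibler form (a standard identity, stated here for orientation rather than quoted from the paper):

```latex
D(p \,\|\, q)
\;=\;
\underbrace{-\int p \log q \, d\mu}_{\text{cross entropy } H(p,q)}
\;-\;
\underbrace{\left(-\int p \log p \, d\mu\right)}_{\text{diagonal entropy } H(p)}
\;\ge\; 0 .
```

Minimizing the divergence over q at fixed p then amounts to minimizing the cross-entropy term, which is the duality the abstract develops for the generalized (e.g. Tsallis) case.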
Pairwise and higher-order correlations among drug-resistance mutations in HIV-1 subtype B protease
Morozov Alexandre V
2009-08-01
individually smaller but may have a collective effect. Together they lead to correlations which could have an important impact on the dynamics of the evolution of cross-resistance, by allowing the virus to pass through otherwise unlikely mutational states. These findings also indicate that pairwise and possibly higher-order effects should be included in the models of protein evolution, instead of assuming that all residues mutate independently of one another.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.)
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
Maximum entropy reconstructions of dynamic signaling networks from quantitative proteomics data.
Locasale, Jason W; Wolf-Yadlin, Alejandro
2009-08-26
Advances in mass spectrometry among other technologies have allowed for quantitative, reproducible, proteome-wide measurements of levels of phosphorylation as signals propagate through complex networks in response to external stimuli under different conditions. However, computational approaches to infer elements of the signaling network strictly from the quantitative aspects of proteomics data are not well established. We considered a method using the principle of maximum entropy to infer a network of interacting phosphotyrosine sites from pairwise correlations in a mass spectrometry data set and derive a phosphorylation-dependent interaction network solely from quantitative proteomics data. We first investigated the applicability of this approach by using a simulation of a model biochemical signaling network whose dynamics are governed by a large set of coupled differential equations. We found that in a simulated signaling system, the method detects interactions with significant accuracy. We then analyzed a growth factor mediated signaling network in a human mammary epithelial cell line that we inferred from mass spectrometry data and observe a biologically interpretable, small-world structure of signaling nodes, as well as a catalog of predictions regarding the interactions among previously uncharacterized phosphotyrosine sites. For example, the calculation places a recently identified tumor suppressor pathway through ARHGEF7 and Scribble, in the context of growth factor signaling. Our findings suggest that maximum entropy derived network models are an important tool for interpreting quantitative proteomics data.
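The inference principle described above, deriving direct interactions from pairwise correlations via a maximum-entropy model, can be sketched in its Gaussian form: the maximum-entropy distribution consistent with observed means and covariances is Gaussian, and its couplings are the off-diagonal entries of the inverse covariance (precision) matrix. This is a generic sketch of the principle, not the authors' pipeline; all names and the threshold are illustrative:

```python
import numpy as np

def maxent_network(data, threshold=0.2):
    """Infer direct pairwise links under a Gaussian maximum-entropy model.
    data: (n_samples, n_sites) array, e.g. phosphosite intensities.
    Couplings are read off the precision matrix, rescaled to partial
    correlations; entries above the threshold are reported as direct links."""
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    partial = -prec / np.outer(d, d)    # partial-correlation matrix
    np.fill_diagonal(partial, 1.0)
    adj = np.abs(partial) > threshold   # adjacency of inferred direct interactions
    np.fill_diagonal(adj, False)
    return adj
```

Unlike raw correlations, the precision-based couplings distinguish direct links from correlations mediated by a third site: in a chain x0 → x1 → x2, the 0-2 partial correlation vanishes even though x0 and x2 are strongly correlated.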
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to search for the distribution functions of physical values. MENT naturally takes into consideration the demand of maximum entropy, the characteristics of the system, and the connection conditions. It can be applied to the statistical description of closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm definitely female, while for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora, and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
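The demarking-point analysis above amounts to a simple threshold classifier. As a sketch, using the right-femur cut-offs quoted in the abstract (lengths in millimetres):

```python
# Demarking-point rule for right femora, using the cut-offs reported above:
# maximum length > 476.70 mm -> definitely male, < 379.99 mm -> definitely
# female; lengths in between cannot be sexed by this measurement alone.
def classify_right_femur(max_length_mm):
    if max_length_mm > 476.70:
        return "male"
    if max_length_mm < 379.99:
        return "female"
    return "indeterminate"

print(classify_right_femur(480.0))  # male
print(classify_right_femur(410.0))  # indeterminate
```

The wide indeterminate band is why the rule sexes only a small fraction of the sample (e.g. 13.43% of right male femora), as the abstract reports.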
Datta, Sumona; Shah, Lena; Gilman, Robert H; Evans, Carlton A
2017-08-01
The performance of laboratory tests to diagnose pulmonary tuberculosis is dependent on the quality of the sputum sample tested. The relative merits of sputum collection methods to improve tuberculosis diagnosis are poorly characterised. We therefore aimed to investigate the effects of sputum collection methods on tuberculosis diagnosis. We did a systematic review and meta-analysis to investigate whether non-invasive sputum collection methods in people aged at least 12 years improve the diagnostic performance of laboratory testing for pulmonary tuberculosis. We searched PubMed, Google Scholar, ProQuest, Web of Science, CINAHL, and Embase up to April 14, 2017, to identify relevant experimental, case-control, or cohort studies. We analysed data by pairwise meta-analyses with a random-effects model and by network meta-analysis. All diagnostic performance data were calculated at the sputum-sample level, except where authors only reported data at the individual patient-level. Heterogeneity was assessed, with potential causes identified by logistic meta-regression. We identified 23 eligible studies published between 1959 and 2017, involving 8967 participants who provided 19 252 sputum samples. Brief, on-demand spot sputum collection was the main reference standard. Pooled sputum collection increased tuberculosis diagnosis by microscopy (odds ratio [OR] 1·6, 95% CI 1·3-1·9, p<0·0001) or culture (1·7, 1·2-2·4, p=0·01). Providing instructions to the patient before sputum collection, during observed collection, or together with physiotherapy assistance increased diagnostic performance by microscopy (OR 1·6, 95% CI 1·3-2·0, p<0·0001). Collecting early morning sputum did not significantly increase diagnostic performance of microscopy (OR 1·5, 95% CI 0·9-2·6, p=0·2) or culture (1·4, 0·9-2·4, p=0·2). Network meta-analysis confirmed these findings, and revealed that both pooled and instructed spot sputum collections were similarly effective techniques for
Pairwise Sequences Search and Alignment Algorithm Based on Boolean Logic
Guo Ning; Feng Ping; Kang Jichang
2011-01-01
Traditional pairwise sequence alignment algorithms are mostly based on dynamic programming, which suffers from slow speed and limited accuracy. To address this, a pairwise sequence search and alignment algorithm based on Boolean logic is proposed in this paper. The algorithm searches for homologous regions between the two sequences using fixed-length base fragments from one sequence, and then performs the alignment between the homologous regions, including the alignment of the bases within the homologous regions and the alignment between the subsequences and the other sequence. It also uses a concurrent execution mechanism to achieve parallel speed-up. Simulation results show that the algorithm achieves higher accuracy and better real-time performance.
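The fragment-based search this abstract describes can be illustrated with a generic seed index: fixed-length fragments of one sequence are hashed, and matching positions in the other sequence mark candidate homologous regions. This is a minimal sketch of the general idea only, not the paper's Boolean-logic implementation:

```python
from collections import defaultdict

# Index every fixed-length fragment (k-mer) of seq_a, then scan seq_b for
# fragments that occur in the index; each hit (i, j) is a candidate
# homologous region anchored at position i in seq_a and j in seq_b.
def find_homologous_regions(seq_a, seq_b, k=4):
    index = defaultdict(list)
    for i in range(len(seq_a) - k + 1):
        index[seq_a[i:i + k]].append(i)
    hits = []
    for j in range(len(seq_b) - k + 1):
        for i in index.get(seq_b[j:j + k], []):
            hits.append((i, j))
    return hits

print(find_homologous_regions("ACGTACGT", "TTACGTAA"))
# [(3, 1), (0, 2), (4, 2), (1, 3)]
```

A full aligner would then extend and score each anchor; here the sketch stops at locating the shared fragments.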
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems generally show a continuously rising rotation curve out to the outermost measured radial position. For that reason a general relation has been derived, giving the maximum rotation of a disc as a function of the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour; that dependence is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. ...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element, defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. At a quenching duration of 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC, and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm, and 1.2 V/cm, respectively. Based on these sample results, the total length of CC needed in the design of an SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets (rooted binary trees on three leaves) or quartets (unrooted binary trees on four leaves). We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D, distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
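The bound described above (maximum seismic moment limited to injected volume times the modulus of rigidity) admits a quick order-of-magnitude check. The shear modulus and injected volume below are illustrative assumed values, and the moment-to-magnitude conversion is the standard Hanks-Kanamori relation, not something stated in the abstract:

```python
import math

G = 3e10   # Pa, a typical crustal modulus of rigidity (assumed value)
dV = 1e6   # m^3 of injected fluid (hypothetical wastewater-disposal project)

# Upper bound on seismic moment: M0 <= G * dV (in N*m), per the abstract.
M0_max = G * dV

# Convert moment to moment magnitude via the Hanks-Kanamori relation.
Mw_max = (2.0 / 3.0) * (math.log10(M0_max) - 9.1)
print(round(Mw_max, 1))  # 4.9
```

A million cubic metres of injected fluid thus bounds the induced event near magnitude 5, in line with the wastewater-disposal maxima the abstract reports.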
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding could help to decrease the impacts of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over one in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states.In this paper,we show that if incorrect copies are allowed to be produced,linearly dependent quantum states may also be cloned by the PQC.By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states,we derive the upper bound of the maximum confidence measure of a set.An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain-boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced ...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, to maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used ... We analyze two natural algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms.
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the maximum-likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified maximum-entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions ...
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving the satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. Detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. Detailed discussion on circuit-oriented model development is given and then MPPT effectiveness of various converter systems is verified through simulations. Proposed theory and analysis is validated through experimental investigations.
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. ...
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper, Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p...
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}\,y_{e}^{5})$.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
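Plugging standard reference values into the quoted relation gives a quick sanity check. The numbers below are textbook values assumed for illustration, not taken from the abstract:

```python
T_BBN = 1e-3     # GeV, temperature at the onset of Big Bang nucleosynthesis
M_pl  = 1.22e19  # GeV, Planck mass
y_e   = 2.9e-6   # electron Yukawa coupling

# v_h ~ T_BBN^2 / (M_pl * y_e^5), as quoted in the abstract
v_h = T_BBN**2 / (M_pl * y_e**5)
print(f"{v_h:.0f} GeV")  # a few hundred GeV, of order the observed weak scale
```

The result lands near the O(300 GeV) scale the paper quotes, which is the point of the estimate.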
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine the reliability of maximum phonation time as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of at most 6 weeks, three video recordings were made of each subject's five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times than healthy controls (on average, 6.6 seconds shorter). The averaged interclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
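The reported gains from aggregating trials and days behave like the Spearman-Brown prophecy formula for combining parallel measurements. This framing is my own; the paper may have derived its coefficients differently:

```python
# Spearman-Brown prophecy: reliability of the mean of k parallel
# measurements, given single-measurement reliability r.
def spearman_brown(r, k):
    return k * r / (1 + (k - 1) * r)

# Check against the values reported above (an assumed correspondence):
print(round(spearman_brown(0.939, 5), 3))  # 0.987, the five-trial value
print(round(spearman_brown(0.836, 2), 3))  # 0.911, the two-day value
```

Both predictions reproduce the reported coefficients, which supports reading the trial/day figures as parallel-measurement aggregates.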
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment ...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
... EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works ...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi-Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
Moreira, X; Pearse, I S
2017-05-01
Plant life-history strategies associated with resource acquisition and economics (e.g. leaf habit) are thought to be fundamental determinants of the traits and mechanisms that drive herbivore pressure, resource allocation to plant defensive traits, and the simultaneous expression (positive correlations) or trade-offs (negative correlations) between these defensive traits. In particular, it is expected that evergreen species - which usually grow slower and support constant herbivore pressure in comparison with deciduous species - will exhibit higher levels of both physical and chemical defences and a higher predisposition to the simultaneous expression of physical and chemical defensive traits. Here, by using a dataset which included 56 oak species (Quercus genus), we investigated whether leaf habit of plant species governs the investment in both physical and chemical defences and pair-wise correlations between these defensive traits. Our results showed that leaf habit does not determine the production of most leaf physical and chemical defences. Although evergreen oak species had higher levels of leaf toughness and specific leaf mass (physical defences) than deciduous oak species, both traits are essentially prerequisites for evergreenness. Similarly, our results also showed that leaf habit does not determine pair-wise correlations between defensive traits because most physical and chemical defensive traits were simultaneously expressed in both evergreen and deciduous oak species. Our findings indicate that leaf habit does not substantially contribute to oak species differences in plant defence investment. © 2017 German Botanical Society and The Royal Botanical Society of the Netherlands.
Meng-Hua Li
Genome-wide SNP data provide a powerful tool to estimate pairwise relatedness among individuals and individual inbreeding coefficients. The aim of this study was to compare methods for estimating the two parameters in a Finnsheep population based on genome-wide SNPs and on genealogies, separately. This study included ninety-nine Finnsheep in Finland that differed in coat colour (white, black, brown, grey, and black/white spotted) and were from a large pedigree comprising 319,119 animals. All the individuals were genotyped with the Illumina Ovine SNP50K BeadChip by the International Sheep Genomics Consortium. We identified three genetic subpopulations that corresponded approximately with the coat colours (grey, white, and black and brown) of the sheep. We detected a significant subdivision among the colour types (FST = 5.4%) and pairs of closely related animals (e.g. full- or half-sibs). Nevertheless, we also detected differences in the two parameters between the approaches, particularly with respect to the grey Finnsheep. This could be due to the smaller sample size and the relative incompleteness of the pedigree for them. We conclude that genome-wide SNP data can provide useful information on a per-sample or pairwise-sample basis in cases of complex genealogies or in the absence of genealogical data.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes from this class of maximum-entropy distributions when the constraints are purely kinematic.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
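A minimal variance-maximizing hashing baseline can be sketched with sign-of-PCA codes (this is a simple spectral relaxation for intuition, not the column-generation algorithm of the paper): project onto the top-variance directions and binarize.

```python
import numpy as np

def pca_hash(X, n_bits):
    """Sign-of-PCA hashing: project mean-centered data onto the
    n_bits highest-variance directions and threshold at zero."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs[:, np.argsort(eigvals)[::-1][:n_bits]]  # top-variance directions
    return (Xc @ W > 0).astype(np.uint8)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16)) * np.linspace(3.0, 0.1, 16)  # anisotropic data
codes = pca_hash(X, 4)
```

Because the projections maximize variance, the resulting bits tend to be well balanced (roughly half zeros, half ones), which is what keeps the code variance high.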
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyze the algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
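The single-constraint argument can be written out in two lines (a sketch following the abstract's description). Maximizing the Shannon entropy subject to normalization and a fixed mean of ln x,

```latex
\max_{p}\; -\!\int p(x)\ln p(x)\,dx
\quad \text{s.t.} \quad \int p(x)\,dx = 1, \qquad \int p(x)\ln x\,dx = \chi,
```

gives, by stationarity of the Lagrangian, \(-\ln p(x) - 1 + \alpha - \lambda \ln x = 0\), i.e. \(p(x) = Z^{-1} e^{-\lambda \ln x} = Z^{-1} x^{-\lambda}\): a pure power law whose exponent is fixed by the single constraint.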
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of its adjacency matrix. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
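The Estrada index itself is straightforward to compute from the adjacency spectrum; a minimal sketch (the triangle K3 has eigenvalues 2, -1, -1, so EE = e^2 + 2/e):

```python
import numpy as np

def estrada_index(A):
    """Estrada index EE(G) = sum_i exp(lambda_i), where lambda_i are the
    eigenvalues of the (symmetric) adjacency matrix A."""
    return float(np.sum(np.exp(np.linalg.eigvalsh(A))))

# Triangle K3: adjacency eigenvalues are 2, -1, -1.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
ee = estrada_index(A)   # e^2 + 2*e^-1
```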
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
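The numbers quoted above can be turned into a schematic version of the clade-maximum-rate computation (illustrative only; the paper's metric is defined more carefully over phylogenetic comparisons, and the masses here are hypothetical):

```python
import numpy as np

# (ancestor_mass_g, descendant_mass_g, elapsed_generations), using the
# abstract's minimum generation counts for 100-, 1,000-, and 5,000-fold
# increases in terrestrial mammal mass.
pairs = [
    (20.0, 2_000.0, 1.6e6),     # 100-fold increase
    (20.0, 20_000.0, 5.1e6),    # 1,000-fold increase
    (20.0, 100_000.0, 1.0e7),   # 5,000-fold increase
]

# Rate = orders of magnitude of mass change per generation; the clade
# maximum rate is the largest such rate observed within the clade.
rates = [abs(np.log10(d / a)) / g for a, d, g in pairs]
clade_max_rate = max(rates)
```

On these inputs the 100-fold transition gives the highest per-generation rate, consistent with the intuition that smaller transformations can proceed proportionally faster.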
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach[gr-qc/9504004], Cai et al [hep-th/0501055,hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend Akbar--Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulted Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density closed to Planck density. Allowing for a general continuous pressure $p(\\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point which leaves the big bang singularity inaccessible from a spacetime prospective. The existence of maximum energy density and a general nonsingular evolution is independent of the equation of state and the spacial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (YX/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: Xmax - X0 = (0.59 ± 0.02)·YX/P·C.
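The prediction equation at the end of the abstract is a one-liner; a sketch with hypothetical inputs (the coefficient 0.59 is from the study, but the inoculum, yield, and MIC values below are made up for illustration):

```python
def predict_max_biomass(x0, y_xp, mic, k=0.59):
    """Predicted maximum biomass from Xmax - X0 = (0.59 +/- 0.02) * YX/P * C,
    where x0 is the initial biomass, y_xp the biomass yield per unit lactate,
    and mic the minimum inhibitory concentration of lactate (C)."""
    return x0 + k * y_xp * mic

# Hypothetical values, consistent units assumed.
xmax = predict_max_biomass(x0=0.1, y_xp=0.05, mic=200.0)
```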
Weinreich, Daniel M; Knies, Jennifer L
2013-10-01
The functional synthesis uses experimental methods from molecular biology, biochemistry and structural biology to decompose evolutionarily important mutations into their more proximal mechanistic determinants. However these methods are technically challenging and expensive. Noting strong formal parallels between R.A. Fisher's geometric model of adaptation and a recent model for the phenotypic basis of protein evolution, we sought to use the former to make inferences into the latter using data on pairwise fitness epistasis between mutations. We present an analytic framework for classifying pairs of mutations with respect to similarity of underlying mechanism on this basis, and also show that these data can yield an estimate of the number of mutationally labile phenotypes underlying fitness effects. We use computer simulations to explore the robustness of our approach to violations of analytic assumptions and analyze several recently published datasets. This work provides a theoretical complement to the functional synthesis as well as a novel test of Fisher's geometric model.
Deveau, Aurélie; Barret, Matthieu; Diedhiou, Abdala G; Leveau, Johan; de Boer, Wietse; Martin, Francis; Sarniguet, Alain; Frey-Klett, Pascale
2015-01-01
Ectomycorrhizal fungi are surrounded by bacterial communities with which they interact physically and metabolically during their life cycle. These bacteria can have positive or negative effects on the formation and the functioning of ectomycorrhizae. However, relatively little is known about the mechanisms by which ectomycorrhizal fungi and associated bacteria interact. To understand how ectomycorrhizal fungi perceive their biotic environment and the mechanisms supporting interactions between ectomycorrhizal fungi and soil bacteria, we analysed the pairwise transcriptomic responses of the ectomycorrhizal fungus Laccaria bicolor (Basidiomycota: Agaricales) when confronted with beneficial, neutral or detrimental soil bacteria. Comparative analyses of the three transcriptomes indicated that the fungus reacted differently to each bacterial strain. Similarly, each bacterial strain produced a specific and distinct response to the presence of the fungus. Despite these differences in responses observed at the gene level, we found common classes of genes linked to cell-cell interaction, stress response and metabolic processes to be involved in the interaction of the four microorganisms.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
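The hypothesis-selection step can be sketched as a correlation decoder (a simplification: the patent's MAP estimator-correlator also estimates the random phase perturbations, which is omitted here; the phase codes below are made-up examples):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothesized phase-coded signals (hypothetical codes).
phase_codes = {
    0: np.array([0.0, 0.0, np.pi, np.pi, 0.0, np.pi, 0.0, 0.0]),
    1: np.array([0.0, np.pi, 0.0, np.pi, np.pi, 0.0, 0.0, np.pi]),
}
signals = {k: np.exp(1j * p) for k, p in phase_codes.items()}

def decode(received):
    """Pick the hypothesis with the largest correlation statistic
    (ML under additive Gaussian noise)."""
    stats = {k: np.abs(np.vdot(s, received)) for k, s in signals.items()}
    return max(stats, key=stats.get)

true_symbol = 1
noise = 0.3 * (rng.normal(size=8) + 1j * rng.normal(size=8))
decoded = decode(signals[true_symbol] + noise)
```

The highest-valued statistic identifies the transmitted signal, mirroring the selection rule described in the abstract.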
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
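The MEC step can be sketched in input space (a simplification: KMEC runs the same updates in the kernel-induced feature space, and the data, initial centers, and beta below are illustrative): memberships are the maximum entropy distribution given expected distortion, u_ij ∝ exp(-beta * d_ij), and centers are membership-weighted means.

```python
import numpy as np

def mec(X, centers, beta=5.0, n_iter=50):
    """Maximum entropy clustering in input space: soft memberships
    u_ij ∝ exp(-beta * ||x_i - c_j||^2), then weighted center updates."""
    centers = centers.copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        u = np.exp(-beta * (d - d.min(axis=1, keepdims=True)))  # shift for stability
        u /= u.sum(axis=1, keepdims=True)
        centers = (u.T @ X) / u.sum(axis=0)[:, None]
    return u.argmax(axis=1), centers

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # cluster near (0, 0)
               rng.normal(4.0, 0.3, (50, 2))])  # cluster near (4, 4)
init = np.array([[1.0, 1.0], [3.0, 3.0]])       # rough initial centers
labels, centers = mec(X, init)
```

The temperature-like parameter beta controls how hard the assignments are; as beta grows the update approaches k-means.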
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
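The piecewise-linear trick can be reproduced on a toy problem (a sketch with a modern LP solver rather than the paper's revised simplex; segment counts and sizes are arbitrary): since f(p) = -p ln p is concave, its tangent lines upper-bound it, so maximizing auxiliary variables t_i subject to t_i ≤ (tangent at x_k)(p_i) for all k recovers a piecewise-linear entropy objective. With only the normalization constraint, the optimal value equals the true maximum entropy ln n whenever 1/n is among the tangent points.

```python
import numpy as np
from scipy.optimize import linprog

def maxent_lp(n, n_segments=20):
    """Maximize sum_i f(p_i), f(p) = -p*ln(p), over the simplex, with f
    replaced by the lower envelope of tangent lines. Variables z = [p, t]."""
    xk = np.linspace(0.05, 1.0, n_segments)   # tangent points
    a = -np.log(xk) - 1.0                     # slopes f'(x_k)
    b = (-xk * np.log(xk)) - a * xk           # intercepts f(x_k) - a_k*x_k
    A_ub, b_ub = [], []
    for i in range(n):                        # t_i <= a_k*p_i + b_k for each k
        for k in range(n_segments):
            row = np.zeros(2 * n)
            row[n + i] = 1.0
            row[i] = -a[k]
            A_ub.append(row)
            b_ub.append(b[k])
    A_eq = np.zeros((1, 2 * n))
    A_eq[0, :n] = 1.0                         # sum p_i = 1
    c = np.concatenate([np.zeros(n), -np.ones(n)])   # maximize sum t
    bounds = [(0.0, 1.0)] * n + [(None, None)] * n
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds, method="highs")
    return res.x[:n], -res.fun

p, entropy = maxent_lp(5)   # optimal value should be ln(5)
```

Equality constraints (e.g. measured moments) can be added as extra rows of A_eq, which is exactly where this formulation is more flexible than the Lagrange multiplier approach.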
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of a ship's draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various forms of calculating squat can be found in the literature; among those most commonly used are the formulas of Barrass, Millward, Eryuzlu, and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
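The shared-f0 idea can be sketched as a grid-search estimator (an illustrative nonlinear least-squares variant, not the paper's exact estimator; signal parameters below are made up): each channel's energy in the harmonic subspace of a candidate f0 is computed independently, so amplitudes and phases may differ per channel, and the per-channel energies are summed before maximizing over f0.

```python
import numpy as np

def harmonic_basis(f0, L, N, fs):
    """Real harmonic basis (cos/sin pairs for harmonics 1..L)."""
    t = np.arange(N) / fs
    return np.column_stack([fn(2 * np.pi * f0 * l * t)
                            for l in range(1, L + 1)
                            for fn in (np.cos, np.sin)])

def estimate_pitch(channels, f0_grid, L, fs):
    """Pick the f0 maximizing the summed projection energy across channels."""
    best_f0, best_cost = None, -np.inf
    N = len(channels[0])
    for f0 in f0_grid:
        Q, _ = np.linalg.qr(harmonic_basis(f0, L, N, fs))
        cost = sum(np.sum((Q.T @ x) ** 2) for x in channels)
        if cost > best_cost:
            best_f0, best_cost = f0, cost
    return best_f0

fs, N, L = 8000, 400, 3
t = np.arange(N) / fs
rng = np.random.default_rng(4)
# Two channels, same f0 = 100 Hz, different amplitudes/phases/noise levels.
ch1 = np.sin(2*np.pi*100.0*t) + 0.5*np.sin(2*np.pi*200.0*t) + 0.1*rng.normal(size=N)
ch2 = 0.7*np.cos(2*np.pi*100.0*t + 0.3) + 0.2*rng.normal(size=N)
f0_hat = estimate_pitch([ch1, ch2], np.arange(60.0, 200.0, 1.0), L, fs)
```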
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
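The Poisson ML principle described can be sketched on simulated counts (a generic numerical fit in the spirit of CORA, not its fixed-point scheme; the line profile, amplitude, and background below are hypothetical): maximize the Poisson log-likelihood sum_i [n_i ln(lambda_i) - lambda_i] for a model lambda_i = A*g_i + b with a fixed profile g and known background b.

```python
import numpy as np
from scipy.optimize import minimize

def fit_amplitude(counts, g, b):
    """Poisson ML estimate of the line amplitude A for counts ~ Poisson(A*g + b)."""
    def nll(theta):
        lam = theta[0] * g + b
        return -(counts * np.log(lam) - lam).sum()
    return minimize(nll, x0=[1.0], bounds=[(1e-9, None)]).x[0]

rng = np.random.default_rng(5)
x = np.arange(100)
g = np.exp(-0.5 * ((x - 50) / 3.0) ** 2)   # Gaussian line profile (hypothetical)
true_A, background = 40.0, 2.0
counts = rng.poisson(true_A * g + background)
A_hat = fit_amplitude(counts, g, background)
```

Using the Poisson likelihood directly, rather than least squares, is what keeps the fit valid at the low count numbers the abstract emphasizes.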
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path, and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
Lyamzin, Dmitry R; Macke, Jakob H; Lesica, Nicholas A
2010-01-01
As multi-electrode and imaging technology begin to provide us with simultaneous recordings of large neuronal populations, new methods for modeling such data must also be developed. Here, we present a model for the type of data commonly recorded in early sensory pathways: responses to repeated trials of a sensory stimulus in which each neuron has its own time-varying spike rate (as described by its PSTH) and the dependencies between cells are characterized by both signal and noise correlations. This model is an extension of previous attempts to model population spike trains designed to control only the total correlation between cells. In our model, the response of each cell is represented as a binary vector given by the dichotomized sum of a deterministic "signal" that is repeated on each trial and a Gaussian random "noise" that is different on each trial. This model allows the simulation of population spike trains with PSTHs, trial-to-trial variability, and pairwise correlations that match those measured experimentally. Furthermore, the model also allows the noise correlations in the spike trains to be manipulated independently of the signal correlations and single-cell properties. To demonstrate the utility of the model, we use it to simulate and manipulate experimental responses from the mammalian auditory and visual systems. We also present a general form of the model in which both the signal and noise are Gaussian random processes, allowing the mean spike rate, trial-to-trial variability, and pairwise signal and noise correlations to be specified independently. Together, these methods for modeling spike trains comprise a potentially powerful set of tools for both theorists and experimentalists studying population responses in sensory systems.
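The dichotomized-Gaussian construction the abstract describes can be sketched for a pair of cells; the signal values, the noise correlation, and the function name below are illustrative assumptions, not the authors' code:

```python
import random

def dichotomized_gaussian(signal, noise_corr, n_trials, seed=0):
    """Simulate binary spike trains for two cells: each response is the
    dichotomized (thresholded at 0) sum of a deterministic per-bin signal,
    repeated on every trial, and correlated Gaussian noise drawn fresh on
    every trial."""
    rng = random.Random(seed)
    c = noise_corr
    trials = []
    for _ in range(n_trials):
        bins = []
        for s1, s2 in signal:  # one (cell 1, cell 2) signal value per time bin
            z1 = rng.gauss(0.0, 1.0)
            z2 = rng.gauss(0.0, 1.0)
            n1 = z1
            n2 = c * z1 + (1.0 - c * c) ** 0.5 * z2  # corr(n1, n2) = c
            bins.append((int(s1 + n1 > 0.0), int(s2 + n2 > 0.0)))
        trials.append(bins)
    return trials

# Example: 5 time bins, repeated signal, moderate noise correlation.
signal = [(0.5, -0.5), (1.0, 1.0), (-1.0, 0.0), (0.0, 0.5), (2.0, -2.0)]
trials = dichotomized_gaussian(signal, noise_corr=0.4, n_trials=100)
# Empirical PSTH of cell 1: trial-averaged spike probability per time bin.
psth1 = [sum(t[b][0] for t in trials) / len(trials) for b in range(len(signal))]
```

Matching measured signal correlations then amounts to choosing the per-bin signal values and `noise_corr` so that the simulated PSTHs and pairwise correlations reproduce the data.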
Aouinti, Safa; Giudicelli, Véronique; Duroux, Patrice; Malouche, Dhafer; Kossida, Sofia; Lefranc, Marie-Paule
2016-01-01
There is a huge need for standardized analysis and statistical procedures in order to compare the complex immune repertoires of antigen receptors immunoglobulins (IG) and T cell receptors (TR) obtained by next generation sequencing (NGS). NGS technologies generate millions of nucleotide sequences and have led to the development of new tools. The IMGT/HighV-QUEST, available since 2010, is the first global web portal for the analysis of IG and TR high throughput sequences. IMGT/HighV-QUEST provides standardized outputs for the characterization of the "IMGT clonotype (AA)" (AA for amino acids) and their comparison in up to one million sequences. Standardized statistical procedures for "IMGT clonotype (AA)" diversity or expression comparisons have recently been described, however, no tool was yet available. IMGT/StatClonotype, a new IMGT(®) tool, evaluates and visualizes statistical significance of pairwise comparisons of IMGT clonotype (AA) diversity or expression, per V (variable), D (diversity), and J (joining) gene of a given IG or TR group, from NGS IMGT/HighV-QUEST statistical output. IMGT/StatClonotype tool is incorporated in the R package "IMGTStatClonotype," with a user-friendly interface. IMGT/StatClonotype is downloadable at IMGT(®) for users to evaluate pairwise comparison of IG and TR NGS statistical output from IMGT/HighV-QUEST and to visualize, on their web browser, the statistical significance of IMGT clonotype (AA) diversity or expression, per gene, the comparative analysis of CDR-IMGT and the V-D-J associations, in immunoprofiles from normal or pathological immune responses.
Li, Meng-Hua; Strandén, Ismo; Tiirikka, Timo; Sevón-Aimonen, Marja-Liisa; Kantanen, Juha
2011-01-01
Genome-wide SNP data provide a powerful tool to estimate pairwise relatedness among individuals and individual inbreeding coefficients. The aim of this study was to compare methods for estimating the two parameters in a Finnsheep population based on genome-wide SNPs and genealogies, separately. This study included ninety-nine Finnsheep in Finland that differed in coat colours (white, black, brown, grey, and black/white spotted) and were from a large pedigree comprising 319 119 animals. All the individuals were genotyped with the Illumina Ovine SNP50K BeadChip by the International Sheep Genomics Consortium. We identified three genetic subpopulations that corresponded approximately with the coat colours (grey, white, and black and brown) of the sheep. We detected a significant subdivision among the colour types (FST = 5.4%). Pedigree-based estimates of the two parameters (FPED and ΦPED) were computed using the RelaX2 program. Values of the two parameters estimated from genomic and genealogical data were mostly consistent, in particular for the highly inbred animals (e.g. inbreeding coefficient F > 0.0625) and pairs of closely related animals (e.g. full- or half-sibs). Nevertheless, we also detected differences in the two parameters between the approaches, particularly with respect to the grey Finnsheep. This could be due to the smaller sample size and relative incompleteness of the pedigree for them. We conclude that genome-wide genomic data will provide useful information on a per-sample or pairwise-samples basis in cases of complex genealogies or in the absence of genealogical data. PMID:22114661
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T-1(I -A ) , where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a, b) and (c, d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a, b) or (c, d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state - market equilibrium - is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model with respect to the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
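The linear-time solution alluded to here is commonly credited to Kadane; a minimal imperative sketch (the paper itself develops a monadic, datatype-generic derivation, which this does not attempt):

```python
def max_segment_sum(xs):
    """Kadane's linear-time algorithm: track the largest sum of a segment
    ending at the current position; the answer is the maximum over all
    positions. The empty segment (sum 0) is allowed, matching the usual
    specification of the problem."""
    best_ending_here = 0
    best = 0
    for x in xs:
        best_ending_here = max(0, best_ending_here + x)
        best = max(best, best_ending_here)
    return best

print(max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # → 187
```

The printed example is the classic instance from the program-construction literature: the best segment is 59 + 26 − 53 + 58 + 97 = 187.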
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + 2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ_0, μ_1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ_0, ℓ_1)}. The optimal control switches between μ_0 and μ_1 at the boundary g∗(S_t), where s ↦ g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations) in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250-370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum space quantum gravity geometry with a metric s_μν and a curvature tensor P^λ_μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur at high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches only focus on discriminating moving objects by background subtraction, whether the objects of interest are moving or stationary. In this paper, we propose layers segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the training models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmenting precision.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated to investigate the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in the maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained by grip spans of 50-mm, 65-mm, and 80-mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in the cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
20 CFR, Employees' Benefits, Creditable Railroad Compensation, § 211.14 Maximum creditable compensation: ... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
49 CFR, Transportation, Allowable Stress, § 230.24 Maximum allowable stress: (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu and power producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth, or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as: (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic; there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed-point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
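The fixed-point idea for Poisson-ML line fluxes can be illustrated under simple assumptions (known flat background, normalized line profile, noise-free toy counts). This is a generic sketch of that class of iteration, not CORA's actual implementation.

```python
def fit_line_flux(counts, profile, background, iters=300):
    """Fixed-point iteration for the ML line amplitude A under Poisson noise.
    Model: expected counts m_i = background + A * profile_i, sum(profile_i) = 1.
    Setting d(logL)/dA = 0 yields the fixed point A = sum_i n_i*A*g_i/(b + A*g_i)."""
    A = 1.0
    for _ in range(iters):
        A = sum(n * A * g / (background + A * g)
                for n, g in zip(counts, profile))
    return A

# toy emission-line spectrum: flat background of 2 counts/bin plus a line
profile = [0.0, 0.25, 0.5, 0.25, 0.0]   # normalized line profile
counts = [2, 12, 22, 12, 2]             # noise-free counts for amplitude A = 40
A_hat = fit_line_flux(counts, profile, background=2.0)
```

With noise-free counts the iteration converges to the exact amplitude; with real Poisson data it converges to the ML estimate.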
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/AC coefficient level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first one is given as an upper bound for the sum of squares of AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits of space is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two-state characters, under a molecular clock. Four-taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed-form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed-form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
Describing Adequacy of cure with maximum hardness ratios and non-linear regression.
Bouschlicher, Murray; Berning, Kristen; Qian, Fang
2008-01-01
Knoop Hardness (KH) ratios (HR) ≥ 80% are commonly used as criteria for the adequate cure of a composite. These per-specimen HRs can be misleading, as both numerator and denominator may increase concurrently, prior to reaching an asymptotic, top-surface maximum hardness value (H(MAX)). Extended cure times were used to establish H(MAX), and descriptive statistics and non-linear regression analysis were used to describe the relationship between exposure duration and HR and to predict the time required for HR-H(MAX) = 80%. Composite samples 2.00 x 5.00 mm diameter (n = 5/grp) were cured for 10, 20, 40, 60, 90, 120, 180 and 240 seconds in a 2-composite x 2-light curing unit design. A microhybrid (Point 4, P4) or microfill (Heliomolar, HM) resin composite was cured with a QTH or LED light curing unit and then stored in the dark for 24 hours prior to KH testing. Non-linear regression was calculated with H = (H(MAX) - c)(1 - e^(-kt)) + c, where H(MAX) = maximum hardness (a theoretical asymptotic value), c = constant (t = 0), k = rate constant and t = exposure duration; this describes the relationship between radiant exposure (irradiance x time) and HRs. Exposure durations for HR-H(MAX) = 80% were calculated. Two-sample t-tests for pairwise comparisons evaluated the relative performance of the light curing units for similar surface x composite x exposure (10-90 s). The goodness-of-fit of the non-linear regression, r2, ranged from 0.68-0.95 (mean = 0.82). Microhybrid (P4) exposure to achieve HR-H(MAX) = 80% was 21 seconds for QTH and 34 seconds for the LED light curing unit. Corresponding values for microfill (HM) were 71 and 74 seconds, respectively. P4 HR-H(MAX) of LED vs QTH was statistically similar for 10 to 40 seconds, while HM HR-H(MAX) of LED was significantly lower than QTH for 10 to 40 seconds. It was concluded that redefined hardness ratios based on maximum hardness, used in conjunction with non-linear regression
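Given fitted parameters, the exposure duration at which HR-H(MAX) reaches 80% follows by inverting the regression model H(t) = (H(MAX) - c)(1 - e^(-kt)) + c. The parameter values below are hypothetical, chosen only to show the algebra, not taken from the study.

```python
import math

def time_to_ratio(h_max, c, k, ratio=0.80):
    """Invert H(t) = (h_max - c)*(1 - exp(-k*t)) + c for the exposure time t
    at which hardness reaches ratio*h_max (requires ratio*h_max > c):
    t = -ln( h_max*(1 - ratio) / (h_max - c) ) / k."""
    return -math.log(h_max * (1.0 - ratio) / (h_max - c)) / k

# hypothetical fitted parameters, for illustration only
t80 = time_to_ratio(h_max=60.0, c=10.0, k=0.05)
h_at_t80 = (60.0 - 10.0) * (1.0 - math.exp(-0.05 * t80)) + 10.0  # should equal 0.8*60
```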
Teixeira, Vitor H; Cunha, Carlos A; Machuqueiro, Miguel; Oliveira, A Sofia F; Victor, Bruno L; Soares, Cláudio M; Baptista, António M
2005-08-04
Poisson-Boltzmann (PB) models are a fast and common tool for studying electrostatic processes in proteins, particularly their ionization equilibrium (protonation and/or reduction), often yielding quite good results when compared with more detailed models. Yet, they are conceptually very simple and necessarily approximate, their empirical character being most evident when it comes to the choice of the dielectric constant assigned to the protein region. The present study analyzes several factors affecting the ability of PB-based methods to model protein ionization equilibrium. We give particular attention to a suggestion made by Warshel and co-workers (e.g., Sham et al. J. Phys. Chem. B 1997, 101, 4458) of using different protein dielectric constants for computing the individual (site) and the pairwise (site-site) terms of the ionization free energies. Our prediction of pK(a) values for several proteins indicates that no advantage is obtained by such a procedure, even for sites that are buried and/or display large pK(a) shifts relative to the solution values. In particular, the present methodology gives the best predictions using a dielectric constant around 20, for shifted/buried and nonshifted/exposed sites alike. The similarities and differences between the PB model and Warshel's PDLD/S model are discussed, as well as the reasons behind their apparently discrepant results. The present PB model is shown to predict also good reduction potentials in redox proteins.
Hu, Zhan-Hong; Shi, Ai-Ming; Hu, Duan-Min; Bao, Jun-Jie
2017-01-01
Background/Aim: To compare the efficacy and tolerance of different proton pump inhibitors (PPIs) in different doses for patients with duodenal ulcers. Materials and Methods: An electronic database was searched to collect all randomized clinical trials (RCTs), and a pairwise and network meta-analysis were performed. Results: A total of 24 RCTs involving 6188 patients were included. The network meta-analysis showed that there were no significant differences for the 4-week healing rate of duodenal ulcer treated with different PPI regimens except pantoprazole 40 mg/d versus lansoprazole 15 mg/d [Relative risk (RR) = 3.57; 95% confidence interval (CI) = 1.36–10.31] and lansoprazole 30 mg/d versus lansoprazole 15 mg/d (RR = 2.45; 95% CI = 1.01–6.14). In comparison with H2 receptor antagonists (H2RA), pantoprazole 40 mg/d and lansoprazole 30 mg/d significantly increase the healing rate (RR = 2.96; 95% CI = 1.78–5.14 and RR = 2.04; 95% CI = 1.13–3.53, respectively). There was no significant difference for the rate of adverse events between different regimens, including H2RA, over 4 weeks of follow-up. Conclusion: There was no significant difference for the efficacy and tolerance between the ordinary doses of different PPIs, with the exception of lansoprazole 15 mg/d. PMID:28139495
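The pairwise comparisons above rest on relative risks with normal-approximation confidence intervals on the log scale; a minimal sketch with made-up counts (not the trial data):

```python
import math

def relative_risk(events_a, n_a, events_b, n_b):
    """Relative risk and 95% CI via the normal approximation on log(RR).
    SE(log RR) = sqrt(1/a - 1/n_a + 1/b - 1/n_b)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# hypothetical 4-week healing counts for two regimens (illustrative only)
rr, lo, hi = relative_risk(90, 100, 60, 100)
significant = lo > 1.0 or hi < 1.0   # a CI excluding 1 indicates a significant difference
```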
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
Anonymous
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial…
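Under Gaussian noise, a maximum likelihood polynomial fit reduces to ordinary least squares; a self-contained normal-equations sketch of that generic regression step (not the MLPR adaptation algorithm itself):

```python
def polyfit_ml(xs, ys, degree):
    """ML (= least-squares, under Gaussian noise) polynomial coefficients
    via the normal equations, solved by Gaussian elimination."""
    n = degree + 1
    # A[i][j] = sum x^(i+j), b[i] = sum y*x^i
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, n))) / A[i][i]
    return coeffs  # [c0, c1, c2, ...] for c0 + c1*x + c2*x^2 + ...

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 10.0]      # lies exactly on 1 + x^2
coeffs = polyfit_ml(xs, ys, degree=2)
```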
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10-100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux, and of the optimal number of degrees of freedom (resolution) to describe the system.
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
§ 94.107 Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test... speed. Data points are specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (VMO/MMO, airspeed or Mach number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine...
Scott Barlowe
2017-06-01
Understanding how proteins mutate is critical to solving a host of biological problems. Mutations occur when an amino acid is substituted for another in a protein sequence. The set of likelihoods for amino acid substitutions is stored in a matrix and input to alignment algorithms. The quality of the resulting alignment is used to assess the similarity of two or more sequences and can vary according to assumptions modeled by the substitution matrix. Substitution strategies with minor parameter variations are often grouped together in families. For example, the BLOSUM and PAM matrix families are commonly used because they provide a standard, predefined way of modeling substitutions. However, researchers often do not know if a given matrix family or any individual matrix within a family is the most suitable. Furthermore, predefined matrix families may inaccurately reflect a particular hypothesis that a researcher wishes to model or otherwise result in unsatisfactory alignments. In these cases, the ability to compare the effects of one or more custom matrices may be needed. This laborious process is often performed manually because the ability to simultaneously load multiple matrices and then compare their effects on alignments is not readily available in current software tools. This paper presents SubVis, an interactive R package for loading and applying multiple substitution matrices to pairwise alignments. Users can simultaneously explore alignments resulting from multiple predefined and custom substitution matrices. SubVis utilizes several of the alignment functions found in R, a common language among protein scientists. Functions are tied together with the Shiny platform which allows the modification of input parameters. Information regarding alignment quality and individual amino acid substitutions is displayed with the JavaScript language which provides interactive visualizations for revealing both high-level and low-level alignment
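The core comparison SubVis automates (one sequence pair scored under several substitution matrices) can be sketched with a pure-Python global aligner and toy matrices; the matrices and sequences below are illustrative, not BLOSUM or PAM, and this is not SubVis's R implementation.

```python
def nw_score(a, b, sub, gap=-4):
    """Needleman-Wunsch global alignment score with a linear gap penalty."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [i * gap]
        for j, cb in enumerate(b, 1):
            cur.append(max(prev[j - 1] + sub[(ca, cb)],  # substitution
                           prev[j] + gap,                # gap in b
                           cur[j - 1] + gap))            # gap in a
        prev = cur
    return prev[-1]

def toy_matrix(match, mismatch, alphabet="ACDE"):
    """Hypothetical substitution matrix: one score for matches, one for mismatches."""
    return {(x, y): (match if x == y else mismatch)
            for x in alphabet for y in alphabet}

strict = toy_matrix(5, -4)    # penalizes substitutions heavily
lenient = toy_matrix(5, -1)   # tolerates substitutions
s_strict = nw_score("ACDE", "ACEE", strict)
s_lenient = nw_score("ACDE", "ACEE", lenient)
```

Comparing `s_strict` and `s_lenient` shows how the same sequence pair scores differently depending on the substitution model, which is the effect SubVis visualizes at scale.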
Lefrançois, Philippe; Rockmill, Beth; Xie, Pingxing; Roeder, G Shirleen; Snyder, Michael
2016-10-01
During meiosis, chromosomes undergo a homology search in order to locate their homolog to form stable pairs and exchange genetic material. Early in prophase, chromosomes associate in mostly non-homologous pairs, tethered only at their centromeres. This phenomenon, conserved through higher eukaryotes, is termed centromere coupling in budding yeast. Both initiation of recombination and the presence of homologs are dispensable for centromere coupling (occurring in spo11 mutants and haploids induced to undergo meiosis) but the presence of the synaptonemal complex (SC) protein Zip1 is required. The nature and mechanism of coupling have yet to be elucidated. Here we present the first pairwise analysis of centromere coupling in an effort to uncover underlying rules that may exist within these non-homologous interactions. We designed a novel chromosome conformation capture (3C)-based assay to detect all possible interactions between non-homologous yeast centromeres during early meiosis. Using this variant of 3C-qPCR, we found a size-dependent interaction pattern, in which chromosomes assort preferentially with chromosomes of similar sizes, in haploid and diploid spo11 cells, but not in a coupling-defective mutant (spo11 zip1 haploid and diploid yeast). This pattern is also observed in wild-type diploids early in meiosis but disappears as meiosis progresses and homologous chromosomes pair. We found no evidence to support the notion that ancestral centromere homology plays a role in pattern establishment in S. cerevisiae post-genome duplication. Moreover, we found a role for the meiotic bouquet in establishing the size dependence of centromere coupling, as abolishing bouquet (using the bouquet-defective spo11 ndj1 mutant) reduces it. Coupling in spo11 ndj1 rather follows telomere clustering preferences. We propose that a chromosome size preference for centromere coupling helps establish efficient homolog recognition.
Zhu, Lin; Guo, Wei-Li; Deng, Su-Ping; Huang, De-Shuang
2016-01-01
In recent years, thanks to the efforts of individual scientists and research consortiums, a huge amount of chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) experimental data have been accumulated. Instead of investigating them independently, several recent studies have convincingly demonstrated that a wealth of scientific insights can be gained by integrative analysis of these ChIP-seq data. However, when used for the purpose of integrative analysis, a serious drawback of current ChIP-seq technique is that it is still expensive and time-consuming to generate ChIP-seq datasets of high standard. Most researchers are therefore unable to obtain complete ChIP-seq data for several TFs in a wide variety of cell lines, which considerably limits the understanding of transcriptional regulation pattern. In this paper, we propose a novel method called ChIP-PIT to overcome the aforementioned limitation. In ChIP-PIT, ChIP-seq data corresponding to a diverse collection of cell types, TFs and genes are fused together using the three-mode pair-wise interaction tensor (PIT) model, and the prediction of unperformed ChIP-seq experimental results is formulated as a tensor completion problem. Computationally, we propose efficient first-order method based on extensions of coordinate descent method to learn the optimal solution of ChIP-PIT, which makes it particularly suitable for the analysis of massive scale ChIP-seq data. Experimental evaluation the ENCODE data illustrate the usefulness of the proposed model.
Coan, Heather B.; Youker, Robert T.
2017-01-01
Understanding how proteins mutate is critical to solving a host of biological problems. Mutations occur when an amino acid is substituted for another in a protein sequence. The set of likelihoods for amino acid substitutions is stored in a matrix and input to alignment algorithms. The quality of the resulting alignment is used to assess the similarity of two or more sequences and can vary according to assumptions modeled by the substitution matrix. Substitution strategies with minor parameter variations are often grouped together in families. For example, the BLOSUM and PAM matrix families are commonly used because they provide a standard, predefined way of modeling substitutions. However, researchers often do not know if a given matrix family or any individual matrix within a family is the most suitable. Furthermore, predefined matrix families may inaccurately reflect a particular hypothesis that a researcher wishes to model or otherwise result in unsatisfactory alignments. In these cases, the ability to compare the effects of one or more custom matrices may be needed. This laborious process is often performed manually because the ability to simultaneously load multiple matrices and then compare their effects on alignments is not readily available in current software tools. This paper presents SubVis, an interactive R package for loading and applying multiple substitution matrices to pairwise alignments. Users can simultaneously explore alignments resulting from multiple predefined and custom substitution matrices. SubVis utilizes several of the alignment functions found in R, a common language among protein scientists. Functions are tied together with the Shiny platform which allows the modification of input parameters. Information regarding alignment quality and individual amino acid substitutions is displayed with the JavaScript language which provides interactive visualizations for revealing both high-level and low-level alignment information. PMID:28674656
Gusso, André; Burnham, Nancy A.
2016-09-01
It has long been recognized that stochastic surface roughness can considerably change the van der Waals (vdW) force between interacting surfaces and particles. However, few analytical expressions for the vdW force between rough surfaces have been presented in the literature. Because they have been derived using perturbative methods or the proximity force approximation the expressions are valid when the roughness correction is small and for a limited range of roughness parameters and surface separation. In this work, a nonperturbative approach, the effective density method (EDM) is proposed to circumvent some of these limitations. The method simplifies the calculations of the roughness correction based on pairwise summation (PWS), and allows us to derive simple expressions for the vdW force and energy between two semispaces covered with stochastic rough surfaces. Because the range of applicability of PWS and, therefore, of our results, are not known a priori, we compare the predictions based on the EDM with those based on the multilayer effective medium model, whose range of validity can be defined more properly and which is valid when the roughness correction is comparatively large. We conclude that the PWS can be used for roughness characterized by a correlation length of the order of its rms amplitude, when this amplitude is of the order of or smaller than a few nanometers, and only for typically insulating materials such as silicon dioxide, silicon nitride, diamond, and certain glasses, polymers and ceramics. The results are relevant for the correct modeling of systems where the vdW force can play a significant role such as micro and nanodevices, for the calculation of the tip-sample force in atomic force microscopy, and in problems involving adhesion.
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on a same set of taxa, the maximum agreement subtree problem (MAST), respectively, maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees Of Life. We provide two linear time algorithms to check the isomorphism, respectively, compatibility, of a set of trees or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms, whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
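The rooted-tree isomorphism test that such algorithms build on can be sketched with the classic canonical-encoding trick: two rooted trees are isomorphic iff their sorted recursive encodings are equal. This is a generic textbook method, not the authors' linear-time subroutine.

```python
def canon(tree):
    """Canonical string of a rooted tree given as nested lists of children.
    Sorting the children's encodings makes the string order-independent."""
    return "(" + "".join(sorted(canon(child) for child in tree)) + ")"

t1 = [[], [[], []]]        # root with a leaf child and a two-leaf child
t2 = [[[], []], []]        # same tree with children listed in another order
t3 = [[], [], [[]]]        # a different shape: three children at the root
iso_12 = canon(t1) == canon(t2)
iso_13 = canon(t1) == canon(t3)
```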
李仁杰; 路紫
2011-01-01
Using virtual reality (VR) technology, the virtual expression of a theme park achieves not only a high degree of realism for every element of the landscape in appearance and texture, but also shows landscape construction, landscape evolution and the man-land relationship, and describes the geo-spatial pattern. The latter has received less attention from researchers. Designing the landscape model based on semantic features in Virtual Geographic Environments (VGE) is a good way to achieve this two-level modeling, and a case of a water and soil conservation technology park, located in Yanqing County, Beijing, China, is selected to demonstrate the idea. The water and soil conservation technology park is a special form of theme park, designed for experiments in water and soil conservation technology, popular science education on the protection of the ecological environment, and leisure and recreation activities. But the park's functions are greatly restricted by its area, location and ecological capacity, so it cannot satisfy the multi-functional needs of education on ecological environment protection, technology demonstration and ecotourism development. The authors design a classification system of themes and virtual objects in the water and soil conservation technology park, and build a level of detail (LOD) model for describing the theme park in the computer virtual environment based on the semantic context of the ecological landscape. The LOD model can show the features and landscapes of the theme park at different view scales, such as the whole view, a middle-scale view, and some special partial views, even a special feature view, in the virtual environment. The LOD model can also construct the virtual environment based on different themes and functions, or design a special sight-seeing route by describing different-scale LOD models together with other landscape features. This case study is done in ArcGIS 9.2 and the Skyline
Present and Last Glacial Maximum climates as states of maximum entropy production
Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere
2011-01-01
The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...
Loyka, Sergey; Gagnon, Francois
2009-01-01
Motivated by a recent surge of interest in convex optimization techniques, convexity/concavity properties of error rates of the maximum likelihood detector operating in the AWGN channel are studied and extended to frequency-flat slow-fading channels. Generic conditions are identified under which the symbol error rate (SER) is convex/concave for arbitrary multi-dimensional constellations. In particular, the SER is convex in SNR for any one- and two-dimensional constellation, and also in higher dimensions at high SNR. Pairwise error probability and bit error rate are shown to be convex at high SNR, for arbitrary constellations and bit mapping. Universal bounds for the SER 1st and 2nd derivatives are obtained, which hold for arbitrary constellations and are tight for some of them. Applications of the results are discussed, which include optimum power allocation in spatial multiplexing systems, optimum power/time sharing to decrease or increase (jamming problem) error rate, an implication for fading channels ("fa...
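The claimed convexity of the SER in SNR can be spot-checked numerically for the simplest one-dimensional case, BPSK in AWGN, where SER(snr) = ½ erfc(√snr); a small finite-difference sketch (an illustration of the claim, not the paper's proof):

```python
import math

def ser_bpsk(snr):
    """BPSK symbol error rate in AWGN as a function of linear SNR."""
    return 0.5 * math.erfc(math.sqrt(snr))

def second_diff(f, x, h=1e-3):
    """Central finite-difference estimate of the second derivative."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# convexity in SNR: the second derivative should be positive at every test point
convex = all(second_diff(ser_bpsk, s) > 0.0
             for s in (0.1, 0.5, 1.0, 2.0, 5.0, 10.0))
```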
Angie Hennessy
2009-04-01
A methodology is proposed to develop a measuring instrument (metric) for evaluating subjects from a population that cannot provide data to facilitate the development of such a metric (e.g. pre-term infants in the neonatal intensive care unit). Central to this methodology is the employment of an expert group that decides on the items to be included in the metric, the weights assigned to these items, and an index associated with the Likert scale points for each item. The experts supply pairwise ratios of importance between items, and the geometric mean method is applied to these to establish the item weights – a well-established procedure in multi-criteria decision analysis. The ratios are found by having a managed discussion before asking the members of the expert panel to mark a visual analogue scale for each item.
Summary
A method is presented with which a measuring instrument (metric) can be developed for evaluating persons from a population that cannot itself provide the data for the development of the metric (e.g. pre-term infants in the neonatal intensive care unit). The core of this approach is the use of an expert group that chooses the items for the measuring instrument, assigns weights to the items, and compiles for each item an index associated with the Likert scale points. The experts supplied pairwise ratios between items, and the geometric mean method was applied to these to obtain the item weights – a well-established practice in multi-criteria decision analysis. The pairwise ratios were elicited by having the experts, after a managed discussion, complete a visual analogue scale for each item.
How to cite this article:
Becker, P.J., Wolvaardt, J.S., Hennessy, A. & Maree, C., 2009, 'A composite score for a measuring instrument utilising re-scaled Likert values and item weights from matrices of pair wise ratios
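The geometric mean weighting described in the abstract above can be sketched as follows; the 3×3 pairwise ratio matrix is hypothetical, standing in for the values an expert panel would supply.

```python
import math

# Hypothetical pairwise ratio matrix from an expert panel:
# a[i][j] says how many times more important item i is than item j.
a = [
    [1.0, 3.0, 5.0],
    [1 / 3.0, 1.0, 2.0],
    [1 / 5.0, 1 / 2.0, 1.0],
]

# Geometric mean of each row, then normalise so the weights sum to 1.
gm = [math.prod(row) ** (1.0 / len(row)) for row in a]
total = sum(gm)
weights = [g / total for g in gm]

print([round(w, 3) for w in weights])
```

With a reciprocal matrix like this one, the row geometric means preserve the panel's importance ordering, so the first item receives the largest weight.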
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of the k-limited maximum base was specialized into two special problems, in which the subset D is taken to be an independent set and a circuit of the matroid, respectively. It was proved that in each case the collection of k-limited bases satisfies the base axioms. A new matroid is thereby determined, and the problem of the k-limited maximum base is transformed into the problem of finding a maximum base of this new matroid. For the two special problems, two algorithms, in essence greedy algorithms on the original matroid, were presented. They were proved to be correct and, in terms of algorithmic complexity, more efficient than the algorithm presented by Ma Zhongfan.
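The greedy algorithm for a maximum-weight base that underlies such results can be sketched on a concrete matroid. The instantiation below is an assumption for illustration: the graphic matroid of a small made-up graph, where independence means acyclicity and a union-find structure serves as the independence oracle.

```python
# Greedy maximum-weight base of a matroid, instantiated for the graphic
# matroid: the maximum base is a maximum-weight spanning forest.

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def max_weight_base(n, weighted_edges):
    """weighted_edges: list of (weight, u, v); returns edges of a maximum base."""
    parent = list(range(n))
    base = []
    # Greedy: scan elements in decreasing weight, keep those that stay independent.
    for w, u, v in sorted(weighted_edges, reverse=True):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:  # adding the edge keeps the set acyclic (independent)
            parent[ru] = rv
            base.append((w, u, v))
    return base

edges = [(4, 0, 1), (3, 1, 2), (2, 0, 2), (5, 2, 3)]
print(max_weight_base(4, edges))  # prints [(5, 2, 3), (4, 0, 1), (3, 1, 2)]
```

The correctness of this scan rests on the matroid exchange property, which is exactly what the base axioms mentioned in the abstract guarantee.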
An Interval Maximum Entropy Method for Quadratic Programming Problem
RUI Wen-juan; CAO De-xin; SONG Xie-wu
2005-01-01
With the idea of maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide the region deletion test rules and design an interval maximum entropy algorithm for quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
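A minimal sketch of the maximum entropy (log-sum-exp) smoothing that such methods build on, under the assumption that the aggregate function F_p(x) = (1/p)·ln Σ_i exp(p·g_i(x)) is used to approximate max_i g_i(x); its error is bounded by ln(m)/p for m constraint functions.

```python
import math

def max_entropy_fn(values, p):
    """Smooth (differentiable) approximation of max(values).

    Satisfies max(values) <= F_p <= max(values) + ln(m)/p, so increasing
    the parameter p tightens the approximation.
    """
    m = max(values)  # shift by the maximum for numerical stability
    return m + math.log(sum(math.exp(p * (v - m)) for v in values)) / p

g = [1.0, 2.5, 2.4]
for p in (1, 10, 100):
    print(p, max_entropy_fn(g, p))
```

In a penalty-function setting, the same construction turns a nonsmooth max of constraint violations into a single differentiable function that standard unconstrained solvers (or interval methods, as in the abstract) can handle.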
Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)
2009-02-19
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of action of the chemicals, such as interference with the endocrine system. Achieving these aims requires criteria that provide a basis to interpret study findings so as to separate these specific toxicities and modes of action not only from acute lethality per se but also from the severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that the very high dose levels sometimes required to elicit specific adverse effects also present the potential for non-specific 'systemic toxicity', and have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of an MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic
Integer Programming Model for Maximum Clique in Graph
YUAN Xi-bo; YANG You; ZENG Xin-hai
2005-01-01
The maximum clique or maximum independent set of a graph is a classical problem in graph theory. Combining Boolean algebra and integer programming, two integer programming models for the maximum clique problem, which improve on previous results, were designed in this paper. The programming model for the maximum independent set then follows as a corollary of the main results. These two models can easily be implemented in computer algorithms and software, and are suitable for graphs of any scale. Finally, the models are presented as Lingo algorithms, verified and compared on several examples.
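One standard 0/1-programming formulation of maximum clique (not necessarily the paper's improved model) maximises Σ x_i subject to x_i + x_j ≤ 1 for every non-edge {i, j}. A minimal sketch, solved by brute force on a tiny hypothetical graph:

```python
from itertools import combinations, product

def max_clique_ilp_bruteforce(n, edges):
    """0/1 program for maximum clique: maximise sum(x) subject to
    x_i + x_j <= 1 for every non-edge {i, j}. Solved here by exhaustive
    enumeration, which is fine for tiny graphs."""
    present = {frozenset(e) for e in edges}
    non_edges = [(i, j) for i, j in combinations(range(n), 2)
                 if frozenset((i, j)) not in present]
    best = (0,) * n
    for x in product((0, 1), repeat=n):
        if all(x[i] + x[j] <= 1 for i, j in non_edges) and sum(x) > sum(best):
            best = x
    return [i for i, v in enumerate(best) if v]

# Hypothetical 5-vertex graph whose largest clique is {0, 1, 2}.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
print(max_clique_ilp_bruteforce(5, edges))  # prints [0, 1, 2]
```

Replacing the graph by its complement in the same constraints yields the maximum independent set, mirroring the corollary mentioned in the abstract.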
Counterexamples to convergence theorem of maximum-entropy clustering algorithm
于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生
2003-01-01
In this paper, we survey the development of the maximum-entropy clustering algorithm, point out that the algorithm is not new in essence, and construct two examples showing that the iterative sequence it generates may converge not to a local minimum of its objective function but to a saddle point. Based on these results, we show that the convergence theorem for the maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general.
49 CFR 174.86 - Maximum allowable operating speed.
2010-10-01
For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15...
Parametric optimization of thermoelectric elements footprint for maximum power generation
Rezania, A.; Rosendahl, Lasse; Yin, Hao
2014-01-01
Development studies of thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-perform...
30 CFR 56.19066 - Maximum riders in a conveyance.
2010-07-01
In shafts inclined over 45...
30 CFR 57.19066 - Maximum riders in a conveyance.
2010-07-01
In shafts inclined over 45...
Maximum Atmospheric Entry Angle for Specified Retrofire Impulse
T. N. Srivastava
1969-07-01
Maximum atmospheric entry angles for vehicles initially moving in elliptic orbits are investigated and it is shown that tangential retrofire impulse at the apogee results in the maximum entry angle. Equivalence of maximizing the entry angle and minimizing the retrofire impulse is also established.
5 CFR 838.711 - Maximum former spouse survivor annuity.
2010-01-01
Under CSRS, payments under a court order may not exceed the...
46 CFR 151.45-6 - Maximum amount of cargo.
2010-10-01
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
... rate effective on the date the supplemental annuity begins, before any reduction for a private pension...
49 CFR 195.406 - Maximum operating pressure.
2010-10-01
Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a...
Maximum-entropy clustering algorithm and its global convergence analysis
Anonymous
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
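The fixed-point iteration at the heart of such maximum-entropy (deterministic annealing) clustering can be sketched as follows; the 1-D data, initial centers, and the inverse-temperature value are all made up for illustration. Memberships are Boltzmann weights exp(-β·d²), followed by weighted centroid updates.

```python
import math

def me_cluster_step(points, centers, beta):
    """One fixed-point iteration of maximum-entropy (soft) clustering in 1-D:
    Boltzmann memberships, then weighted centroid updates."""
    memberships = []
    for x in points:
        w = [math.exp(-beta * (x - c) ** 2) for c in centers]
        s = sum(w)
        memberships.append([wi / s for wi in w])
    new_centers = []
    for j in range(len(centers)):
        num = sum(memberships[i][j] * x for i, x in enumerate(points))
        den = sum(memberships[i][j] for i in range(len(points)))
        new_centers.append(num / den)
    return new_centers

pts = [0.0, 0.2, 0.9, 1.1]
c = [0.3, 0.8]
for _ in range(50):
    c = me_cluster_step(pts, c, beta=10.0)
print([round(v, 2) for v in c])
```

As β → ∞ the memberships harden to 0/1 and the iteration reduces to hard C-means, which is the "soft generalization" relationship the abstract refers to; the counterexamples in the record above concern precisely where such iterations can stall at saddle points.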
Distribution of maximum loss of fractional Brownian motion with drift
Çağlar, Mine; Vardar-Acar, Ceren
2013-01-01
In this paper, we find bounds on the distribution of the maximum loss of fractional Brownian motion with H >= 1/2 and derive estimates on its tail probability. Asymptotically, the tail of the distribution of maximum loss over [0, t] behaves like the tail of the marginal distribution at time t.
48 CFR 436.575 - Maximum workweek-construction schedule.
2010-10-01
The contracting officer shall insert the clause at 452.236-75, Maximum Workweek-Construction Schedule, if the clause at FAR 52.236-15 is used and the contractor's...
30 CFR 57.5039 - Maximum permissible concentration.
2010-07-01
Except as provided by standard § 57.5005, persons shall not be exposed to air containing concentrations of radon daughters exceeding 1.0 WL in active workings.
5 CFR 550.105 - Biweekly maximum earnings limitation.
2010-01-01
5 CFR 550.106 - Annual maximum earnings limitation.
2010-01-01
32 CFR 842.35 - Depreciation and maximum allowances.
2010-07-01
The military services have jointly established the "Allowance List-Depreciation Guide"...
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x′ (or Δ_t u_x) = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x), x ∈ ℤ. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
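The lattice equation above can be explored numerically. A minimal sketch, assuming explicit Euler time stepping, fixed boundary values, and made-up parameters (k = 1, a = 0.4, dt = 0.01): for initial data in [0, 1] and a small enough time step, the weak maximum principle suggests the solution should stay in [0, 1].

```python
def nagumo_lattice_step(u, k, a, dt):
    """One explicit Euler step of the lattice Nagumo equation
    u_x' = k(u_{x-1} - 2 u_x + u_{x+1}) + f(u_x),  f(u) = u(1 - u)(u - a),
    with fixed (Dirichlet) boundary values at both ends."""
    f = lambda v: v * (1 - v) * (v - a)
    new = u[:]
    for x in range(1, len(u) - 1):
        new[x] = u[x] + dt * (k * (u[x - 1] - 2 * u[x] + u[x + 1]) + f(u[x]))
    return new

# Initial data in [0, 1]; bistable dynamics pull values toward 0 or 1.
u = [0.0, 0.1, 0.8, 0.3, 0.6, 0.0]
for _ in range(1000):
    u = nagumo_lattice_step(u, k=1.0, a=0.4, dt=0.01)
print(min(u) >= 0.0 and max(u) <= 1.0)  # prints True
```

The invariance of [0, 1] here follows from a CFL-type condition (2·k·dt < 1 together with f vanishing at 0 and 1), which is the discrete analogue of the time-step dependence discussed in the abstract.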
Experimental study on prediction model for maximum rebound ratio
LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong
2007-01-01
The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values, as expected, while the fourth lies close to and slightly above the estimated maximum possible PPV. The comparison shows that the PPVs predicted by the proposed model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating PPV in a rock mass with a set of joints due to application of a two-dimensional compressional wave at the boundary of a tunnel or a borehole.
2010-07-01
..., power density, and maximum in-use engine speed (40 CFR 1042.140): the maximum in-use engine speed used for calculating the NOX... is determined as specified in 40 CFR 1065.610, following the procedures of 40 CFR part 1065 and based on the manufacturer's design and production specifications for the...
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize
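The statistic these models predict, the annual maximum moving-average concentration, can be illustrated directly. A minimal sketch with synthetic daily data (not WARP output): compute the annual maximum and the maximum 21-day moving average of a year of daily concentrations.

```python
# Annual maximum vs. annual maximum 21-day moving-average concentration,
# computed on a synthetic daily series (a spring pulse over a low baseline).

def max_moving_average(daily, window=21):
    """Maximum of the trailing moving average over a series of daily values."""
    best = float("-inf")
    for i in range(window, len(daily) + 1):
        avg = sum(daily[i - window:i]) / window
        best = max(best, avg)
    return best

daily = [0.1] * 365
for d in range(120, 150):   # 30-day pulse at 2.0 units
    daily[d] = 2.0

annual_max = max(daily)
annual_max_21d = max_moving_average(daily, 21)
print(annual_max, round(annual_max_21d, 2))  # prints 2.0 2.0
```

Because the pulse here is longer than the 21-day window, both statistics coincide; for pulses shorter than the averaging duration the moving-average maximum falls below the daily maximum, which is why the durations (21, 60, 90 days) matter for benchmark comparisons.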
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
Anonymous
2002-01-01
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and a maximum likelihood estimate of the identification parameters is then given. In order to decrease the asymptotic error, a corrector of the maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least squares methods. A simulation example shows that the corrector of the maximum likelihood estimation approximates the true parameters with higher precision than the least squares methods.
Maximum frequency of the decametric radiation from Jupiter
Barrow, C. H.; Alexander, J. K.
1980-01-01
The upper frequency limits of Jupiter's decametric radio emission are found to be essentially the same when observed from the earth or, with considerably higher sensitivity, from the Voyager spacecraft close to Jupiter. This suggests that the maximum frequency is a real cut-off corresponding to a maximum gyrofrequency of about 38-40 MHz at Jupiter. It no longer appears to be necessary to specify different cut-off frequencies for the Io and non-Io emission as the maximum frequencies are roughly the same in each case.
郝琳; 马长林
2014-01-01
This paper implements a key step of AHP based on Matlab/GUI: the consistency test and correction of the pairwise comparison matrix. The software interface is concise, friendly and convenient, providing a practical tool for applying AHP to decision problems in various fields.
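The consistency test mentioned above is standard AHP: CR = CI / RI with CI = (λ_max − n)/(n − 1). A minimal sketch (independent of the paper's Matlab/GUI implementation), with a hypothetical, mildly inconsistent 3×3 comparison matrix; λ_max is found by power iteration.

```python
def consistency_ratio(a):
    """AHP consistency test: CR = CI / RI, CI = (lambda_max - n)/(n - 1).
    lambda_max is approximated by power iteration on the comparison matrix."""
    n = len(a)
    ri_table = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's RI values
    v = [1.0] * n
    for _ in range(100):
        w = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    # With v normalised to sum 1, the entries of A v sum to lambda_max.
    lam = sum(sum(a[i][j] * v[j] for j in range(n)) for i in range(n))
    ci = (lam - n) / (n - 1)
    return ci / ri_table[n]

a = [[1, 2, 4], [1 / 2, 1, 3], [1 / 4, 1 / 3, 1]]
print(round(consistency_ratio(a), 3))
```

By the usual convention, a matrix with CR below 0.1 is accepted; otherwise the panel's judgments are revised, which is the "correction" step the abstract refers to.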
Distribution of Errors Reported by LOD2 LODStats Project
Hoekstra, R.J.; Groth, P.T.
Description: Results of discussion groups at the Linked Science Workshop 2013 held at the International Semantic Web Conference (http://linkedscience.org/events/lisc2013/). Participants were asked to develop matrices about how semantic web/linked data solutions can help address reproducibility/re* pro
The Application of Maximum Principle in Supply Chain Cost Optimization
Zhou Ling; Wang Jun
2013-01-01
In this paper, using the maximum principle for analyzing dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for high-tech perishable products supply chain...
Maximum Principle for Nonlinear Cooperative Elliptic Systems on ℝ^N
LEADI Liamidi; MARCOS Aboubacar
2011-01-01
We investigate in this work necessary and sufficient conditions for having a maximum principle for a cooperative elliptic system on the whole of ℝ^N. Moreover, we prove the existence of solutions for the considered system by an approximation method.
Maximum Likelihood Factor Structure of the Family Environment Scale.
Fowler, Patrick C.
1981-01-01
Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion v conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)
Multiresolution Maximum Intensity Volume Rendering by Morphological Adjunction Pyramids
Roerdink, Jos B.T.M.
2001-01-01
We describe a multiresolution extension to maximum intensity projection (MIP) volume rendering, allowing progressive refinement and perfect reconstruction. The method makes use of morphological adjunction pyramids. The pyramidal analysis and synthesis operators are composed of morphological 3-D
Changes in context and perception of maximum reaching height.
Wagman, Jeffrey B; Day, Brian M
2014-01-01
Successfully performing a given behavior requires flexibility in both perception and behavior. In particular, doing so requires perceiving whether that behavior is possible across the variety of contexts in which it might be performed. Three experiments investigated how (changes in) context (i.e., point of observation and intended reaching task) influenced perception of maximum reaching height. The results of experiment 1 showed that perceived maximum reaching height more closely reflected actual reaching ability when perceivers occupied a point of observation compatible with that required for the reaching task. The results of experiments 2 and 3 showed that practice perceiving maximum reaching height from a given point of observation improved perception of maximum reaching height from a different point of observation, regardless of whether such practice occurred at a compatible or incompatible point of observation. In general, such findings show bounded flexibility in the perception of affordances and are thus consistent with a description of perceptual systems as smart perceptual devices.
Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)
U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges. The number of maximum entropy … in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight.
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
Maximum Photovoltaic Penetration Levels on Typical Distribution Feeders: Preprint
Hoke, A.; Butler, R.; Hambrick, J.; Kroposki, B.
2012-07-01
This paper presents simulation results for a taxonomy of typical distribution feeders with various levels of photovoltaic (PV) penetration. For each of the 16 feeders simulated, the maximum PV penetration that did not result in steady-state voltage or current violation is presented for several PV location scenarios: clustered near the feeder source, clustered near the midpoint of the feeder, clustered near the end of the feeder, randomly located, and evenly distributed. In addition, the maximum level of PV is presented for single, large PV systems at each location. Maximum PV penetration was determined by requiring that feeder voltages stay within ANSI Range A and that feeder currents stay within the ranges determined by overcurrent protection devices. Simulations were run in GridLAB-D using hourly time steps over a year with randomized load profiles based on utility data and typical meteorological year weather data. For 86% of the cases simulated, maximum PV penetration was at least 30% of peak load.
16 CFR 1505.8 - Maximum acceptable material temperatures.
2010-01-01
... Association, 155 East 44th Street, New York, NY 10017. [Table of maximum acceptable material temperatures, in °C and °F, by material, e.g. capacitors.] If a capacitor has no marked temperature limit, the maximum acceptable temperature will be assumed to be 65...
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC GIS Inventory (aka Ramona) — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
PREDICTION OF MAXIMUM DRY DENSITY OF LOCAL GRANULAR ...
methods. A test on a soil of relatively high solid density revealed that the developed relation loses ... where Pd max is the laboratory maximum dry ... Addis-Jinima Road Rehabilitation ... data sets that differ considerably in the magnitude.
Solar Panel Maximum Power Point Tracker for Power Utilities
Sandeep Banik,
2014-01-01
"Solar Panel Maximum Power Point Tracker for Power Utilities": as the name implies, this is a photovoltaic system that uses a photovoltaic array as the source of electrical power supply. Every photovoltaic (PV) array has an optimum operating point, called the maximum power point, which varies depending on the insolation level and array voltage, so a maximum power point tracker (MPPT) is needed to operate the PV array at its maximum power point. The objective of this thesis project is to build a photovoltaic (PV) array of 121.6 V DC (6 cells, each 20 V, 100 W) and convert the DC voltage to single-phase 120 V, 50 Hz AC voltage using switch-mode power converters and inverters.
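The abstract does not name the tracking algorithm; a common choice, shown here purely as an illustrative assumption, is perturb-and-observe: keep stepping the operating voltage in the direction that last increased output power. The power curve below is a toy concave function, not measured panel data.

```python
def perturb_and_observe(pv_power, v0, dv=0.5, steps=60):
    """Minimal perturb-and-observe MPPT sketch.

    pv_power(v) is the panel power at operating voltage v; the tracker
    reverses the perturbation direction whenever power drops."""
    v, p = v0, pv_power(v0)
    step = dv
    for _ in range(steps):
        v_new = v + step
        p_new = pv_power(v_new)
        if p_new < p:      # power dropped: reverse the perturbation
            step = -step
        v, p = v_new, p_new
    return v

# Toy concave power curve with its maximum power point at 17 V.
curve = lambda v: max(0.0, 100 - (v - 17.0) ** 2)
v_mpp = perturb_and_observe(curve, v0=12.0)
print(round(v_mpp, 1))  # settles in the neighbourhood of 17 V
```

In steady state the operating point oscillates around the maximum power point with amplitude set by the perturbation size dv, which is the classic trade-off of this scheme between tracking speed and ripple.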
A Family of Maximum SNR Filters for Noise Reduction
Huang, Gongping; Benesty, Jacob; Long, Tao
2014-01-01
This paper is devoted to the study and analysis of the maximum signal-to-noise ratio (SNR) filters for noise reduction, both in the time and short-time Fourier transform (STFT) domains, with a single microphone and with multiple microphones. In the time domain, we show that the maximum SNR filters can significantly increase the SNR but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR … This demonstrates that the maximum SNR filters, particularly the multichannel ones, in the STFT domain may be of great practical value.
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
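Maximum likelihood fitting of a two-component normal mixture is typically done with the EM algorithm. A minimal univariate sketch on synthetic data (the economic series in the paper are not reproduced here): the E-step computes responsibilities, the M-step applies weighted ML updates.

```python
import math, random

def em_two_normals(data, iters=200):
    """EM for a two-component univariate normal mixture: a minimal
    maximum likelihood sketch (means, standard deviations, mixing weight)."""
    mu1, mu2 = min(data), max(data)
    s1 = s2 = (max(data) - min(data)) / 4 or 1.0
    pi = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        r = []
        for x in data:
            p1 = pi * math.exp(-(x - mu1) ** 2 / (2 * s1 ** 2)) / s1
            p2 = (1 - pi) * math.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)) / s2
            r.append(p1 / (p1 + p2))
        # M-step: weighted maximum likelihood updates
        n1 = sum(r); n2 = len(data) - n1
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1) or 1e-6
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2) or 1e-6
        pi = n1 / len(data)
    return mu1, mu2, pi

random.seed(0)
data = [random.gauss(0, 1) for _ in range(300)] + [random.gauss(5, 1) for _ in range(300)]
mu1, mu2, pi = em_two_normals(data)
print(round(mu1, 1), round(mu2, 1), round(pi, 2))
```

Each EM iteration is guaranteed not to decrease the mixture likelihood, which is why the procedure serves as a practical maximum likelihood estimator for latent class models like the one fitted in the paper.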
On the maximum sufficient range of interstellar vessels
Cartin, Daniel
2011-01-01
This paper considers the likely maximum range of space vessels providing the basis of a mature interstellar transportation network. Using the principle of sufficiency, it is argued that this range will be less than three parsecs for the average interstellar vessel. This maximum range provides access from the Solar System to a large majority of nearby stellar systems, with total travel distances within the network not excessively greater than actual physical distance.
Efficiency at Maximum Power of Interacting Molecular Machines
Golubeva, Natalia; Imparato, Alberto
2012-01-01
We investigate the efficiency of systems of molecular motors operating at maximum power. We consider two models of kinesin motors on a microtubule: for both the simplified and the detailed model, we find that the many-body exclusion effect enhances the efficiency at maximum power of the many-motor system with respect to the single-motor case. Remarkably, we find that this effect occurs in a limited region of the system parameters, compatible with the biologically relevant range.
Filtering Additive Measurement Noise with Maximum Entropy in the Mean
Gzyl, Henryk
2007-01-01
The purpose of this note is to show how the method of maximum entropy in the mean (MEM) may be used to improve parametric estimation when the measurements are corrupted by a large level of noise. The method is developed in the context of a concrete example: estimation of the parameter of an exponential distribution. We compare the performance of our method with the Bayesian and maximum likelihood approaches.
The maximum entropy production principle: two basic questions.
Martyushev, Leonid M
2010-05-12
The overwhelming majority of applications of maximum entropy production to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise available today.
A tropospheric ozone maximum over the equatorial Southern Indian Ocean
L. Zhang
2012-05-01
We examine the distribution of tropical tropospheric ozone (O3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O3 during 2005 to 2009 reveal a distinct, persistent O3 maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O3 observations from the Ozone Monitoring Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamical factors. The O3 maximum is dominated by ozone production driven by lightning nitrogen oxide (NOx) emissions, which accounts for 62% of the tropospheric column O3 in May 2006. We find that the contributions from biomass burning, soil, anthropogenic and biogenic sources to the O3 maximum are rather small. Ozone production in the lightning outflow from Central Africa and from South America both peak in May and are directly responsible for the O3 maximum over the western ESIO. The lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O3 maximum is driven largely by the anomalous anticyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anticyclones and then transported northward to the ESIO.
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...
Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization
Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue
2010-01-01
This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus was constructed based on the maximum entropy principle by WANG Xiao-Long et al. They proved that the maximum-entropy solution of the model is exactly the frequency distribution at which a population reaches Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg equilibrium when the genotype entropy of the population reaches its maximal possible value, and that the maximum entropy frequency distribution is equivalent to the Hardy-Weinberg equilibrium distribution at one locus. They further assumed that the maximum entropy frequency distribution is equivalent to all genetic equilibrium distributions. This is incorrect, however: the maximum entropy frequency distribution is equivalent to the Hardy-Weinberg equilibrium distribution only with respect to one locus or a limited number of loci. The case of a limited number of loci is proved in this paper. Finally, we discuss an example in which the maximum entropy principle is not equivalent to other genetic equilibria.
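The one-locus equivalence can be checked numerically: fixing the allele frequency p and maximizing the genotype entropy over all admissible genotype distributions recovers the Hardy-Weinberg frequencies p^2, 2pq, q^2. The degeneracy weight of 2 for the heterozygote (counting its two orderings as separate microstates) is an assumption of this sketch, not a detail taken from the paper:

```python
import math

def genotype_entropy(f_aa, f_ab, f_bb):
    """Entropy of the genotype distribution, with the heterozygote
    carrying a degeneracy factor of 2 (assumed weighting convention)."""
    h = 0.0
    for f, g in ((f_aa, 1), (f_ab, 2), (f_bb, 1)):
        if f > 0:
            h -= f * math.log(f / g)
    return h

def max_entropy_genotypes(p, steps=100000):
    """Grid-search the genotype distribution of maximum entropy subject
    to a fixed allele frequency p, i.e. f_aa + f_ab / 2 = p."""
    best, best_h = None, -1.0
    for i in range(1, steps):
        f_aa = p * i / steps          # f_aa ranges over (0, p)
        f_ab = 2.0 * (p - f_aa)       # forced by the allele-frequency constraint
        f_bb = 1.0 - f_aa - f_ab
        if f_bb <= 0:
            continue
        h = genotype_entropy(f_aa, f_ab, f_bb)
        if h > best_h:
            best, best_h = (f_aa, f_ab, f_bb), h
    return best

p = 0.3
f_aa, f_ab, f_bb = max_entropy_genotypes(p)
# Hardy-Weinberg predicts p^2 = 0.09, 2pq = 0.42, q^2 = 0.49
```

Setting the derivative of the constrained entropy to zero gives (p - f_aa)^2 = f_aa (1 - 2p + f_aa), which is satisfied by f_aa = p^2; the grid search confirms this numerically.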
Weak law of large numbers for sum of partial sums of pairwise NQD sequences
俞周晓; 王文胜
2012-01-01
The sum of partial sums has wide applications in mathematics and economics, for example in random walks, time series analysis and ruin theory. In this paper, the weak law of large numbers for the sum of partial sums T_n = S_1 + ... + S_n of pairwise NQD sequences, where S_n = X_1 + ... + X_n, is discussed for both the identically and the non-identically distributed cases. Some results in the literature are improved and extended from the weak law of large numbers for partial sums of pairwise NQD sequences to that for the sum of partial sums.
Pairwise Influence Quantification Bayes Model for Online Social Networks
戴云晶; 邓倩妮
2013-01-01
Traditional research on pairwise influence quantification has focused on social networks such as coauthor networks, so it does not fully apply to current online social networks. Furthermore, existing models proposed for online social networks are simple, one-sided and lack accuracy. We therefore propose a Bayes model that integrates all the useful information an online social network provides to quantify the pairwise influence between two users. By analyzing and comparing the influential users found by these models, we find that our model is more accurate and comprehensive than existing models.
Kuracina Richard
2015-06-01
The article deals with the measurement of the maximum explosion pressure and the maximum rate of explosion pressure rise of a wood dust cloud. The measurements were carried out according to STN EN 14034-1+A1:2011 Determination of explosion characteristics of dust clouds, Part 1: Determination of the maximum explosion pressure pmax of dust clouds, and STN EN 14034-2+A1:2012 Determination of explosion characteristics of dust clouds, Part 2: Determination of the maximum rate of explosion pressure rise (dp/dt)max of dust clouds. The wood dust cloud in the chamber is generated mechanically. The explosion tests on wood dust clouds showed that the maximum pressure, 7.95 bar, was reached at a concentration of 450 g/m3. The fastest rise of pressure, 68 bar/s, was also observed at a concentration of 450 g/m3.
Individual Module Maximum Power Point Tracking for Thermoelectric Generator Systems
Vadstrup, Casper; Schaltz, Erik; Chen, Min
2013-07-01
In a thermoelectric generator (TEG) system the DC/DC converter is under the control of a maximum power point tracker which ensures that the TEG system outputs the maximum possible power to the load. However, if the conditions, e.g., temperature, health, etc., of the TEG modules are different, each TEG module will not produce its maximum power. If each TEG module is controlled individually, each TEG module can be operated at its maximum power point and the TEG system output power will therefore be higher. In this work a power converter based on noninverting buck-boost converters capable of handling four TEG modules is presented. It is shown that, when each module in the TEG system is operated under individual maximum power point tracking, the system output power for this specific application can be increased by up to 8.4% relative to the situation when the modules are connected in series and 16.7% relative to the situation when the modules are connected in parallel.
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.
2013-10-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive heat transport region. We also study the size dependence of the efficiency at maximum power. Interestingly, we find that the efficiency at maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for a heat engine operating over a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at maximum power under an extreme condition may in principle reach the Carnot efficiency. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2013.
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
Predicting species' maximum dispersal distances from simple plant traits.
Tamme, Riin; Götzenberger, Lars; Zobel, Martin; Bullock, James M; Hooftman, Danny A P; Kaasik, Ants; Pärtel, Meelis
2014-02-01
Many studies have shown plant species' dispersal distances to be strongly related to life-history traits, but how well different traits can predict dispersal distances is not yet known. We used cross-validation techniques and a global data set (576 plant species) to measure the predictive power of simple plant traits to estimate species' maximum dispersal distances. Including dispersal syndrome (wind, animal, ant, ballistic, and no special syndrome), growth form (tree, shrub, herb), seed mass, seed release height, and terminal velocity in different combinations as explanatory variables, we constructed models to explain variation in measured maximum dispersal distances and evaluated their power to predict maximum dispersal distances. Predictions are more accurate, but also limited to a particular set of species, if data on more specific traits, such as terminal velocity, are available. The best model (R2 = 0.60) included dispersal syndrome, growth form, and terminal velocity as fixed effects. Reasonable predictions of maximum dispersal distance (R2 = 0.53) are also possible when using only the simplest and most commonly measured traits: dispersal syndrome and growth form together with species taxonomy data. We provide a function (dispeRsal) to be run in the software package R. This enables researchers to estimate maximum dispersal distances with confidence intervals for plant species using measured traits as predictors. Easily obtainable trait data, such as dispersal syndrome (inferred from seed morphology) and growth form, enable predictions to be made for a large number of species.
Prediction of three dimensional maximum isometric neck strength.
Fice, Jason B; Siegmund, Gunter P; Blouin, Jean-Sébastien
2014-09-01
We measured maximum isometric neck strength under combinations of flexion/extension, lateral bending and axial rotation to determine whether neck strength in three dimensions (3D) can be predicted from principal axes strength. This would allow biomechanical modelers to validate their neck models across many directions using only principal axis strength data. Maximum isometric neck moments were measured in 9 male volunteers (29±9 years) for 17 directions. The 3D moments were normalized by the principal axis moments, and compared to unity for all directions tested. Finally, each subject's maximum principal axis moments were used to predict their resultant moment in the off-axis directions. Maximum moments were 30±6 N m in flexion, 32±9 N m in lateral bending, 51±11 N m in extension, and 13±5 N m in axial rotation. The normalized 3D moments were not significantly different from unity (95% confidence interval contained one), except for three directions that combined ipsilateral axial rotation and lateral bending; in these directions the normalized moments exceeded one. Predicted resultant moments compared well to the actual measured values (r2=0.88). Despite exceeding unity, the normalized moments were consistent across subjects to allow prediction of maximum 3D neck strength using principal axes neck strength.
Predicting Maximum Sunspot Number in Solar Cycle 24
Nipa J Bhatt; Rajmal Jain; Malini Aggarwal
2009-03-01
A few prediction methods have been developed based on the precursor technique which is found to be successful for forecasting the solar activity. Considering the geomagnetic activity aa indices during the descending phase of the preceding solar cycle as the precursor, we predict the maximum amplitude of annual mean sunspot number in cycle 24 to be 111 ± 21. This suggests that the maximum amplitude of the upcoming cycle 24 will be less than cycles 21–22. Further, we have estimated the annual mean geomagnetic activity aa index for the solar maximum year in cycle 24 to be 20.6 ± 4.7 and the average of the annual mean sunspot number during the descending phase of cycle 24 is estimated to be 48 ± 16.8.
Construction and enumeration of Boolean functions with maximum algebraic immunity
ZHANG WenYing; WU ChuanKun; LIU XiangZhong
2009-01-01
Algebraic immunity is a new cryptographic criterion proposed against algebraic attacks. In order to resist algebraic attacks, Boolean functions used in many stream ciphers should possess high algebraic immunity. This paper presents two main results on finding balanced Boolean functions with maximum algebraic immunity. By swapping the values of two bits of the symmetric Boolean function constructed by Dalai, and then generalizing the result to swapping some pairs of bits, a new class of Boolean functions with maximum algebraic immunity is constructed. An enumeration of such functions is also given. For a given function p(x) with deg(p(x)) < [n/2], we give a method to construct functions of the form p(x)+q(x) that achieve maximum algebraic immunity, where every term with nonzero coefficient in the ANF of q(x) has degree no less than [n/2].
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
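Burg's recursion itself is compact enough to sketch. The fragment below estimates the AR prediction-error coefficients of a time series and evaluates the corresponding MEM power spectrum; it is a generic textbook implementation, not the instrument pipeline described above, and the AR(1) test signal and record length are invented:

```python
import cmath
import math
import random

def burg(x, order):
    """Burg maximum-entropy AR fit.
    Returns ([1, a1, ..., ap], residual_power)."""
    n = len(x)
    f = list(x)                       # forward prediction errors
    b = list(x)                       # backward prediction errors
    a = [1.0]                         # prediction-error filter coefficients
    e = sum(v * v for v in x) / n     # zeroth-order error power
    for m in range(1, order + 1):
        # reflection coefficient minimising forward + backward error power
        num = sum(f[i] * b[i - 1] for i in range(m, n))
        den = sum(f[i] ** 2 + b[i - 1] ** 2 for i in range(m, n))
        k = -2.0 * num / den
        # Levinson update of the prediction-error filter
        a = [1.0] + [a[i] + k * a[m - i] for i in range(1, m)] + [k]
        # update error sequences in place (descending i preserves old values)
        for i in range(n - 1, m - 1, -1):
            fi = f[i]
            f[i] = fi + k * b[i - 1]
            b[i] = b[i - 1] + k * fi
        e *= 1.0 - k * k
    return a, e

def mem_psd(a, e, freq):
    """MEM spectral estimate S(f) = e / |A(e^{-i 2 pi f})|^2, f in cycles/sample."""
    A = sum(ak * cmath.exp(-2j * math.pi * freq * k) for k, ak in enumerate(a))
    return e / abs(A) ** 2

# test signal: AR(1) process x[n] = 0.9 x[n-1] + w[n]
random.seed(0)
x, prev = [], 0.0
for _ in range(4000):
    prev = 0.9 * prev + random.gauss(0.0, 1.0)
    x.append(prev)
coeffs, resid = burg(x, 1)
```

For the AR(1) input the first-order fit recovers a coefficient near -0.9, and the resulting spectrum is low-pass, as expected for a positively correlated process.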
Mass mortality of the vermetid gastropod Ceraesignum maximum
Brown, A. L.; Frazer, T. K.; Shima, J. S.; Osenberg, C. W.
2016-09-01
Ceraesignum maximum (G.B. Sowerby I, 1825), formerly Dendropoma maximum, was subject to a sudden, massive die-off in the Society Islands, French Polynesia, in 2015. On Mo'orea, where we have detailed documentation of the die-off, these gastropods were previously found in densities up to 165 m-2. In July 2015, we surveyed shallow back reefs of Mo'orea before, during and after the die-off, documenting their swift decline. All censused populations incurred 100% mortality. Additional surveys and observations from Mo'orea, Tahiti, Bora Bora, and Huahine (but not Taha'a) suggested a similar, and approximately simultaneous, die-off. The cause(s) of this cataclysmic mass mortality are currently unknown. Given the previously documented negative effects of C. maximum on corals, we expect the die-off will have cascading effects on the reef community.
The optimal polarizations for achieving maximum contrast in radar images
Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.
1988-01-01
There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., the filter that produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization that maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when data is processed with the optimal polarimetric matched filter.
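The eigenvalue step can be illustrated on a toy real-valued 2x2 case: maximizing the contrast ratio w^T A w / w^T B w between two classes with power matrices A and B reduces to the generalized eigenproblem A w = lam B w, and the eigenvector of the largest lam is the matched filter. The matrices below are invented for the sketch; the actual radar problem works with complex polarimetric covariance matrices:

```python
import math

def best_filter_2x2(A, B):
    """Solve det(A - lam*B) = 0 for symmetric 2x2 A and positive-definite B;
    return (largest eigenvalue, unit eigenvector) of A w = lam B w."""
    (a11, a12), (_, a22) = A
    (b11, b12), (_, b22) = B
    # det(A - lam B) is quadratic in lam: c2 lam^2 + c1 lam + c0
    c2 = b11 * b22 - b12 * b12
    c1 = 2 * a12 * b12 - a11 * b22 - a22 * b11
    c0 = a11 * a22 - a12 * a12
    disc = math.sqrt(c1 * c1 - 4 * c2 * c0)
    lam = (-c1 + disc) / (2 * c2)          # larger root since c2 > 0
    # eigenvector from the first row of (A - lam B) w = 0
    w = (-(a12 - lam * b12), a11 - lam * b11)
    norm = math.hypot(*w)
    return lam, (w[0] / norm, w[1] / norm)

def contrast(w, A, B):
    """Ratio of class powers w^T A w / w^T B w."""
    qa = A[0][0] * w[0] ** 2 + 2 * A[0][1] * w[0] * w[1] + A[1][1] * w[1] ** 2
    qb = B[0][0] * w[0] ** 2 + 2 * B[0][1] * w[0] * w[1] + B[1][1] * w[1] ** 2
    return qa / qb

A = ((4.0, 1.0), (1.0, 2.0))   # class-1 power matrix (assumed values)
B = ((1.0, 0.2), (0.2, 3.0))   # class-2 power matrix (assumed values)
lam, w = best_filter_2x2(A, B)
```

The contrast achieved by the returned filter equals the eigenvalue lam, and no other unit vector does better, which is exactly the property the optimal polarimetric matched filter exploits.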
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\\mathrm{T}}$ and OSE$_{\\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
Influence of maximum decking charge on intensity of blasting vibration
[Anonymous]
2006-01-01
Based on the short-time, non-stationary random character of blasting vibration signals, the relationship between the maximum decking charge and the energy distribution of blasting vibration signals was investigated by means of the wavelet packet method. Firstly, the characteristics of the wavelet transform and wavelet packet analysis were described. Secondly, the blasting vibration signals were analyzed by wavelet packets in MATLAB, and the change of the energy distribution curve across different frequency bands was obtained. Finally, the way the energy distribution of blasting vibration signals changes with the maximum decking charge was analyzed. The results show that with the increase of decking charge, the ratio of high-frequency energy to total energy decreases, the dominant frequency bands of blasting vibration signals tend towards low frequencies, and blasting vibration does not depend on the maximum decking charge.
The subsequence weight distribution of summed maximum length digital sequences
Weathers, G. D.; Graf, E. R.; Wallace, G. R.
1974-01-01
An attempt is made to develop mathematical formulas to provide the basis for the design of pseudorandom signals intended for applications requiring accurate knowledge of the statistics of the signals. The analysis approach involves calculating the first five central moments of the weight distribution of subsequences of hybrid-sum sequences. The hybrid-sum sequence is formed from the modulo-two sum of k maximum length sequences and is an extension of the sum sequences formed from two maximum length sequences that Gilson (1966) evaluated. The weight distribution of the subsequences serves as an approximation to the filtering process. The basic reason for the analysis of hybrid-sum sequences is to establish a large group of sequences with good statistical properties. It is shown that this can be accomplished much more efficiently using the hybrid-sum approach rather than forming the group strictly from maximum length sequences.
Maximum power point tracking for optimizing energy harvesting process
Akbari, S.; Thang, P. C.; Veselov, D. S.
2016-10-01
There has been growing interest in using energy harvesting techniques for powering wireless sensor networks. The appeal of this technology lies in the sensors' limited operation time, which results from the finite capacity of batteries, and in the need for a stable power supply in some applications. Energy can be harvested from the sun, wind, vibration, heat, etc. It is reasonable to develop multi-source energy harvesting platforms to increase the amount of harvested energy and to mitigate the intermittent nature of ambient sources. In the context of solar energy harvesting, it is possible to develop algorithms for finding the optimal operating point of solar panels at which maximum power is generated. These algorithms are known as maximum power point tracking techniques. In this article, we review the concept of maximum power point tracking and provide an overview of the research conducted in this area for wireless sensor network applications.
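The most common such technique, perturb and observe, fits in a few lines: step the operating voltage, keep stepping in the same direction while the measured power rises, and reverse otherwise. The panel model below is a toy concave power curve with its peak at 17 V, not a real PV characteristic, and all numbers are assumptions:

```python
def panel_power(v):
    """Toy PV power curve (watts) with a single maximum near 17 V.
    A real panel would be measured, not modelled like this."""
    return max(0.0, 100.0 - (v - 17.0) ** 2)

def perturb_and_observe(v0=5.0, step=0.2, iterations=300):
    """Classic perturb-and-observe maximum power point tracking loop."""
    v = v0
    direction = 1.0
    last_p = panel_power(v)
    for _ in range(iterations):
        v += direction * step
        p = panel_power(v)
        if p < last_p:           # power dropped: reverse the perturbation
            direction = -direction
        last_p = p
    return v

v_mpp = perturb_and_observe()
```

The tracker climbs to the peak and then oscillates within a step or two of it; that residual oscillation is the well-known cost of P&O that fancier trackers (incremental conductance, adaptive step size) try to reduce.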
Proscriptive Bayesian Programming and Maximum Entropy: a Preliminary Study
Koike, Carla Cavalcante
2008-11-01
Some problems found in robotic systems, such as obstacle avoidance, are better described using proscriptive commands, where only prohibited actions are indicated, in contrast to prescriptive situations, which demand that a specific command be specified. An interesting question arises regarding the possibility of learning automatically whether proscriptive commands are suitable and which parametric function is best applied. Lately, a great variety of problems in the robotics domain have been the object of research using probabilistic methods, including the use of maximum entropy in automatic learning for robot control systems. This work presents a preliminary study on the automatic learning of proscriptive robot control using maximum entropy and Bayesian Programming. It is verified whether maximum entropy and related methods can favour proscriptive commands in an obstacle avoidance task executed by a mobile robot.
Multitime maximum principle approach of minimal submanifolds and harmonic maps
Udriste, Constantin
2011-01-01
Some optimization problems coming from differential geometry, for example the minimal submanifold problem and the harmonic map problem, are solved here via interior solutions of appropriate multitime optimal control problems. Section 1 underlines some scientific domains in which multitime optimal control problems appear. Section 2 (Section 3) recalls the multitime maximum principle for optimal control problems with multiple (curvilinear) integral cost functionals and $m$-flow type constraint evolution. Section 4 shows that there exists a multitime maximum principle approach to multitime variational calculus. Section 5 (Section 6) proves that minimal submanifolds (harmonic maps) are optimal solutions of multitime evolution PDEs in an appropriate multitime optimal control problem. Section 7 uses the multitime maximum principle to show that, of all solids having a given surface area, the sphere is the one having the greatest volume. Section 8 studies the minimal area of a multitime linear flow as optimal c...
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Approximate maximum-entropy moment closures for gas dynamics
McDonald, James G.
2016-11-01
Accurate prediction of flows that exist between the traditional continuum regime and the free-molecular regime has proven difficult to obtain. Current methods are either inaccurate in this regime or prohibitively expensive for practical problems. Moment closures have long held the promise of providing new, affordable, accurate methods in this regime. The maximum-entropy hierarchy of closures seems to offer particularly attractive physical and mathematical properties. Unfortunately, several difficulties render the practical implementation of maximum-entropy closures very difficult. This work examines the use of simple approximations to these maximum-entropy closures and shows that physical accuracy vastly improved over continuum methods can be obtained without a significant increase in computational cost. Initially the technique is demonstrated for a simple one-dimensional gas. It is then extended to the full three-dimensional setting. The resulting moment equations are used for the numerical solution of shock-wave profiles with promising results.