Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets (rooted binary trees on three leaves) or quartets (unrooted binary trees on four leaves). We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D, each distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
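The consistency notion at the core of this abstract can be made concrete: a rooted triplet ab|c is consistent with a tree T exactly when the smallest subtree of T containing both a and b excludes c. A minimal sketch, with nested-tuple trees and helper names of our own (not from the paper):

```python
# Check whether a rooted triplet ab|c is consistent with a rooted tree
# given as nested tuples, e.g. ((("a", "b"), "c"), "d").

def leaves(t):
    """Set of leaf labels under node t."""
    if isinstance(t, str):
        return {t}
    return set().union(*(leaves(child) for child in t))

def lca_leafset(t, wanted):
    """Leaf set of the smallest subtree of t containing all labels in wanted."""
    if isinstance(t, str):
        return {t}
    for child in t:
        if wanted <= leaves(child):
            return lca_leafset(child, wanted)
    return leaves(t)

def consistent(tree, a, b, c):
    """True iff triplet ab|c holds in tree: the LCA of a and b excludes c."""
    return c not in lca_leafset(tree, {a, b})

tree = ((("a", "b"), "c"), "d")
print(consistent(tree, "a", "b", "c"))  # ab|c holds -> True
print(consistent(tree, "a", "c", "b"))  # ac|b fails -> False
```

A maximum consistent supertree maximizes the number of input triplets for which this check succeeds; the paper's contribution is doing that search exactly within the stated time bounds.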
A Pseudo-Boolean Solution to the Maximum Quartet Consistency Problem
Morgado, Antonio
2008-01-01
Determining the evolutionary history of given biological data is an important task in the biological sciences. Given a set of quartet topologies over a set of taxa, the Maximum Quartet Consistency (MQC) problem consists of computing a global phylogeny that satisfies the maximum number of quartets. A number of solutions have been proposed for the MQC problem, including Dynamic Programming, Constraint Programming, and more recently Answer Set Programming (ASP). ASP is currently the most efficient approach for optimally solving the MQC problem. This paper proposes encoding the MQC problem with pseudo-Boolean (PB) constraints. The use of PB allows solving the MQC problem with efficient PB solvers, and also allows considering different modeling approaches to the MQC problem. Initial results are promising, and suggest that PB can be an effective alternative for solving the MQC problem.
YIN Changming; ZHAO Lincheng; WEI Chengdong
2006-01-01
In a generalized linear model with q × 1 responses, bounded fixed (or adaptive) p × q regressors Z_i, and a general link function, under the most general assumption on the minimum eigenvalue of ∑_{i=1}^n Z_i Z_i′, a moment condition on the responses that is as weak as possible, and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates of the regression parameter vector are asymptotically normal and strongly consistent.
Wang, Yong Tai; Vrongistinos, Konstantinos Dino; Xu, Dali
2008-08-01
The purposes of this study were to examine the consistency of wheelchair athletes' upper-limb kinematics across consecutive propulsive cycles and to investigate the relationship between the maximum angular velocities of the upper arm and forearm and the consistency of the upper-limb kinematic pattern. Eleven elite international wheelchair racers propelled their own chairs on a roller at maximum speed. A Qualisys motion analysis system was used to film the wheelchair propulsive cycles. Six reflective markers placed on the right shoulder, elbow, and wrist joints, the metacarpal, the wheel axis, and the wheel were automatically digitized. The deviations in cycle time, upper-arm and forearm angles, and angular velocities across these propulsive cycles were analyzed. The results demonstrated that in consecutive cycles of wheelchair propulsion, increased maximum angular velocity may lead to increased variability in the upper-limb angular kinematics. It is speculated that this increased variability may be important for distributing load across different upper-extremity muscles to avoid fatigue during wheelchair racing.
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
(no author listed)
2004-01-01
[1] McCullagh, P., Nelder, J. A., Generalized Linear Models, New York: Chapman and Hall, 1989.
[2] Wedderburn, R. W. M., Quasi-likelihood functions, generalized linear models and the Gauss-Newton method, Biometrika, 1974, 61: 439-447.
[3] Fahrmeir, L., Maximum likelihood estimation in misspecified generalized linear models, Statistics, 1990, 21: 487-502.
[4] Fahrmeir, L., Kaufmann, H., Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models, Ann. Statist., 1985, 13: 342-368.
[5] Nelder, J. A., Pregibon, D., An extended quasi-likelihood function, Biometrika, 1987, 74: 221-232.
[6] Bennett, G., Probability inequalities for the sum of independent random variables, JASA, 1962, 57: 33-45.
[7] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974.
[8] Petrov, V. V., Sums of Independent Random Variables, Berlin, New York: Springer-Verlag, 1975.
Jumper, John M; Sosnick, Tobin R
2016-01-01
To address the large gap between time scales that can be easily reached by molecular simulations and those required to understand protein dynamics, we propose a new methodology that computes a self-consistent approximation of the side chain free energy at every integration step. In analogy with the adiabatic Born-Oppenheimer approximation, in which the nuclear dynamics are governed by the energy of the instantaneously equilibrated electronic degrees of freedom, the protein backbone dynamics are simulated as proceeding according to the dictates of the free energy of an instantaneously equilibrated side chain potential. The side chain free energy is computed on the fly; hence, the protein backbone dynamics traverse a greatly smoothed energetic landscape, resulting in extremely rapid equilibration and sampling of the Boltzmann distribution. Because our method employs a reduced model involving single-bead side chains, we also provide a novel, maximum-likelihood type method to parameterize the side chain model using...
ZHANG SanGuo; LIAO Yuan
2008-01-01
In this paper, we explore some weak consistency properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation ∑_{i=1}^n X_i(y_i − μ(X_i′β)) = 0 for the univariate generalized linear model E(y|X) = μ(X′β). Given uncorrelated residuals {e_i = y_i − μ(X_i′β_0), 1 ≤ i ≤ n} and other conditions, we prove that β̂_n − β_0 = O_p(λ_n^{−1/2}) holds, where β̂_n is a root of the above equation, β_0 is the true value of the parameter β, and λ_n denotes the smallest eigenvalue of the matrix S_n = ∑_{i=1}^n X_i X_i′. We also show that the convergence rate above is sharp, provided an independent, non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of the QMLE is S_n^{−1} → 0 as the sample size n → ∞.
Can the maximum entropy principle be explained as a consistency requirement?
Uffink, J.
2001-01-01
The principle of maximum entropy is a general method to assign values to probability distributions on the basis of partial information. This principle, introduced by Jaynes in 1957, forms an extension of the classical principle of insufficient reason. It has been further generalized, both in mathema
15 CFR 930.32 - Consistent to the maximum extent practicable.
2010-01-01
... context of the discretionary powers residing in such agencies. Accordingly, whenever legally permissible... may deviate from full consistency with an approved management program when such deviation is justified... program. Any deviation shall be the minimum necessary to address the exigent circumstance....
Online cross-validation-based ensemble learning.
Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark
2017-05-04
Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
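The selection scheme described above — score each candidate online learner on an incoming batch before training on it, then pick the cumulative winner — can be sketched as follows. The two toy learners and the synthetic stream are illustrative assumptions of ours, not the authors' implementation:

```python
# Prequential (test-then-train) online cross-validation: each incoming
# observation first scores every candidate online learner, then updates
# it; the learner with the lowest cumulative loss is selected.
import random

class OnlineMean:
    """Running-mean predictor (a deliberately weak candidate)."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def predict(self, x):
        return self.mean
    def update(self, x, y):
        self.n += 1
        self.mean += (y - self.mean) / self.n

class OnlineLinear:
    """One-dimensional least-mean-squares (LMS) learner."""
    def __init__(self, lr=0.05):
        self.w, self.b, self.lr = 0.0, 0.0, lr
    def predict(self, x):
        return self.w * x + self.b
    def update(self, x, y):
        err = y - self.predict(x)
        self.w += self.lr * err * x
        self.b += self.lr * err

random.seed(0)
library = {"mean": OnlineMean(), "linear": OnlineLinear()}
loss = {name: 0.0 for name in library}
for _ in range(500):
    x = random.uniform(-1, 1)
    y = 2.0 * x + random.gauss(0, 0.1)                # linear truth
    for name, algo in library.items():
        loss[name] += (y - algo.predict(x)) ** 2      # score first...
        algo.update(x, y)                             # ...then train
best = min(loss, key=loss.get)
print(best)  # the linear learner should win on linear data
```

The key design point, matching the abstract, is that each candidate is always evaluated on data it has not yet trained on, so the cumulative losses are honest out-of-sample estimates.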
(no author listed)
2008-01-01
Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimate (MQLE) is obtained in QLNM. In an important case, this rate is O(n^{−1/2}(log log n)^{1/2}), which is exactly the rate in the law of the iterated logarithm for partial sums of i.i.d. variables, and thus cannot be improved.
Cross-validating a bidimensional mathematics anxiety scale.
Haiyan Bai
2011-03-01
The psychometric properties of a 14-item bidimensional Mathematics Anxiety Scale-Revised (MAS-R) were empirically cross-validated with two independent samples consisting of 647 secondary school students. An exploratory factor analysis of the scale yielded strong construct validity with a clear two-factor structure. The results of a confirmatory factor analysis indicated an excellent model fit (χ² = 98.32, df = 62; normed fit index = .92, comparative fit index = .97; root mean square error of approximation = .04). The internal consistency (.85), test-retest reliability (.71), and interfactor correlation (.26) supported the scale's bidimensional measurement of math anxiety. Math anxiety, as measured by MAS-R, correlated negatively with student achievement scores (r = -.38), suggesting that MAS-R may be a useful tool for classroom teachers and other educational personnel tasked with identifying students at risk of reduced math achievement because of anxiety.
Optimal Cross-Validation Split Ratio: Experimental Investigation
Goutte, Cyril; Larsen, Jan
1998-01-01
Cross-validation is a common method for assessing the generalisation ability of a model in order to tune a regularisation parameter or other hyper-parameters of a learning process. The use of cross-validation requires setting yet another parameter: the split ratio. While a few texts have investigated...
Cross-validation criteria for SETAR model selection
de Gooijer, J.G.
2001-01-01
Three cross-validation criteria, denoted C, C_c, and C_u, are proposed for selecting the orders of a self-exciting threshold autoregressive (SETAR) model when both the delay and the threshold value are unknown. The derivation of C is within a natural cross-validation framework. The criterion C_c is si
A cross-validation package driving Netica with python
Fienen, Michael N.; Plant, Nathaniel G.
2014-01-01
Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross validation is a technique to avoid overfitting resulting from overly complex BNs. Overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and read, rebuild, and learn BNs from data. Insights gained from cross-validation and implications on prediction versus description are illustrated with: a data-driven oceanographic application; and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than allowed by supporting data and overfitting incurs computational costs as well as causing a reduction in prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).
Benchmarking protein classification algorithms via supervised cross-validation.
Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor
2008-04-24
Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates of how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database, has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced-size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection, which currently includes protein sequence (including protein domains and entire proteins), protein structure, and reading-frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with the comparison algorithms BLAST, Smith-Waterman, and Needleman-Wunsch, as well as the 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic, estimates of classifier performance than do random cross-validation schemes.
Cross-validation model assessment for modular networks
Kawamoto, Tatsuro
2016-01-01
Model assessment of the stochastic block model is a crucial step in identification of modular structures in networks. Although this has typically been done according to the principle that a parsimonious model with a large marginal likelihood or a short description length should be selected, another principle is that a model with a small prediction error should be selected. We show that the leave-one-out cross-validation estimate of the prediction error can be efficiently obtained using belief propagation for sparse networks. Furthermore, the relations among the objectives for model assessment enable us to determine the exact cause of overfitting.
Cross-Validation, Bootstrap, and Support Vector Machines
Masaaki Tsujitani
2011-01-01
This paper considers applications of resampling methods to support vector machines (SVMs). We take into account leave-one-out cross-validation (CV) when determining the optimum tuning parameters, and bootstrap the deviance in order to summarize the measure of goodness-of-fit in SVMs. The leave-one-out CV is also adapted to provide estimates of the bias of the excess error in a prediction rule constructed with training samples. We analyze data from a mackerel-egg survey and a liver-disease study.
LASSO with cross-validation for genomic selection.
Usai, M Graziano; Goddard, Mike E; Hayes, Ben J
2009-12-01
We used a least absolute shrinkage and selection operator (LASSO) approach to estimate marker effects for genomic selection. The least angle regression (LARS) algorithm and cross-validation were used to define the best subset of markers to include in the model. The LASSO-LARS approach was tested on two data sets: a simulated data set with 5865 individuals and 6000 Single Nucleotide Polymorphisms (SNPs); and a mouse data set with 1885 individuals genotyped for 10 656 SNPs and phenotyped for a number of quantitative traits. In the simulated data, three approaches were used to split the reference population into training and validation subsets for cross-validation: random splitting across the whole population; and random sampling of the validation set from the last generation only, either within or across families. The highest accuracy was obtained by random splitting across the whole population. The accuracy of genomic estimated breeding values (GEBVs) in the candidate population obtained by LASSO-LARS was 0.89 with 156 explanatory SNPs. This value was higher than those obtained by Best Linear Unbiased Prediction (BLUP) and a Bayesian method (BayesA), which were 0.75 and 0.84, respectively. In the mouse data, 1600 individuals were randomly allocated to the reference population. The GEBVs for the remaining 285 individuals estimated by LASSO-LARS were more accurate than those obtained by BLUP and BayesA for weight at six weeks, and slightly lower for growth rate and body length. It was concluded that the LASSO-LARS approach is a good alternative method to estimate marker effects for genomic selection, particularly when the cost of genotyping can be reduced by using a limited subset of markers.
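As a rough illustration of the shrinkage-and-selection behavior this abstract relies on, here is a minimal coordinate-descent LASSO on toy data. The paper uses LARS to trace the full regularization path; this sketch, with invented data and a single penalty value, only shows how the L1 penalty zeroes out an uninformative marker:

```python
# Coordinate-descent LASSO: cycle through coefficients, each time
# soft-thresholding the correlation of the feature with the partial
# residual. Features with weak signal are driven exactly to zero.

def soft_threshold(z, g):
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            norm = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / norm
    return beta

# y depends on the first feature only; the second is pure noise.
X = [[1, 1], [2, -1], [3, 1], [4, -1], [5, 1], [6, -1]]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]   # roughly y = 2 * x1
beta = lasso_cd(X, y, lam=0.5)
print(beta)  # large weight on feature 0; feature 1 shrunk to 0
```

In the genomic-selection setting, the SNPs that survive this shrinkage (156 of 6000 in the simulated data above) form the "limited subset of markers" the conclusion refers to.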
M. Angeles ePérez-Cabal
2012-02-01
The impact of the extent of genetic relatedness on the accuracy of genome-enabled predictions was assessed using a dairy cattle population, and alternative cross-validation (CV) strategies were compared. The CV layouts consisted of training and testing sets obtained either from random allocation of individuals (RAN) or from a kernel-based clustering of individuals using the additive relationship matrix, to obtain two subsets that were as unrelated as possible (UNREL), as well as a layout based on stratification by generation (GEN). The UNREL layout decreased the average genetic relationships between training and testing animals but produced accuracies similar to the RAN design, which were about 15% higher than in the GEN setting. Results indicate that the CV structure can have an important effect on the accuracy of whole-genome predictions. However, the connection between average genetic relationships across training and testing sets and the estimated predictive ability is not straightforward, and may also depend on the kind of relatedness that exists between the two subsets and on the heritability of the trait. For high-heritability traits, close relatives such as parents and full sibs make the greatest contributions to accuracy, which can be compensated by half-sibs or grandsires in the case of a lack of close relatives. However, for low-heritability traits the inclusion of close relatives is crucial, and including more relatives of various types in the training set tends to lead to greater accuracy. In practice, cross-validation designs should resemble the intended use of the predictive models, e.g. within- or between-family predictions, or within- or across-generation predictions, such that the estimation of predictive ability is consistent with the actual application to be considered.
Lahti Satu
2008-03-01
Objective: To assess the factorial structure and construct validity of the Chinese version of the Modified Dental Anxiety Scale (MDAS). Materials and methods: A cross-sectional survey of adults in the Beijing area was conducted in March 2006. The questionnaire consisted of sections assessing participants' demographic profile and dental attendance patterns, the Chinese MDAS, and the anxiety items from the Hospital Anxiety and Depression Scale (HADS). The analysis was conducted in two stages using confirmatory factor analysis and structural equation modelling. Cross-validation was tested with a comparison sample from the North West of England. Results: 783 questionnaires were successfully completed in Beijing and 468 in England. The Chinese MDAS consisted of two factors: anticipatory dental anxiety (ADA) and treatment dental anxiety (TDA). Internal consistency coefficients (tau non-equivalent) were 0.74 and 0.86, respectively. Measurement properties were virtually identical for male and female respondents. Relationships of the Chinese MDAS with gender, age, and dental attendance supported predictions. Significant structural parameters between the two sub-scales (negative affectivity and autonomic anxiety) of the HADS anxiety items and the two newly identified factors of the MDAS were confirmed and duplicated in the comparison sample. Conclusion: The Chinese version of the MDAS has good psychometric properties and can briefly assess overall dental anxiety and two correlated but distinct aspects.
A cross-validated cytoarchitectonic atlas of the human ventral visual stream.
Rosenke, M; Weiner, K S; Barnett, M A; Zilles, K; Amunts, K; Goebel, R; Grill-Spector, K
2017-02-14
The human ventral visual stream consists of several areas considered processing stages essential for perception and recognition. A fundamental microanatomical feature differentiating areas is cytoarchitecture, which refers to the distribution, size, and density of cells across cortical layers. Because cytoarchitectonic structure is measured in 20-micron-thick histological slices of postmortem tissue, it is difficult to assess (a) how anatomically consistent these areas are across brains and (b) how they relate to brain parcellations obtained with prevalent neuroimaging methods, acquired at the millimeter and centimeter scale. Therefore, the goal of this study was to (a) generate a cross-validated cytoarchitectonic atlas of the human ventral visual stream on a whole brain template that is commonly used in neuroimaging studies and (b) to compare this atlas to a recently published retinotopic parcellation of visual cortex (Wang, 2014). To achieve this goal, we generated an atlas of eight cytoarchitectonic areas: four areas in the occipital lobe (hOc1-hOc4v) and four in the fusiform gyrus (FG1-FG4) and tested how alignment technique affects the accuracy of the atlas. Results show that both cortex-based alignment (CBA) and nonlinear volumetric alignment (NVA) generate an atlas with better cross-validation performance than affine volumetric alignment (AVA). Additionally, CBA outperformed NVA in 6/8 of the cytoarchitectonic areas. Finally, the comparison of the cytoarchitectonic atlas to a retinotopic atlas shows a clear correspondence between cytoarchitectonic and retinotopic areas in the ventral visual stream. The successful performance of CBA suggests a coupling between cytoarchitectonic areas and macroanatomical landmarks in the human ventral visual stream, and furthermore that this coupling can be utilized towards generating an accurate group atlas. In addition, the coupling between cytoarchitecture and retinotopy highlights the potential use of this atlas in
Tripathi, Brijesh; Sircar, Ratna
2016-09-01
The maximum performance of a nc-Si:H/a-Si:H quantum well solar cell is theoretically evaluated by studying the spectral absorption of incident radiation with respect to the number of inserted nc-Si:H quantum well layers. Fundamental intrinsic properties of a-Si:H and nc-Si:H materials reported in the literature have been used to evaluate the performance parameters. Enhanced spectral absorption is recorded due to the insertion of nc-Si:H quantum well layers in the intrinsic region of the a-Si:H solar cell. By inserting 50 QW layers of nc-Si:H in the intrinsic region of the a-Si:H solar cell, the short-circuit current density (JSC) increases by ∼100% as compared to the baseline, whereas the open-circuit voltage (VOC) decreases by ∼38%. The decrease in VOC is explained on the basis of quasi-Fermi level separation under the illuminated state of the solar cell. The theoretical maximum efficiency, reflecting the combined effect of the increase in JSC and the decrease in VOC, increased by ∼24% in comparison with the baseline due to the use of QWs, as calculated using an ideal carrier lifetime value. With a realistic carrier lifetime of state-of-the-art a-Si:H solar cells, the addition of QWs does not yield any significant gain. From this study, it is concluded that a high carrier lifetime is required to gain a noteworthy benefit from the nc-Si:H/a-Si:H QWs.
Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven
2014-01-01
Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real-data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
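The two significance tests the authors compare can be sketched on synthetic no-signal data. The majority-class classifier and the split scheme below are illustrative stand-ins of ours, not the study's pipeline; the point is only the mechanics of computing a binomial and a permutation p-value for the same cross-validated accuracy:

```python
# Binomial vs permutation p-values for a cross-validated accuracy
# obtained on labels that carry no signal.
import random
from math import comb

def cv_accuracy(y, k=5):
    """k-fold CV with a majority-class classifier."""
    n = len(y)
    idx = list(range(n))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    correct = 0
    for fold in folds:
        train = [y[i] for i in idx if i not in fold]
        majority = max(set(train), key=train.count)
        correct += sum(1 for i in fold if y[i] == majority)
    return correct / n

def binom_p(acc, n):
    """One-sided P(#correct >= acc*n) under Binomial(n, 1/2)."""
    s = round(acc * n)
    return sum(comb(n, t) for t in range(s, n + 1)) / 2 ** n

random.seed(1)
n = 40
y = [random.randint(0, 1) for _ in range(n)]   # labels with no signal
acc = cv_accuracy(y)
# permutation null: accuracy distribution under shuffled labels
null = []
for _ in range(500):
    yp = y[:]
    random.shuffle(yp)
    null.append(cv_accuracy(yp))
perm_p = sum(a >= acc for a in null) / len(null)
print(round(acc, 3), round(binom_p(acc, n), 3), round(perm_p, 3))
```

The abstract's warning corresponds to the fact that the permutation null is built from the same cross-validation procedure as the observed accuracy, while the binomial null assumes independent trials, which cross-validated predictions are not.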
夏天; 孔繁超
2008-01-01
This paper proposes some regularity conditions. On the basis of the proposed regularity conditions, we show the strong consistency of the maximum quasi-likelihood estimate (MQLE) in quasi-likelihood nonlinear models (QLNM). Our results may be regarded as a further generalization of the relevant results in Ref. [4].
Evaluation and cross-validation of Environmental Models
Lemaire, Joseph
Before scientific models (statistical or empirical models based on experimental measurements; physical or mathematical models) can be proposed and selected as ISO Environmental Standards, a Commission of professional experts appointed by an established International Union or Association (e.g. IAGA for Geomagnetism and Aeronomy, . . . ) should have been able to study, document, evaluate and validate the best alternative models available at a given epoch. Examples will be given, indicating that different values for the Earth radius have been employed in different data processing laboratories, institutes or agencies, to process, analyse or retrieve series of experimental observations. Furthermore, invariant magnetic coordinates like B and L, commonly used in the study of Earth's radiation belt fluxes and for their mapping, differ from one space mission data center to another, from team to team, and from country to country. Worse, users of empirical models generally fail to use the original magnetic model which had been employed to compile B and L, and thus to build these environmental models. These are just some flagrant examples of inconsistencies and misuses identified so far; there are probably more of them to be uncovered by careful, independent examination and benchmarking. Consider the meter prototype, the standard unit of length that was determined on 20 May 1875, during the Diplomatic Conference of the Meter, and deposited at the BIPM (Bureau International des Poids et Mesures). By the same token, to coordinate and safeguard progress in the field of Space Weather, similar initiatives need to be undertaken to prevent the wild, uncontrolled dissemination of pseudo environmental models and standards. Indeed, unless validation tests have been performed, there is no guarantee, a priori, that all models on the market have been built consistently with the same units system, and that they are based on identical definitions for the coordinate systems, etc... Therefore
Outlier detection in near-infrared spectroscopic analysis by using Monte Carlo cross-validation
LIU ZhiChao; CAI WenSheng; SHAO XueGuang
2008-01-01
An outlier detection method is proposed for near-infrared spectral analysis. The underlying philosophy of the method is that, in random test (Monte Carlo) cross-validation, the probability of outliers presenting in good models with smaller prediction residual error sum of squares (PRESS) or in bad models with larger PRESS should be obviously different from normal samples. The method builds a large number of PLS models by using random test cross-validation at first, then the models are sorted by the PRESS, and at last the outliers are recognized according to the accumulative probability of each sample in the sorted models. For validation of the proposed method, four data sets, including three published data sets and a large data set of tobacco lamina, were investigated. The proposed method was proved to be highly efficient and veracious compared with the conventional leave-one-out (LOO) cross validation method.
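The screening idea in this abstract can be sketched in a few lines. This is a minimal illustration under invented data, not the authors' implementation: ordinary least squares stands in for PLS, and all names are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data standing in for NIR spectra: 40 normal samples plus 2 outliers.
n, p = 42, 5
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + 0.1 * rng.normal(size=n)
y[:2] += 5.0                                   # samples 0 and 1 are outliers

n_models = 500
press = np.empty(n_models)
in_val = np.zeros((n_models, n), dtype=bool)   # which samples were left out in each model

for k in range(n_models):
    val = rng.choice(n, size=n // 4, replace=False)             # random (Monte Carlo) split
    train = np.setdiff1d(np.arange(n), val)
    coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)  # OLS in place of PLS
    press[k] = np.sum((y[val] - X[val] @ coef) ** 2)            # PRESS of this model
    in_val[k, val] = True

# Sort models by PRESS; outliers should appear unusually often in the
# validation sets of the worst (largest-PRESS) models.
order = np.argsort(press)
freq = in_val[order[-n_models // 4:]].mean(axis=0)   # leave-out frequency, worst quartile
suspects = sorted(int(i) for i in np.argsort(freq)[-2:])
print(suspects)
```

With this seed the two planted outliers are the two samples most over-represented in the worst models, which is exactly the accumulated-probability signal the method exploits.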
Cross-validation of component models: a critical look at current methods.
Bro, R; Kjeldahl, K; Smilde, A K; Kiers, H A L
2008-03-01
In regression, cross-validation is an effective and popular approach that is used to decide, for example, the number of underlying features, and to estimate the average prediction error. The basic principle of cross-validation is to leave out part of the data, build a model, and then predict the left-out samples. While such an approach can also be envisioned for component models such as principal component analysis (PCA), most current implementations do not comply with the essential requirement that the predictions should be independent of the entity being predicted. Further, these methods have not been properly reviewed in the literature. In this paper, we review the most commonly used generic PCA cross-validation schemes and assess how well they work in various scenarios.
An L1 smoothing spline algorithm with cross validation
Bosworth, Ken W.; Lall, Upmanu
1993-08-01
We propose an algorithm for the computation of L1 (LAD) smoothing splines in the spaces W_M(D). We assume one is given data of the form y_i = f(t_i) + ε_i, i = 1,...,N, with {t_i}_{i=1}^N ⊂ D, where the ε_i are errors with E(ε_i) = 0 and f is assumed to be in W_M. The LAD smoothing spline, for fixed smoothing parameter λ ≥ 0, is defined as the solution, s_λ, of the optimization problem: minimize (1/N) Σ_{i=1}^N |y_i − g(t_i)| + λ J_M(g), where J_M(g) is the seminorm consisting of the sum of the squared L2 norms of the Mth partial derivatives of g. Such an LAD smoothing spline, s_λ, would be expected to give robust smoothed estimates of f in situations where the ε_i are from a distribution with heavy tails. The solution to such a problem is a "thin plate spline" of known form. An algorithm for computing s_λ is given which is based on considering a sequence of quadratic programming problems whose structure is guided by the optimality conditions for the above convex minimization problem, and which are solved readily if a good initial point is available. The "data driven" selection of the smoothing parameter is achieved by minimizing a CV(λ) score. The combined LAD-CV smoothing spline algorithm is a continuation scheme in λ ↘ 0 applied to the above SQPs parametrized in λ, with the optimal smoothing parameter taken to be that value of λ at which the CV(λ) score first begins to increase. The feasibility of constructing the LAD-CV smoothing spline is illustrated by an application to a problem in environmental data interpretation.
Ensemble Kalman filter regularization using leave-one-out data cross-validation
Rayo Schiappacasse, Lautaro Jerónimo
2012-09-19
In this work, the classical leave-one-out cross-validation method for selecting a regularization parameter for the Tikhonov problem is implemented within the EnKF framework. Following the original concept, the regularization parameter is selected such that it minimizes the predictive error. Some ideas about the implementation, suitability and conceptual interest of the method are discussed. Finally, what will be called the data cross-validation regularized EnKF (dCVr-EnKF) is implemented in a 2D 2-phase synthetic oil reservoir experiment and the results analyzed.
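Outside the EnKF setting, the underlying idea (choose the Tikhonov regularization parameter that minimizes leave-one-out predictive error) can be sketched for a plain linear inverse problem. The data and the parameter grid below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ill-posed problem: decaying singular values make the fit unstable without regularization.
n, p = 30, 10
A = rng.normal(size=(n, p)) @ np.diag(1.0 / (1 + np.arange(p)))
x_true = rng.normal(size=p)
b = A @ x_true + 0.05 * rng.normal(size=n)

def ridge(A, b, lam):
    """Tikhonov-regularized least squares: argmin ||Ax - b||^2 + lam * ||x||^2."""
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ b)

def loo_error(A, b, lam):
    """Mean squared leave-one-out predictive error for a given lam."""
    m = len(b)
    errs = []
    for i in range(m):
        mask = np.arange(m) != i            # drop observation i
        x = ridge(A[mask], b[mask], lam)
        errs.append((b[i] - A[i] @ x) ** 2)  # predict the left-out observation
    return float(np.mean(errs))

lams = 10.0 ** np.arange(-6, 2)
best = min(lams, key=lambda lam: loo_error(A, b, lam))
print(best)
```

The selected `lam` is, by construction, the grid value with smallest predictive error, mirroring the selection rule described in the abstract.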
Accelerating cross-validation with total variation and its application to super-resolution imaging
Obuchi, Tomoyuki; Akiyama, Kazunori; Kabashima, Yoshiyuki
2016-01-01
We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by $\ell_1$-norm and total variation terms, which is based on a perturbative expansion utilizing the largeness of both the data dimensionality and the model. The developed formula allows us to reduce the necessary computational cost of the CVE evaluation significantly. The practicality of the formula is tested through application to simulated black-hole image reconstruction on the event-horizon scale with super resolution. The results demonstrate that our approximation reproduces the CVE values obtained via literally conducted cross-validation with reasonably good precision.
A New Symptom Model for Autism Cross-Validated in an Independent Sample
Boomsma, A.; Van Lang, N. D. J.; De Jonge, M. V.; De Bildt, A. A.; Van Engeland, H.; Minderaa, R. B.
2008-01-01
Background: Results from several studies indicated that a symptom model other than the DSM triad might better describe symptom domains of autism. The present study focused on a) investigating the stability of a new symptom model for autism by cross-validating it in an independent sample and b) examining the invariance of the model regarding three…
Cross-Validation of the Risk Matrix 2000 Sexual and Violent Scales
Craig, Leam A.; Beech, Anthony; Browne, Kevin D.
2006-01-01
The predictive accuracy of the newly developed actuarial risk measures Risk Matrix 2000 Sexual/Violence (RMS, RMV) were cross validated and compared with two risk assessment measures (SVR-20 and Static-99) in a sample of sexual (n = 85) and nonsex violent (n = 46) offenders. The sexual offense reconviction rate for the sex offender group was 18%…
Cross-Validating Chinese Language Mental Health Recovery Measures in Hong Kong
Bola, John; Chan, Tiffany Hill Ching; Chen, Eric HY; Ng, Roger
2016-01-01
Objectives: Promoting recovery in mental health services is hampered by a shortage of reliable and valid measures, particularly in Hong Kong. We seek to cross validate two Chinese language measures of recovery and one of recovery-promoting environments. Method: A cross-sectional survey of people recovering from early episode psychosis (n = 121)…
Kan, C.C.; Breteler, M.H.M.; Ven, A.H.G.S. van der; Zitman, F.G.
2001-01-01
The aim of this study was to cross-validate the Benzodiazepine Dependence Self-Report Questionnaire (Bendep-SRQ), which reflects the severity of benzodiazepine (BZD) dependence. The Bendep-SRQ, Symptom Checklist-90 (SCL-90), Schedules for Clinical Assessments in Neuropsychiatry (SCAN), and Addiction
Cross-Validation of a Short Form of the Marlowe-Crowne Social Desirability Scale.
Zook, Avery, II; Sipps, Gary J.
1985-01-01
Presents a cross-validation of Reynolds' short form of the Marlowe-Crowne Social Desirability Scale (N=233). Researchers administered 13 items as a separate entity, calculated Cronbach's Alpha for each sex, and computed test-retest correlation for one group. Concluded that the short form is a viable alternative. (Author/NRB)
李建中; 朱军; 张飞猛; 胡敬坤
2011-01-01
The main factors influencing the consistency at the maximum range of a type of truck-mounted artillery are analyzed. Measures to improve the precision of the consistency test are suggested, based on a detailed discussion of the influence of the force acting on the elevating gear arc and of the ammunition parameters. The work provides a theoretical basis for improving the consistency test method for the truck-mounted artillery.
Thway, Theingi M; Ma, Mark; Lee, Jean; Sloey, Bethlyn; Yu, Steven; Wang, Yow-Ming C; Desilva, Binodh; Graves, Tom
2009-04-05
A case study of experimental and statistical approaches for cross-validating and examining the equivalence of two ligand binding assay (LBA) methods that were employed in pharmacokinetic (PK) studies is presented. The impact of changes in methodology based on the intended use of the methods was assessed. The cross-validation processes included an experimental plan, sample size selection, and statistical analysis with a predefined criterion of method equivalence. The two methods were deemed equivalent if the ratio of mean concentration fell within the 90% confidence interval (0.80-1.25). Statistical consideration of method imprecision was used to choose the number of incurred samples (collected from study animals) and conformance samples (spiked controls) for equivalence tests. The difference of log-transformed mean concentration and the 90% confidence interval for two methods were computed using analysis of variance. The mean concentration ratios of the two methods for the incurred and spiked conformance samples were 1.63 and 1.57, respectively. The 90% confidence limit was 1.55-1.72 for the incurred samples and 1.54-1.60 for the spiked conformance samples; therefore, the 90% confidence interval was not contained within the (0.80-1.25) equivalence interval. When the PK parameters of two studies using each of these two methods were compared, we determined that the therapeutic exposure, AUC(0-168) and Cmax, from Study A/Method 1 was approximately twice that of Study B/Method 2. We concluded that the two methods were not statistically equivalent and that the magnitude of the difference was reflected in the PK parameters in the studies using each method. This paper demonstrates the need for method cross-validation whenever there is a switch in bioanalytical methods, statistical approaches in designing the cross-validation experiments and assessing results, or interpretation of the impact of PK data.
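The equivalence criterion described here (a 90% confidence interval for the ratio of mean concentrations that must fall inside 0.80-1.25) can be illustrated on synthetic paired data. The numbers below are invented and only mimic the roughly 1.6-fold bias reported:

```python
import math
import numpy as np

rng = np.random.default_rng(7)

# Paired log-concentrations from two hypothetical methods on the same samples;
# method 2 reads about 60% higher than method 1.
n = 30
log_m1 = rng.normal(loc=math.log(100.0), scale=0.15, size=n)
log_m2 = log_m1 + math.log(1.6) + rng.normal(scale=0.05, size=n)

diff = log_m1 - log_m2                       # log of the concentration ratio
mean, se = diff.mean(), diff.std(ddof=1) / math.sqrt(n)
t90 = 1.699                                  # t quantile, df = 29, for a two-sided 90% CI
lo, hi = math.exp(mean - t90 * se), math.exp(mean + t90 * se)

equivalent = 0.80 < lo and hi < 1.25         # predefined equivalence interval
print(round(lo, 2), round(hi, 2), equivalent)
```

As in the case study, a consistent bias of this size puts the whole confidence interval well below 0.80, so the methods fail the equivalence test.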
Gene association study with SVM, MLP and cross-validation for the diagnosis of diseases
Junying Zhang; Shenling Liu; Yue Wang
2008-01-01
Gene association study is one of the major challenges of biochip technology, both for gene diagnosis, where only a gene subset is responsible for some diseases, and for the treatment of the curse of dimensionality, which occurs especially in DNA microarray datasets where there are thousands of genes and only a small number of experiments (samples). This paper presents a gene selection method by training linear support vector machine (SVM)/nonlinear MLP (multilayer perceptron) classifiers and testing them with cross-validation for finding a gene subset which is optimal/suboptimal for the diagnosis of binary/multiple disease types. Genes are selected with a linear SVM classifier for the diagnosis of each binary disease-type pair and tested by leave-one-out cross-validation; then, genes in the gene subset initialized by the union of them are deleted one by one by removing the gene which brings the greatest decrease of the generalization power, for samples, on the gene subset after removal, where generalization is measured by training MLPs with leave-one-out and leave-four-out cross-validations. The proposed method was tested with experiments on real DNA microarray MIT data and NCI data. The result shows that it outperforms the conventional SNR method in the separability of the data with expression levels on selected genes. For real DNA microarray MIT/NCI data, which is composed of 7129/2308 effective genes with only 72/64 labeled samples belonging to 2/4 disease classes, only 11/6 genes are selected to be diagnostic genes. The selected genes are tested by the classification of samples on these genes with SVM/MLP with leave-one-out/both leave-one-out and leave-four-out cross-validations. The result of no misclassification indicates that the selected genes can be really considered as diagnostic genes for the diagnosis of the corresponding diseases.
Petersen, D.; Naveed, P.; Ragheb, A.; Niedieker, D.; El-Mashtoly, S. F.; Brechmann, T.; Kötting, C.; Schmiegel, W. H.; Freier, E.; Pox, C.; Gerwert, K.
2017-06-01
Endoscopy plays a major role in early recognition of cancer which is not externally accessible and therewith in increasing the survival rate. Raman spectroscopic fiber-optical approaches can help to decrease the impact on the patient, increase objectivity in tissue characterization, reduce expenses and provide a significant time advantage in endoscopy. In gastroenterology an early recognition of malign and precursor lesions is relevant. Instantaneous and precise differentiation between adenomas as precursor lesions for cancer and hyperplastic polyps on the one hand and between high and low-risk alterations on the other hand is important. Raman fiber-optical measurements of colon biopsy samples taken during colonoscopy were carried out during a clinical study, and samples of adenocarcinoma (22), tubular adenomas (141), hyperplastic polyps (79) and normal tissue (101) from 151 patients were analyzed. This allows us to focus on the bioinformatic analysis and to set stage for Raman endoscopic measurements. Since spectral differences between normal and cancerous biopsy samples are small, special care has to be taken in data analysis. Using a leave-one-patient-out cross-validation scheme, three different outlier identification methods were investigated to decrease the influence of systematic errors, like a residual risk in misplacement of the sample and spectral dilution of marker bands (esp. cancerous tissue) and therewith optimize the experimental design. Furthermore other validations methods like leave-one-sample-out and leave-one-spectrum-out cross-validation schemes were compared with leave-one-patient-out cross-validation. High-risk lesions were differentiated from low-risk lesions with a sensitivity of 79%, specificity of 74% and an accuracy of 77%, cancer and normal tissue with a sensitivity of 79%, specificity of 83% and an accuracy of 81%. Additionally applied outlier identification enabled us to improve the recognition of neoplastic biopsy samples.
Batch effect confounding leads to strong bias in performance estimates obtained by cross-validation.
Charlotte Soneson
With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences ("batch effects") as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., 'control') or group 2 (e.g., 'treated'). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, is performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data.
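A toy simulation (not the paper's data or classifiers; a nearest-centroid rule stands in for SVM/kNN/RF, and all parameters are invented) shows how a batch effect that is confounded with the group label inflates cross-validated accuracy even when no true group signal exists:

```python
import numpy as np

rng = np.random.default_rng(4)

def make_data(n_per_group=50, p=20, confounded=True):
    """No true group signal at all; a batch effect adds a mean shift.
    If confounded, batch membership coincides exactly with the group label."""
    y = np.repeat([0, 1], n_per_group)
    X = rng.normal(size=(2 * n_per_group, p))
    batch = y if confounded else rng.integers(0, 2, size=2 * n_per_group)
    X[batch == 1] += 1.0                  # batch effect on every feature
    return X, y

def cv_accuracy(X, y, folds=5):
    """k-fold CV accuracy of a nearest-centroid classifier."""
    idx = rng.permutation(len(y))
    accs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        pred = (np.linalg.norm(X[test] - c1, axis=1)
                < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
        accs.append(np.mean(pred == y[test]))
    return float(np.mean(accs))

acc_conf = cv_accuracy(*make_data(confounded=True))    # batch == group: inflated estimate
acc_rand = cv_accuracy(*make_data(confounded=False))   # batch independent of group: ~chance
print(round(acc_conf, 2), round(acc_rand, 2))
```

The confounded design yields near-perfect cross-validated accuracy despite the labels carrying no biological signal, which is precisely the estimation bias the study warns about.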
邓春亮; 胡南辉
2012-01-01
Under a non-natural link function, we study the solution βn of the quasi-likelihood equation for generalized linear models (GLMs). Assuming λn → ∞ together with some other mild regularity conditions, we prove the weak consistency of the solution and show that its rate of convergence to the true value β0 is Op(λn^(-1/2)), where λn (λ̄n) denotes the smallest (largest) eigenvalue of the matrix Sn = Σ_{i=1}^n Xi Xi^T.
Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors (aligned with the EURO-CORDEX experiment) and 3) pseudo reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a European wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contribution to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including information both data
夏天; 孔繁超
2008-01-01
This paper proposes some regularity conditions which weaken those given by Zhu and Wei (1997). On the basis of the proposed regularity conditions, the existence, strong consistency and asymptotic normality of the maximum likelihood estimator (MLE) are proved for exponential family nonlinear models (EFNMs). Our results may be regarded as a further improvement of the work of Zhu and Wei (1997).
Cross-validation analysis of bias models in Bayesian multi-model projections of climate
Huttunen, J. M. J.; Räisänen, J.; Nissinen, A.; Lipponen, A.; Kolehmainen, V.
2017-03-01
Climate change projections are commonly based on multi-model ensembles of climate simulations. In this paper we consider the choice of bias models in Bayesian multimodel predictions. Buser et al. (Clim Res 44(2-3):227-241, 2010a) introduced a hybrid bias model which combines commonly used constant bias and constant relation bias assumptions. The hybrid model includes a weighting parameter which balances these bias models. In this study, we use a cross-validation approach to study which bias model or bias parameter leads to, in a specific sense, optimal climate change projections. The analysis is carried out for summer and winter season means of 2 m-temperatures spatially averaged over the IPCC SREX regions, using 19 model runs from the CMIP5 data set. The cross-validation approach is applied to calculate optimal bias parameters (in the specific sense) for projecting the temperature change from the control period (1961-2005) to the scenario period (2046-2090). The results are compared to the results of the Buser et al. (Clim Res 44(2-3):227-241, 2010a) method which includes the bias parameter as one of the unknown parameters to be estimated from the data.
Asymptotic optimality and efficient computation of the leave-subject-out cross-validation
Xu, Ganggang
2012-12-01
Although the leave-subject-out cross-validation (CV) has been widely used in practice for tuning parameter selection for various nonparametric and semiparametric models of longitudinal data, its theoretical property is unknown and solving the associated optimization problem is computationally expensive, especially when there are multiple tuning parameters. In this paper, by focusing on the penalized spline method, we show that the leave-subject-out CV is optimal in the sense that it is asymptotically equivalent to the empirical squared error loss function minimization. An efficient Newton-type algorithm is developed to compute the penalty parameters that optimize the CV criterion. Simulated and real data are used to demonstrate the effectiveness of the leave-subject-out CV in selecting both the penalty parameters and the working correlation matrix. © 2012 Institute of Mathematical Statistics.
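The distinctive point of leave-subject-out CV is that whole subjects, not individual observations, are held out, so within-subject correlation cannot leak into the tuning-parameter choice. A minimal sketch on invented longitudinal data, using polynomial degree as the tuning parameter rather than the paper's spline penalties:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy longitudinal data: 8 subjects, 5 correlated repeated measures each.
subjects = np.repeat(np.arange(8), 5)
t = np.tile(np.linspace(0, 1, 5), 8)
subj_effect = rng.normal(scale=0.5, size=8)[subjects]   # shared within-subject offset
y = np.sin(2 * np.pi * t) + subj_effect + 0.1 * rng.normal(size=t.size)

def design(t, degree):
    return np.vander(t, degree + 1)

def leave_subject_out_score(degree):
    """CV error where entire subjects (not single observations) are held out."""
    errs = []
    for s in np.unique(subjects):
        train, test = subjects != s, subjects == s
        coef, *_ = np.linalg.lstsq(design(t[train], degree), y[train], rcond=None)
        errs.append(np.mean((y[test] - design(t[test], degree) @ coef) ** 2))
    return float(np.mean(errs))

scores = {d: leave_subject_out_score(d) for d in range(1, 6)}
best_degree = min(scores, key=scores.get)
print(best_degree)
```

Because each held-out subject keeps its own random offset, the score includes the irreducible between-subject variance, which is what makes this criterion honest for longitudinal data.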
Error criteria for cross validation in the context of chaotic time series prediction.
Lim, Teck Por; Puthusserypady, Sadasivan
2006-03-01
The prediction of a chaotic time series over a long horizon is commonly done by iterating one-step-ahead prediction. Prediction can be implemented using machine learning methods, such as radial basis function networks. Typically, cross validation is used to select prediction models based on mean squared error. The bias-variance dilemma dictates that there is an inevitable tradeoff between bias and variance. However, invariants of chaotic systems are unchanged by linear transformations; thus, the bias component may be irrelevant to model selection in the context of chaotic time series prediction. Hence, the use of error variance for model selection, instead of mean squared error, is examined. Clipping is introduced, as a simple way to stabilize iterated predictions. It is shown that using the error variance for model selection, in combination with clipping, may result in better models.
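Clipping, as introduced in this abstract, can be illustrated on a logistic-map series: iterated one-step predictions are clamped to the observed data range so they cannot run away. A minimal sketch with an invented quadratic one-step model (not the radial basis function networks discussed):

```python
import numpy as np

# Logistic-map time series x_{t+1} = 4 x_t (1 - x_t).
x = np.empty(300)
x[0] = 0.2
for i in range(299):
    x[i + 1] = 4 * x[i] * (1 - x[i])

# One-step-ahead model: quadratic fit x_{t+1} ~ a x_t^2 + b x_t + c.
coef = np.polyfit(x[:-1], x[1:], 2)

def iterate(x0, steps, clip=True):
    """Iterated one-step prediction; clipping keeps iterates inside the observed range."""
    lo, hi = x.min(), x.max()
    out, cur = [], x0
    for _ in range(steps):
        cur = np.polyval(coef, cur)
        if clip:
            cur = min(max(cur, lo), hi)   # the stabilization step
        out.append(cur)
    return np.array(out)

preds = iterate(0.3, 50)
print(preds.min() >= 0.0 and preds.max() <= 1.0)
```

For an imperfect model, iterates can otherwise escape the attractor and diverge; the clamp guarantees every prediction stays within the range seen in training, at the cost of some bias near the boundary.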
Variance Estimation Using Refitted Cross-validation in Ultrahigh Dimensional Regression
Fan, Jianqing; Hao, Ning
2010-01-01
Variance estimation is a fundamental problem in statistical modeling. In ultrahigh dimensional linear regressions where the dimensionality is much larger than sample size, traditional variance estimation techniques are not applicable. Recent advances on variable selection in ultrahigh dimensional linear regressions make this problem accessible. One of the major problems in ultrahigh dimensional regression is the high spurious correlation between the unobserved realized noise and some of the predictors. As a result, the realized noises are actually predicted when extra irrelevant variables are selected, leading to serious underestimate of the noise level. In this paper, we propose a two-stage refitted procedure via a data splitting technique, called refitted cross-validation (RCV), to attenuate the influence of irrelevant variables with high spurious correlations. Our asymptotic results show that the resulting procedure performs as well as the oracle estimator, which knows in advance the mean regression functi...
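The two-stage idea (select variables on one half of the data, then refit on the other half and estimate the noise level there, so spurious correlations found in selection cannot deflate the residuals) can be sketched as follows. Simple correlation screening stands in for the paper's variable-selection step, and the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data: p equals the half-sample size, 3 truly active predictors, noise sd = 1.
n, p, sigma = 200, 100, 1.0
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = 2.0
y = X @ beta + sigma * rng.normal(size=n)

def select(X, y, k=10):
    """Toy screening: keep the k predictors most correlated with y
    (a stand-in for the sure-screening / lasso selection step)."""
    corr = np.abs(X.T @ (y - y.mean()))
    return np.argsort(corr)[-k:]

def refit_sigma2(X_sel, y_sel, X_fit, y_fit):
    """Select variables on one half, refit OLS and estimate sigma^2 on the other half."""
    idx = select(X_sel, y_sel)
    A = X_fit[:, idx]
    coef, *_ = np.linalg.lstsq(A, y_fit, rcond=None)
    resid = y_fit - A @ coef
    return float(np.sum(resid ** 2) / (len(y_fit) - len(idx)))

half = n // 2
sigma2_rcv = 0.5 * (refit_sigma2(X[:half], y[:half], X[half:], y[half:])
                    + refit_sigma2(X[half:], y[half:], X[:half], y[:half]))
print(round(sigma2_rcv, 2))
```

Because selection and refitting use disjoint halves, irrelevant variables picked up through spurious correlation carry no signal on the refitting half, and the residual-based estimate stays close to the true noise variance of 1.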
Song, Q Chelsea; Wee, Serena; Newman, Daniel A
2017-07-27
To reduce adverse impact potential and improve diversity outcomes from personnel selection, one promising technique is De Corte, Lievens, and Sackett's (2007) Pareto-optimal weighting strategy. De Corte et al.'s strategy has been demonstrated on (a) a composite of cognitive and noncognitive (e.g., personality) tests (De Corte, Lievens, & Sackett, 2008) and (b) a composite of specific cognitive ability subtests (Wee, Newman, & Joseph, 2014). Both studies illustrated how Pareto-weighting (in contrast to unit weighting) could lead to substantial improvement in diversity outcomes (i.e., diversity improvement), sometimes more than doubling the number of job offers for minority applicants. The current work addresses a key limitation of the technique-the possibility of shrinkage, especially diversity shrinkage, in the Pareto-optimal solutions. Using Monte Carlo simulations, sample size and predictor combinations were varied and cross-validated Pareto-optimal solutions were obtained. Although diversity shrinkage was sizable for a composite of cognitive and noncognitive predictors when sample size was at or below 500, diversity shrinkage was typically negligible for a composite of specific cognitive subtest predictors when sample size was at least 100. Diversity shrinkage was larger when the Pareto-optimal solution suggested substantial diversity improvement. When sample size was at least 100, cross-validated Pareto-optimal weights typically outperformed unit weights-suggesting that diversity improvement is often possible, despite diversity shrinkage. Implications for Pareto-optimal weighting, adverse impact, sample size of validation studies, and optimizing the diversity-job performance tradeoff are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
Pérez-Cabal, M. Angeles; Vazquez, Ana I.; Gianola, Daniel; Rosa, Guilherme J. M.; Weigel, Kent A.
2012-01-01
The impact of extent of genetic relatedness on accuracy of genome-enabled predictions was assessed using a dairy cattle population and alternative cross-validation (CV) strategies were compared. The CV layouts consisted of training and testing sets obtained from either random allocation of individuals (RAN) or from a kernel-based clustering of individuals using the additive relationship matrix, to obtain two subsets that were as unrelated as possible (UNREL), as well as a layout based on stratification by generation (GEN). The UNREL layout decreased the average genetic relationships between training and testing animals but produced similar accuracies to the RAN design, which were about 15% higher than in the GEN setting. Results indicate that the CV structure can have an important effect on the accuracy of whole-genome predictions. However, the connection between average genetic relationships across training and testing sets and the estimated predictive ability is not straightforward, and may depend also on the kind of relatedness that exists between the two subsets and on the heritability of the trait. For high heritability traits, close relatives such as parents and full-sibs make the greatest contributions to accuracy, which can be compensated by half-sibs or grandsires in the case of lack of close relatives. However, for the low heritability traits the inclusion of close relatives is crucial and including more relatives of various types in the training set tends to lead to greater accuracy. In practice, CV designs should resemble the intended use of the predictive models, e.g., within or between family predictions, or within or across generation predictions, such that estimation of predictive ability is consistent with the actual application to be considered. PMID:22403583
Flandoli, F. [Dip.to di Matematica Applicata, Universita di Pisa, Pisa (Italy); Giorgi, E. [Dip.to di Matematica Applicata, Universita di Pisa, Pisa (Italy); Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Pisa, via della Faggiola 32, 56126 Pisa (Italy); Aspinall, W.P. [Dept. of Earth Sciences, University of Bristol, and Aspinall and Associates, Tisbury (United Kingdom); Neri, A., E-mail: neri@pi.ingv.it [Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Pisa, via della Faggiola 32, 56126 Pisa (Italy)
2011-10-15
The problem of ranking and weighting experts' performances when quantitative judgments are being elicited for decision support is considered. A new scoring model, the Expected Relative Frequency model, is presented, based on the closeness between central values provided by the expert and known values used for calibration. Using responses from experts in five different elicitation datasets, a cross-validation technique is used to compare this new approach with the Cooke Classical Model, the Equal Weights model, and individual experts. The analysis is performed using alternative reward schemes designed to capture proficiency either in quantifying uncertainty, or in estimating true central values. Results show that although there is only a limited probability that one approach is consistently better than another, the Cooke Classical Model is generally the most suitable for assessing uncertainties, whereas the new ERF model should be preferred if the goal is central value estimation accuracy. Highlights: (1) A new expert elicitation model, named Expected Relative Frequency (ERF), is presented. (2) A cross-validation approach to evaluate the performance of different elicitation models is applied. (3) The new ERF model shows the best performance with respect to the point-wise estimates.
Reiss, Philip T
2015-08-01
The "ten ironic rules for statistical reviewers" presented by Friston (2012) prompted a rebuttal by Lindquist et al. (2013), which was followed by a rejoinder by Friston (2013). A key issue left unresolved in this discussion is the use of cross-validation to test the significance of predictive analyses. This note discusses the role that cross-validation-based and related hypothesis tests have come to play in modern data analyses, in neuroimaging and other fields. It is shown that such tests need not be suboptimal and can fill otherwise-unmet inferential needs.
Credible Intervals for Precision and Recall Based on a K-Fold Cross-Validated Beta Distribution.
Wang, Yu; Li, Jihong
2016-08-01
In typical machine learning applications such as information retrieval, precision and recall are two commonly used measures for assessing an algorithm's performance. Symmetrical confidence intervals based on K-fold cross-validated t distributions are widely used for the inference of precision and recall measures. As we confirmed through simulated experiments, however, these confidence intervals often exhibit lower degrees of confidence, which may easily lead to liberal inference results. Thus, it is crucial to construct faithful confidence (credible) intervals for precision and recall with a high degree of confidence and a short interval length. In this study, we propose two posterior credible intervals for precision and recall based on K-fold cross-validated beta distributions. The first credible interval for precision (or recall) is constructed based on the beta posterior distribution inferred by all K data sets corresponding to K confusion matrices from a K-fold cross-validation. Second, considering that each data set corresponding to a confusion matrix from a K-fold cross-validation can be used to infer a beta posterior distribution of precision (or recall), the second proposed credible interval for precision (or recall) is constructed based on the average of K beta posterior distributions. Experimental results on simulated and real data sets demonstrate that the first credible interval proposed in this study almost always resulted in degrees of confidence greater than 95%. With an acceptable degree of confidence, both of our two proposed credible intervals have shorter interval lengths than those based on a corrected K-fold cross-validated t distribution. Meanwhile, the average ranks of these two credible intervals are superior to that of the confidence interval based on a K-fold cross-validated t distribution for the degree of confidence and are superior to that of the confidence interval based on a corrected K-fold cross-validated t distribution for the
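The beta-posterior construction can be illustrated with a small stdlib-only sketch. The fold counts below are invented, the prior is taken as uniform (Beta(1, 1)), and the equal-tailed interval is approximated by Monte Carlo sampling rather than the analytic quantiles a real implementation would use:

```python
import random

def beta_credible_interval(tp, fp, level=0.95, draws=20000, seed=1):
    """Approximate an equal-tailed credible interval for precision under a
    Beta(1 + tp, 1 + fp) posterior (uniform prior), by Monte Carlo sampling."""
    rng = random.Random(seed)
    samples = sorted(rng.betavariate(1 + tp, 1 + fp) for _ in range(draws))
    lo = samples[int((1 - level) / 2 * draws)]
    hi = samples[int((1 + level) / 2 * draws) - 1]
    return lo, hi

# Hypothetical confusion-matrix counts pooled over K = 5 folds:
# 240 true positives and 60 false positives in total.
lo, hi = beta_credible_interval(tp=240, fp=60)
print(f"95% credible interval for precision: ({lo:.3f}, {hi:.3f})")
```

The same construction applies to recall by replacing false positives with false negatives; the paper's second variant instead averages K per-fold beta posteriors rather than pooling counts.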
Rushing, Christel; Bulusu, Anuradha; Hurwitz, Herbert I; Nixon, Andrew B; Pang, Herbert
2015-02-01
A proper internal validation is necessary for the development of a reliable and reproducible prognostic model for external validation. Variable selection is an important step for building prognostic models. However, not many existing approaches couple the ability to specify the number of covariates in the model with a cross-validation algorithm. We describe a user-friendly SAS macro that implements a score selection method and a leave-one-out cross-validation approach. We discuss the method and applications behind this algorithm, as well as details of the SAS macro.
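The leave-one-out loop itself is language-agnostic. As an illustrative analogue of what such a macro iterates over (not the SAS implementation, and with a deliberately trivial mean predictor standing in for a fitted prognostic model), a Python sketch:

```python
def loocv_mse(xs, ys, fit, predict):
    """Leave-one-out cross-validation: hold out each observation in turn,
    fit on the remaining n-1, and average the squared prediction errors."""
    errs = []
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        model = fit(train_x, train_y)
        errs.append((predict(model, xs[i]) - ys[i]) ** 2)
    return sum(errs) / len(errs)

# Toy predictor: ignore covariates and predict the training-set mean outcome.
fit_mean = lambda xs, ys: sum(ys) / len(ys)
predict_mean = lambda model, x: model

ys = [1.0, 2.0, 3.0, 4.0]
print(loocv_mse([0] * 4, ys, fit_mean, predict_mean))  # → 2.222... (= 20/9)
```

In a real prognostic-model setting, `fit` would run the score-based variable selection on the n-1 training records and `predict` would apply the selected model to the held-out patient.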
Schick, Simon; Rössler, Ole; Weingartner, Rolf
2016-10-01
Based on a hindcast experiment for the period 1982-2013 in 66 sub-catchments of the Swiss Rhine, the present study compares two approaches of building a regression model for seasonal streamflow forecasting. The first approach selects a single "best guess" model, which is tested by leave-one-out cross-validation. The second approach implements the idea of bootstrap aggregating, where bootstrap replicates are employed to select several models, and out-of-bag predictions provide model testing. The target value is mean streamflow for durations of 30, 60 and 90 days, starting with the 1st and 16th day of every month. Compared to the best guess model, bootstrap aggregating reduces the mean squared error of the streamflow forecast by seven percent on average. Thus, if resampling is anyway part of the model building procedure, bootstrap aggregating seems to be a useful strategy in statistical seasonal streamflow forecasting. Since the improved accuracy comes at the cost of a less interpretable model, the approach might be best suited for pure prediction tasks, e.g. as in operational applications.
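Out-of-bag testing as described can be sketched with a toy "model" that just predicts the resample mean; the data, replicate count, and seed below are invented, not the Swiss Rhine setup:

```python
import random

def bagged_predictions(ys, n_boot=200, seed=0):
    """Bootstrap aggregating with out-of-bag testing: each replicate fits on a
    bootstrap resample; each observation is then predicted by averaging only
    the replicates that did not sample it (its out-of-bag replicates)."""
    rng = random.Random(seed)
    n = len(ys)
    oob_preds = [[] for _ in range(n)]
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        fit = sum(ys[i] for i in idx) / n           # "model" = resample mean
        for i in set(range(n)) - set(idx):           # out-of-bag observations
            oob_preds[i].append(fit)
    return [sum(p) / len(p) for p in oob_preds]

ys = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
preds = bagged_predictions(ys)
oob_mse = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)
print(round(oob_mse, 3))
```

The out-of-bag error plays the role of the hold-out test here: no observation is ever predicted by a replicate that saw it, so resampling doubles as both model averaging and model testing, which is the economy the abstract points out.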
Enhancement of light propagation depth in skin: cross-validation of mathematical modeling methods.
Kwon, Kiwoon; Son, Taeyoon; Lee, Kyoung-Joung; Jung, Byungjo
2009-07-01
Various techniques to enhance light propagation in skin have been studied in low-level laser therapy. In this study, three mathematical modeling methods for five selected techniques were implemented so that we could understand the mechanisms that enhance light propagation in skin. The five techniques included the increasing of the power and diameter of a laser beam, the application of a hyperosmotic chemical agent (HCA), and the whole and partial compression of the skin surface. The photon density profile of the five techniques was solved with three mathematical modeling methods: the finite element method (FEM), the Monte Carlo method (MCM), and the analytic solution method (ASM). We cross-validated the three mathematical modeling results by comparing photon density profiles and analyzing modeling error. The mathematical modeling results verified that the penetration depth of light can be enhanced if incident beam power and diameter, amount of HCA, or whole and partial skin compression is increased. In this study, light with wavelengths of 377 nm, 577 nm, and 633 nm was used.
The cross-validated AUC for MCP-logistic regression with high-dimensional data.
Jiang, Dingfeng; Huang, Jian; Zhang, Ying
2013-10-01
We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), Bayesian information criterion (BIC) and extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
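The AUC part of the criterion is the Mann-Whitney pair-ordering statistic, and tuning-parameter selection reduces to maximizing it across candidates. A sketch with invented labels and score vectors (a real use would obtain the scores from cross-validated MCP-logistic fits at each penalty value):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the fraction
    of (positive, negative) pairs the scores order correctly, ties counting
    as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy tuning-parameter selection: keep the candidate whose held-out scores
# give the largest AUC (two hypothetical score vectors for two lambdas).
labels = [0, 0, 1, 0, 1, 1]
candidates = {
    0.1: [0.2, 0.4, 0.9, 0.1, 0.8, 0.7],   # scores under lambda = 0.1
    1.0: [0.5, 0.6, 0.4, 0.3, 0.9, 0.2],   # scores under lambda = 1.0
}
best = max(candidates, key=lambda lam: auc(candidates[lam], labels))
print(best, auc(candidates[best], labels))  # → 0.1 1.0
```

Because AUC is rank-based, it is invariant to monotone rescaling of the fitted probabilities, which is one reason it suits classification-oriented tuning better than likelihood-based criteria.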
Budka, Marcin; Gabrys, Bogdan
2013-01-01
Estimation of the generalization ability of a classification or regression model is an important issue, as it indicates the expected performance on previously unseen data and is also used for model selection. Currently used generalization error estimation procedures, such as cross-validation (CV) or bootstrap, are stochastic and, thus, require multiple repetitions in order to produce reliable results, which can be computationally expensive, if not prohibitive. The correntropy-inspired density-preserving sampling (DPS) procedure proposed in this paper eliminates the need for repeating the error estimation procedure by dividing the available data into subsets that are guaranteed to be representative of the input dataset. This allows the production of low-variance error estimates with an accuracy comparable to 10 times repeated CV at a fraction of the computations required by CV. This method can also be used for model ranking and selection. This paper derives the DPS procedure and investigates its usability and performance using a set of public benchmark datasets and standard classifiers.
Sound quality indicators for urban places in Paris cross-validated by Milan data.
Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre
2015-10-01
A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the global sound environment characterization, the perceived loudness of some emergent sources and the presence time ratio of sources that do not emerge from the background. Sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross-validation of the quality models extracted from the Paris data was carried out by conducting the same survey in Milan. The proposed general sound quality model correlates with the actual perceived sound quality at 72%; another model, omitting visual amenity and familiarity, correlates with perceived sound quality at 58%. In order to improve the sound quality indicator, a site classification was performed with Kohonen's Artificial Neural Network algorithm, and seven specific class models were developed. These specific models give more weight to source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments as assessed by Italian people.
Algina, James; Keselman, H. J.
2008-01-01
Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)
A Large-Scale Empirical Evaluation of Cross-Validation and External Test Set Validation in (Q)SAR.
Gütlein, Martin; Helma, Christoph; Karwath, Andreas; Kramer, Stefan
2013-06-01
(Q)SAR model validation is essential to ensure the quality of inferred models and to indicate future model predictivity on unseen compounds. Proper validation is also one of the requirements of regulatory authorities in order to accept a (Q)SAR model and to approve its use in real-world scenarios as an alternative testing method. At the same time, however, the question of how to validate a (Q)SAR model, in particular whether to employ variants of cross-validation or external test set validation, is still under discussion. In this paper, we empirically compare k-fold cross-validation with external test set validation. To this end we introduce a workflow that realistically simulates the common problem setting of building predictive models for relatively small datasets. The workflow allows applying the built and validated models to large amounts of unseen data, and comparing the performance of the different validation approaches. The experimental results indicate that cross-validation produces better-performing (Q)SAR models than external test set validation and reduces the variance of the results, while at the same time underestimating the performance on unseen compounds. The experimental results reported in this paper suggest that, contrary to current conception in the community, cross-validation may play a significant role in evaluating the predictivity of (Q)SAR models.
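The k-fold scheme being compared can be summarized as a partition in which each compound appears in exactly one test fold and in k-1 training sets. A minimal index-level sketch (contiguous folds for simplicity; real workflows usually shuffle first):

```python
def kfold_indices(n, k):
    """Partition indices 0..n-1 into k contiguous folds of near-equal size;
    fold f serves once as the test set and k-1 times as part of training."""
    sizes = [n // k + (1 if f < n % k else 0) for f in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return [(sum(folds[:f] + folds[f + 1:], []), folds[f]) for f in range(k)]

for train, test in kfold_indices(10, 3):
    print(len(train), len(test))
```

Every observation is tested exactly once, so the pooled test predictions cover the whole dataset, unlike a single external test set which spends part of the data permanently on validation.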
Bihm, Elson M.; Poindexter, Ann R.
1991-01-01
The original factor structure of the Aberrant Behavior Checklist was cross-validated with a U.S. sample of 470 persons with moderate to profound mental retardation (27 percent nonambulatory). Results replicated previous findings, suggesting that the original five factors (irritability, lethargy, stereotypic behavior, hyperactivity, and…
Shirsath, S.S.; Padding, J.T.; Clercx, H.J.H.; Kuipers, J.A.M.
2015-01-01
Three-dimensional particle tracking velocimetry (3D-PTV) is a promising technique to study the behavior of granular flows. The aim of this paper is to cross-validate 3D-PTV against independent or more established techniques, such as particle image velocimetry (PIV), electronic ultrasonic sensor meas
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
William Alves Lima
2006-09-01
Glucose is an important substrate utilized during exercise. Accurate measurement of glucose is vital to obtain trustworthy results. Enzymatic spectrophotometric methods are generally considered the "gold standard" laboratory procedure for measuring glucose (GEnz), but they are time consuming, costly, and inappropriate for large-scale field testing. Compact and portable glucose monitors (GAccu) are a quick and easy way to assess glucose in large numbers of subjects. This study therefore aimed to test the cross-validity of GAccu. The sample was composed of 107 men (age = 35.4±10.7 years; stature = 168.4±6.9 cm; body mass = 73.4±11.2 kg; %fat = 20.9±8.3%, by dual-energy x-ray absorptiometry). Blood for measuring fasting glucose was taken from the basilar vein (GEnz, Bioplus: Bio-2000) and from the ring finger (GAccu: Accu-Chek© Advantage©) after a 12-hour overnight fast. GEnz was used as the criterion for cross-validity. A paired t-test showed differences (p
Cross Validation Through Two-dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor; Tay, Keng; Romano, Walter; Li, Shuo
2016-06-08
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross-validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces-based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
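For contrast, the traditional two-parameter grid search that the solution-surface approach replaces can be sketched in a few lines; the error surface here is an invented toy function standing in for a per-fold CS-SVM validation error:

```python
def grid_search_2d(c_plus_grid, c_minus_grid, val_error):
    """Baseline two-parameter model selection: evaluate the validation error
    at every (C+, C-) grid point and keep the minimizer. Exhaustive refitting
    like this is exactly what a solution-surface method avoids."""
    best = None
    for cp in c_plus_grid:
        for cm in c_minus_grid:
            err = val_error(cp, cm)
            if best is None or err < best[0]:
                best = (err, cp, cm)
    return best

# Hypothetical smooth validation-error surface with its minimum at (1.0, 0.5).
toy_error = lambda cp, cm: (cp - 1.0) ** 2 + (cm - 0.5) ** 2
grid = [0.1, 0.5, 1.0, 2.0]
print(grid_search_2d(grid, grid, toy_error))  # → (0.0, 1.0, 0.5)
```

The grid search only sees the error at sampled points, so its minimum can miss the true global minimum between grid lines, whereas a surface fitted for all parameter values cannot.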
The Cross-Validation in the Dialogue of Mental and Neuroscience
Drozdstoj St. Stoyanov
2009-06-01
The aim of the Validation Theory (VT) as a meta-empirical construct is to introduce a new vista in the reorganization of neuroscience, in its role as a science of Mind-and-Brain unification. The present study focuses on existing discrepancies and contradictions between the methods of the basic neurosciences and those prescribed by psychological science. Our view is that these discrepancies stem from the high penetration of traditional neuroscience methods into biological processes coupled with low extrapolation (experimenting with animal models), and vice versa for psychological and psychopathological methods. A novel epistemological model for integrating psychological and neuroscientific knowledge is proposed. It is represented as a simultaneous investigation of brain activity with penetrating, high-resolution functional Magnetic Resonance Imaging and the in extenso application of a set of psychological tests, exploring the correspondence (cross-validation) between their components. The proposed approach leads to a revision of neuroscientific and psychological terms, methods, and data, followed by a revision of their relative interplay. This would make possible the practical exchange of expensive but objective fMRI for lower-cost psychological instruments (an effect of "minimization"). Approaches proceeding from VT will infiltrate diagnostics and prevention in psychiatry. At a further stage, pharmaco-psychological monitoring will uncover new opportunities. This proof-based research and practice represents an integral counterpart of values-based mental health care. In conclusion, VT is an evolutionary cornerstone for traversing the stage of a Brain-Brain paradigm and reaching the development of the Mind-Brain paradigm.
Callie Theron
2006-04-01
Twigge, Theron, Steele and Meiring (2004) concluded that it is possible to develop a predictability index, based on a concept originally proposed by Ghiselli (1956, 1960a, 1960b), which correlates with the real residuals derived from the regression of a criterion on one or more predictors. The addition of such a predictability index to the original regression model was found to produce a statistically significant increase in the correlation between the selection battery and the criterion. To demonstrate convincingly the feasibility of enhancing selection utility through the use of predictability indices would, however, require cross-validation of the results obtained on a derivation sample against a holdout sample selected from the same population. The objective of this article is consequently to investigate the extent to which such a predictability index, developed on a validation sample, would successfully cross-validate to a holdout sample. Encouragingly positive results were obtained. Recommendations for future research are made.
Cross-validation analysis for genetic evaluation models for ranking in endurance horses.
García-Ballesteros, S; Varona, L; Valera, M; Gutiérrez, J P; Cervantes, I
2017-06-21
Ranking trait was used as a selection criterion for competition horses to estimate racing performance. In the literature the most common approaches to estimate breeding values are the linear or threshold statistical models. However, recent studies have shown that a Thurstonian approach was able to fix the race effect (competitive level of the horses that participate in the same race), thus suggesting a better prediction accuracy of breeding values for ranking trait. The aim of this study was to compare the predictability of linear, threshold and Thurstonian approaches for genetic evaluation of ranking in endurance horses. For this purpose, eight genetic models were used for each approach with different combinations of random effects: rider, rider-horse interaction and environmental permanent effect. All genetic models included gender, age and race as systematic effects. The database that was used contained 4065 ranking records from 966 horses and that for the pedigree contained 8733 animals (47% Arabian horses), with an estimated heritability around 0.10 for the ranking trait. The prediction ability of the models for racing performance was evaluated using a cross-validation approach. The average correlation between real and predicted performances across genetic models was around 0.25 for threshold, 0.58 for linear and 0.60 for Thurstonian approaches. Although no significant differences were found between models within approaches, the best genetic model included: the rider and rider-horse random effects for threshold, only rider and environmental permanent effects for linear approach and all random effects for Thurstonian approach. The absolute correlations of predicted breeding values among models were higher between threshold and Thurstonian: 0.90, 0.91 and 0.88 for all animals, top 20% and top 5% best animals. For rank correlations these figures were 0.85, 0.84 and 0.86. The lower values were those between linear and threshold approaches (0.65, 0.62 and 0.51). In
Luca eIani
2015-01-01
Objective: The Food Craving Questionnaire-Trait (FCQ-T) is commonly used to assess habitual food cravings among individuals. Previous studies have shown that a brief version of this instrument (FCQ-T-r) has good reliability and validity. This article is the first to use confirmatory factor analysis to examine the psychometric properties of the FCQ-T-r in a cross-validation study. Method: Habitual food cravings, as well as emotion regulation strategies, affective states, and disordered eati...
Staunstrup, Jørgen
1998-01-01
This paper proposes that Interface Consistency is an important issue for the development of modular designs. By providing a precise specification of component interfaces, it becomes possible to check that separately developed components use a common interface in a coherent manner, thus avoiding a very significant source of design errors. A wide range of interface specifications is possible; the simplest form is a syntactical check of parameter types. However, today it is possible to do more sophisticated forms involving semantic checks.
Weak Consistency of Quasi-Maximum Likelihood Estimates in Generalized Linear Models
张戈; 吴黎军
2013-01-01
We study the solution β̂_n of the quasi-maximum likelihood equation L_n(β) = Σ_{i=1}^n X_i H(X'_i β) Λ^{-1}(X'_i β)(y_i − h(X'_i β)) = 0 for generalized linear models. Under the assumption of a non-canonical link function and some other mild conditions, we prove the convergence rate β̂_n − β_0 ≠ o_p(λ_n^{-1/2}), and show that a necessary condition for weak consistency of the quasi-maximum likelihood estimate is that S_n^{-1} → 0 as n → ∞.
Bordin, Lorenzo; Creminelli, Paolo; Mirbabayi, Mehrdad; Noreña, Jorge
2017-03-01
We argue that isotropic scalar fluctuations in solid inflation are adiabatic in the super-horizon limit. During the solid phase this adiabatic mode has peculiar features: constant energy-density slices and comoving slices do not coincide, and their curvatures, parameterized respectively by ζ and 𝓡, both evolve in time. The existence of this adiabatic mode implies that Maldacena's squeezed-limit consistency relation holds after angular averaging over the long mode. The correlation function of a long-wavelength spherical scalar mode with several short scalar or tensor modes is fixed by the scaling behavior of the correlators of the short modes, independently of the solid inflation action or the dynamics of reheating.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Hijmans, Robert J
2012-03-01
Species distribution models are usually evaluated with cross-validation. In this procedure evaluation statistics are computed from model predictions for sites of presence and absence that were not used to train (fit) the model. Using data for 226 species, from six regions, and two species distribution modeling algorithms (Bioclim and MaxEnt), I show that this procedure is highly sensitive to "spatial sorting bias": the difference between the geographic distance from testing-presence to training-presence sites and the geographic distance from testing-absence (or testing-background) to training-presence sites. I propose the use of pairwise distance sampling to remove this bias, and the use of a null model that only considers the geographic distance to training sites to calibrate cross-validation results for remaining bias. Model evaluation results (AUC) were strongly inflated: the null model performed better than MaxEnt for 45% and better than Bioclim for 67% of the species. Spatial sorting bias and area under the receiver-operator curve (AUC) values increased when using partitioned presence data and random-absence data instead of independently obtained presence-absence testing data from systematic surveys. Pairwise distance sampling removed spatial sorting bias, yielding null models with an AUC close to 0.5, such that AUC was the same as null model calibrated AUC (cAUC). This adjustment strongly decreased AUC values and changed the ranking among species. Cross-validation results for different species are only comparable after removal of spatial sorting bias and/or calibration with an appropriate null model.
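One way to quantify spatial sorting bias is as the ratio of mean nearest-neighbor distances from testing sites to training-presence sites; values near 1 suggest little bias, values near 0 strong bias. This is an illustrative sketch with invented coordinates, not the implementation used in the study:

```python
from math import hypot

def mean_nn_distance(test_pts, train_pts):
    # Mean Euclidean distance from each test point to its nearest training point.
    return sum(min(hypot(tx - x, ty - y) for x, y in train_pts)
               for tx, ty in test_pts) / len(test_pts)

def spatial_sorting_bias(test_pres, test_abs, train_pres):
    """Ratio of the mean test-presence-to-training-presence distance to the
    mean test-absence-to-training-presence distance."""
    return (mean_nn_distance(test_pres, train_pres) /
            mean_nn_distance(test_abs, train_pres))

train_pres = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
test_pres = [(0.5, 0.5), (1.0, 1.0)]   # close to training presences
test_abs = [(5.0, 5.0), (6.0, 4.0)]    # far away: strong sorting bias
print(round(spatial_sorting_bias(test_pres, test_abs, train_pres), 3))
```

Pairwise distance sampling, as proposed in the abstract, would subsample testing-presence/testing-absence pairs so that this ratio is pushed back toward 1 before AUC is computed.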
Songul CINAROGLU
2015-10-01
Objective: Random Forest (RF) is a machine learning technique used for classification and regression by generating a number of trees. There is a debate in the literature about how generating different numbers of trees affects the classification performance of this method. The aim of this study is therefore to observe RF performance results when generating different numbers of trees and changing the "k" parameter in cross-validation while classifying OECD countries according to health expenditures. Material and Methods: k-fold cross-validation was implemented on this dataset, and the Mann-Whitney U test was used to assess whether there is a difference in RF performance results (measured by AUC) when the "k" parameter was high (k≥13) or low (k<13) and when generating different numbers of trees (50, 100, 150, 200, 250). Results: The results of this study show that generating different numbers of trees in RF does not make any significant change (p>0.05) in performance results. Perceived health status was the variable with the most information gain for predicting health expenditures. Conclusion: It is advisable for future studies on this subject to examine performance results on datasets of different types and sizes.
Chiara Garrovo
2013-01-01
Stem cells are characterized by the ability to renew themselves and to differentiate into specialized cell types, while stem cell therapy is believed to treat a number of different human diseases through either cell regeneration or paracrine effects. Herein, an in vivo and ex vivo near infrared time domain (NIR TD) optical imaging study was undertaken to evaluate the migratory ability of murine adipose tissue-derived multipotent adult stem cells (mAT-MASC) after intramuscular injection in mice. In vivo NIR TD optical imaging data analysis showed a migration of DiD-labelled mAT-MASC in the leg opposite the injection site, which was confirmed by a fibered confocal microendoscopy system. Ex vivo NIR TD optical imaging results showed a systemic distribution of labelled cells. Considering a potential microenvironmental contamination, a cross-validation study by multimodality approaches was followed: mAT-MASC were isolated from male mice constitutively expressing eGFP, which was detectable using immunofluorescence and qPCR techniques. Y-chromosome-positive cells, injected into wild-type female recipients, were detected by FISH. Cross-validation confirmed the data obtained by in vivo/ex vivo TD optical imaging analysis. In summary, our data demonstrate the usefulness of NIR TD optical imaging in tracking delivered cells, giving insights into the migratory properties of the injected cells.
Chapman, L Kevin; Vines, Lauren; Petrie, Jenny
2011-05-01
The current study attempted a cross-validation of specific phobia domains in a community-based sample of African American adults based on a previous model of phobia domains in a college student sample of African Americans. Subjects were 100 African American community-dwelling adults who completed the Fear Survey Schedule-Second Edition (FSS-II). Domains of fear were created using a similar procedure as the original, college sample of African American adults. A model including all of the phobia domains from the FSS-II was initially tested and resulted in poor model fit. Cross-validation was subsequently attempted through examining the original factor pattern of specific phobia domains from the college sample (Chapman, Kertz, Zurlage, & Woodruff-Borden, 2008). Data from the current, community based sample of African American adults provided poor fit to this model. The trimmed model for the current sample included the animal and social anxiety factors as in the original model. The natural environment-type specific phobia factor did not provide adequate fit for the community-based sample of African Americans. Results indicated that although different factor loading patterns of fear may exist among community-based African Americans as compared to African American college students, both animal and social fears are nearly identical in both groups, indicating a possible cultural homogeneity for phobias in African Americans. Potential explanations of these findings and future directions are discussed.
Link, William; Sauer, John R.
2016-01-01
The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion. We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.
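The Watanabe-Akaike information criterion mentioned above has a standard sample-based form: the log pointwise predictive density minus a variance-based effective-parameter penalty, put on the deviance scale. A sketch under that common formulation, using a simulated log-likelihood matrix rather than real Breeding Bird Survey output:

```python
import math
from statistics import mean, pvariance

def waic(log_lik):
    """log_lik[s][i]: log-likelihood of observation i under posterior draw s.

    Returns -2 * (lppd - p_waic), on the deviance scale (lower is better).
    """
    n_draws, n_obs = len(log_lik), len(log_lik[0])
    # lppd: for each observation, average the likelihood over posterior draws.
    lppd = sum(math.log(mean(math.exp(log_lik[s][i]) for s in range(n_draws)))
               for i in range(n_obs))
    # Effective number of parameters: per-observation variance of log-likelihoods.
    p_waic = sum(pvariance([log_lik[s][i] for s in range(n_draws)])
                 for i in range(n_obs))
    return -2.0 * (lppd - p_waic)

# Degenerate check: identical draws give zero penalty, so WAIC is just -2 * lppd.
flat = [[math.log(0.5), math.log(0.25)]] * 3
print(round(waic(flat), 4))  # 4.1589 (= 6 ln 2)
```

In practice the log-likelihood matrix comes from MCMC draws, and WAIC approximates the Bayesian cross-validation estimate without refitting the model once per held-out observation.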
Luca eIani
2015-04-01
Objective: The Food Craving Questionnaire-Trait (FCQ-T) is commonly used to assess habitual food cravings among individuals. Previous studies have shown that a brief version of this instrument (FCQ-T-r) has good reliability and validity. This article is the first to use confirmatory factor analysis to examine the psychometric properties of the FCQ-T-r in a cross-validation study. Method: Habitual food cravings, as well as emotion regulation strategies, affective states, and disordered eating behaviors, were investigated in two independent samples of non-clinical adult volunteers (Sample 1: N = 368; Sample 2: N = 246). Confirmatory factor analyses were conducted to simultaneously test model fit statistics and the dimensionality of the instrument. FCQ-T-r reliability was assessed by computing the composite reliability coefficient. Results: The analysis supported the unidimensional structure of the scale, and fit indices were acceptable for both samples. The FCQ-T-r showed excellent reliability and moderate to high correlations with negative affect and disordered eating. Conclusions: Our results indicate that FCQ-T-r scores can be reliably used to assess habitual cravings in an Italian non-clinical sample of adults. The robustness of these results is tested by a cross-validation of the model using two independent samples. Further research is required to expand on these findings, particularly in children and adolescents.
Cerebral blood flow measurement using fMRI and PET: a cross-validation study.
Chen, Jean J; Wieckowska, Marguerite; Meyer, Ernst; Pike, G Bruce
2008-01-01
An important aspect of functional magnetic resonance imaging (fMRI) is the study of brain hemodynamics, and MR arterial spin labeling (ASL) perfusion imaging has gained wide acceptance as a robust and noninvasive technique. However, the cerebral blood flow (CBF) measurements obtained with ASL fMRI have not been fully validated, particularly during global CBF modulations. We present a comparison of cerebral blood flow changes (ΔCBF) measured using a flow-sensitive alternating inversion recovery (FAIR) ASL perfusion method to those obtained using H₂¹⁵O PET, which is the current gold standard for in vivo imaging of CBF. To study regional and global CBF changes, a group of 10 healthy volunteers were imaged under identical experimental conditions during presentation of 5 levels of visual stimulation and one level of hypercapnia. The CBF changes were compared using 3 types of region-of-interest (ROI) masks. FAIR measurements of CBF changes were found to be slightly lower than those measured with PET (average ΔCBF of 21.5 ± 8.2% for FAIR versus 28.2 ± 12.8% for PET at maximum stimulation intensity). Nonetheless, there was a strong correlation between measurements of the two modalities. Finally, a t-test comparison of the slopes of the linear fits of PET versus ASL ΔCBF for all 3 ROI types indicated no significant difference from unity (P > .05).
Finn, Natalie K; Torres, Elisa M; Ehrhart, Mark G; Roesch, Scott C; Aarons, Gregory A
2016-08-01
The Implementation Leadership Scale (ILS) is a brief, pragmatic, and efficient measure that can be used for research or organizational development to assess leader behaviors and actions that actively support effective implementation of evidence-based practices (EBPs). The ILS was originally validated with mental health clinicians. This study validates the ILS factor structure with providers in community-based organizations (CBOs) providing child welfare services. Participants were 214 service providers working in 12 CBOs that provide child welfare services. All participants completed the ILS, reporting on their immediate supervisor. Confirmatory factor analyses were conducted to examine the factor structure of the ILS. Internal consistency reliability and measurement invariance were also examined. Confirmatory factor analyses showed acceptable fit to the hypothesized first- and second-order factor structure. Internal consistency reliability was strong and there was partial measurement invariance for the first-order factor structure when comparing child welfare and mental health samples. The results support the use of the ILS to assess leadership for implementation of EBPs in child welfare organizations.
Adams, James; Howsmon, Daniel P; Kruger, Uwe; Geis, Elizabeth; Gehn, Eva; Fimbres, Valeria; Pollard, Elena; Mitchell, Jessica; Ingram, Julie; Hellmers, Robert; Quig, David; Hahn, Juergen
2017-01-01
A number of previous studies examined a possible association of toxic metals and autism, and over half of those studies suggest that toxic metal levels are different in individuals with Autism Spectrum Disorders (ASD). Additionally, several studies found that those levels correlate with the severity of ASD. In order to further investigate these points, this paper performs the most detailed statistical analysis to date of a data set in this field. First morning urine samples were collected from 67 children and adults with ASD and 50 neurotypical controls of similar age and gender. The samples were analyzed to determine the levels of 10 urinary toxic metals (UTM). Autism-related symptoms were assessed with eleven behavioral measures. Statistical analysis was used to distinguish participants on the ASD spectrum and neurotypical participants based upon the UTM data alone. The analysis also included examining the association of autism severity with toxic metal excretion data using linear and nonlinear analysis. "Leave-one-out" cross-validation was used to ensure statistical independence of results. Average excretion levels of several toxic metals (lead, tin, thallium, antimony) were significantly higher in the ASD group. However, ASD classification using univariate statistics proved difficult due to large variability, but nonlinear multivariate statistical analysis significantly improved ASD classification with Type I/II errors of 15% and 18%, respectively. These results clearly indicate that the urinary toxic metal excretion profiles of participants in the ASD group were significantly different from those of the neurotypical participants. Similarly, nonlinear methods determined a significantly stronger association between the behavioral measures and toxic metal excretion. The association was strongest for the Aberrant Behavior Checklist (including subscales on Irritability, Stereotypy, Hyperactivity, and Inappropriate Speech), but significant associations were found
Cross-validation of methods used for analysis of MTBE and other gasoline components in groundwater
Lacorte, S.; Rosell, M.; Barcelo [Department of Environmental Chemistry, IIQAB-CSIC, Barcelona (Spain); Olivella, L.; Figueras, M.; Ginebreda, A. [Ministry of Environmental Affairs, Barcelona (Spain). Catalan Water Agency
2003-06-01
Head-space gas chromatography with flame-ionization detection (HS-GC-FID) and purge-and-trap gas chromatography-mass spectrometry (P&T-GC-MS) have been used to determine methyl tert-butyl ether (MTBE) and benzene, toluene, and the xylenes (BTEX) in groundwater. In the work discussed in this paper, measures of quality, e.g. recovery (94-111%), precision (4.6-12.2%), limits of detection (0.3-5.7 µg L⁻¹ for HS and 0.001 µg L⁻¹ for P&T), and robustness, were compared for both methods. In addition, for purposes of comparison, groundwater samples from areas suffering from odor problems because of fuel spillage and tank leakage were analyzed by use of both techniques. For high concentration levels there was good correlation between results from both methods. Results from P&T analysis showed that 20 of the 21 samples from the vulnerable areas contained MTBE at concentrations up to 666 µg L⁻¹. Levels in seven samples exceeded maximum permissible levels for odor and taste set by the USEPA (20-40 µg L⁻¹); for thirteen of the samples levels were between 0.28 and 17.9 µg L⁻¹. The sensitivity of HS-GC-FID was, however, two to three orders of magnitude lower, and concentrations of 6-10 µg L⁻¹ could not always be detected, leading to false negatives. The same behavior was observed for analysis of BTEX: the lower sensitivity of HS-GC-FID and coelution of peaks led to results of poor reliability, and confirmation by GC-MS was always necessary. The applicability of the two analytical methods widely used for routine monitoring of VOCs thus depends on the organoleptic thresholds of MTBE and BTEX in groundwater (20 µg L⁻¹) and the need to survey trace concentrations of persistent MTBE in vulnerable aquifers.
Calvet, Jean-Christophe; Barbu, Alina; Carrer, Dominique; Meurey, Catherine
2014-05-01
Long (more than 30 years) time series of satellite-derived products over land are now available. They concern Essential Climate Variables (ECV) such as LAI, FAPAR, surface albedo, and soil moisture. The direct validation of such Climate Data Records (CDR) is not easy, as in situ observations are limited in space and time. Therefore, indirect validation has a key role. It consists in comparing the products with similar preexisting products derived from satellite observations or from land surface model (LSM) simulations. The most advanced indirect validation technique consists in integrating the products into a LSM using a data assimilation scheme. The obtained reanalysis accounts for the synergies of the various upstream products and provides statistics which can be used to monitor the quality of the assimilated observations. Meteo-France develops the ISBA-A-gs generic LSM able to represent the diurnal cycle of the surface fluxes together with the seasonal, interannual and decadal variability of the vegetation biomass. The LSM is embedded in the SURFEX modeling platform together with a simplified extended Kalman filter. These tools form a Land Data Assimilation System (LDAS). The current version of the LDAS assimilates SPOT-VGT LAI and ASCAT surface soil moisture (SSM) products over France (8km x 8km), and a passive monitoring of albedo, FAPAR and Land Surface temperature (LST) is performed (i.e., the simulated values are compared with the satellite products). The LDAS-France system is used in the European Copernicus Global Land Service (http://land.copernicus.eu/global/) to monitor the quality of upstream products. The LDAS generates statistics whose trends can be analyzed in order to detect possible drifts in the quality of the products: (1) for LAI and SSM, metrics derived from the active monitoring (i.e. assimilation) such as innovations (observations vs. model forecast), residuals (observations vs. analysis), and increments (analysis vs. model forecast) ; (2
Martin Schecklmann
2015-01-01
Background: The aim of the present study was to assess the prevalence of insomnia in chronic tinnitus and the association of tinnitus distress and sleep disturbance. Methods: We retrospectively analysed data of 182 patients with chronic tinnitus who completed the Tinnitus Questionnaire (TQ) and the Regensburg Insomnia Scale (RIS). Descriptive comparisons with the validation sample of the RIS, which included exclusively patients with primary/psychophysiological insomnia, correlation analyses of the RIS with TQ scales, and principal component analyses (PCA) in the tinnitus sample were performed. The TQ total score was corrected for the TQ sleep items. Results: The prevalence of insomnia was high in tinnitus patients (76%), and tinnitus distress correlated with sleep disturbance (r = 0.558). The TQ sleep subscore correlated with the RIS sum score (r = 0.690). PCA with all TQ and RIS items showed one sleep factor consisting of all RIS and the TQ sleep items. PCA with only TQ sleep and RIS items showed sleep- and tinnitus-specific factors. The sleep factors (only RIS items) were sleep depth and fearful focusing. The TQ sleep items represented tinnitus-related sleep problems. Discussion: Chronic tinnitus and primary insomnia are highly related and might share similar psychological and neurophysiological mechanisms leading to impaired sleep quality.
Palic, Sabina; Kappel, Michelle Lind; Makransky, Guido
2016-01-01
Using Rasch analysis, we evaluated the psychometrics of the Health of the Nation Outcome Scales (HoNOS) in pre-treatment data of consecutive refugee patients (N = 448) from a Danish psychiatric clinic. We then carried out a cross-validation of the pre-treatment HoNOS model on post-treatment data from the same group. A revised 10-item HoNOS fit the Rasch model at pre-treatment, and also showed excellent fit within the cross-validation data. Culture, gender, and need for translation did not exert serious bias on the measure's performance. The results establish good monitoring properties of the 10-item HoNOS.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Greubel, Jana; Arlinghaus, Anna; Nachreiner, Friedhelm; Lombardi, David A
2016-11-01
Replication and cross-validation of results on health and safety risks of work at unusual times. Data from two independent surveys (European Working Conditions Surveys 2005 and 2010; EU 2005: n = 23,934 and EU 2010: n = 35,187) were used to examine the relative risks of working at unusual times (evenings, Saturdays, and Sundays) on work-life balance, work-related health complaints, and occupational accidents using logistic regression while controlling for potential confounders such as demographics, work load, and shift work. For the EU 2005 survey, evening work was significantly associated with an increased risk of poor work-life balance (OR 1.69) and work-related health complaints (OR 1.14), Saturday work with poor work-life balance (OR 1.49) and occupational accidents (OR 1.34), and Sunday work with poor work-life balance (OR 1.15) and work-related health complaints (OR 1.17). For EU 2010, evening work was associated with poor work-life balance (OR 1.51) and work-related health complaints (OR 1.12), Saturday work with poor work-life balance (OR 1.60) and occupational accidents (OR 1.19) but a decrease in risk for work-related health complaints (OR 0.86) and Sunday work with work-related health complaints (OR 1.13). Risk estimates in both samples yielded largely similar results with comparable ORs and overlapping confidence intervals. Work at unusual times constitutes a considerable risk to social participation and health and showed structurally consistent effects over time and across samples.
Niemeijer, Anuschka S; van Waelvelde, Hilde; Smits-Engelsman, Bouwien C M
2015-02-01
The Movement Assessment Battery for Children has been revised as the Movement ABC-2 (Henderson, Sugden, & Barnett, 2007). In Europe, the 15th percentile score on this test is recommended for one of the DSM-IV diagnostic criteria for Developmental Coordination Disorder (DCD). A representative sample of Dutch and Flemish children was tested to cross-validate the UK standard scores, including the 15th percentile score. First, the mean, SD and percentile scores of Dutch children were compared to those of UK normative samples. Item standard scores of Dutch speaking children deviated from the UK reference values suggesting necessary adjustments. Except for very young children, the Dutch-speaking samples performed better. Second, based on the mean and SD and clinical relevant cut-off scores (5th and 15th percentile), norms were adjusted for the Dutch population. For diagnostic use, researchers and clinicians should use the reference norms that are valid for the group of children they are testing. The results indicate that there possibly is an effect of testing procedure in other countries that validated the UK norms and/or cultural influence on the age norms of the Movement ABC-2. It is suggested to formulate criterion-based norms for age groups in addition to statistical norms.
Generalised skinfold equations developed in the 1970s are commonly used to estimate laboratory-measured percentage fat (BF%). The equations were developed on predominately white individuals using Siri's two-component percentage fat equation (BF%-GEN). We cross-validated the Jackson-Pollock (JP) gene...
van Dam, D.; Ehring, T.; Vedel, E.; Emmelkamp, P.M.G.
2013-01-01
This study aimed to cross-validate earlier findings regarding the diagnostic efficiency of a modified version of the Primary Care Posttraumatic Stress Disorder (PC-PTSD) screening questionnaire (A. Prins, P. Ouimette, R. Kimerling, R. P. Cameron, D. S. Hugelshofer, J. Shaw-Hegwer, et al., 2004). The
Morris, Cody E.; Owens, Scott G.; Waddell, Dwight E.; Bass, Martha A.; Bentley, John P.; Loftin, Mark
2014-01-01
An equation published by Loftin, Waddell, Robinson, and Owens (2010) was cross-validated using ten normal-weight walkers, ten overweight walkers, and ten distance runners. Energy expenditure was measured at preferred walking (normal-weight walker and overweight walkers) or running pace (distance runners) for 5 min and corrected to a mile. Energy…
Blagus, Rok; Lusa, Lara
2015-11-04
Prediction models are used in clinical research to develop rules that can accurately predict the outcome of patients based on some of their characteristics. They represent a valuable tool in the decision-making process of clinicians and health policy makers, as they enable them to estimate the probability that patients have or will develop a disease, will respond to a treatment, or that their disease will recur. The interest devoted to prediction models in the biomedical community has been growing in the last few years. Often the data used to develop the prediction models are class-imbalanced, as only a few patients experience the event (and therefore belong to the minority class). Prediction models developed using class-imbalanced data tend to achieve sub-optimal predictive accuracy in the minority class. This problem can be diminished by using sampling techniques aimed at balancing the class distribution. These techniques include under- and oversampling, where a fraction of the majority class samples are retained in the analysis or new samples from the minority class are generated. The correct assessment of how the prediction model is likely to perform on independent data is of crucial importance; in the absence of an independent data set, cross-validation is normally used. While the importance of correct cross-validation is well documented in the biomedical literature, the challenges posed by the joint use of sampling techniques and cross-validation have not been addressed. We show that care must be taken to ensure that cross-validation is performed correctly on sampled data, and that the risk of overestimating the predictive accuracy is greater when oversampling techniques are used. Examples based on the re-analysis of real datasets and simulation studies are provided. We identify some results from the biomedical literature where the incorrect cross-validation was performed and where we expect that the performance of oversampling techniques was heavily overestimated.
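The pitfall the authors describe, resampling before splitting so that copies of a minority-class sample land in both the training and test parts, is avoided by keeping the oversampling step inside the cross-validation loop. A schematic stdlib sketch (the function names and the `fit`/`score` interface are illustrative, not from the paper):

```python
import random

def oversample(data, seed=0):
    """Duplicate minority-class examples until the two classes are balanced."""
    rng = random.Random(seed)
    pos = [d for d in data if d[1] == 1]
    neg = [d for d in data if d[1] == 0]
    small, large = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    return data + [rng.choice(small) for _ in range(len(large) - len(small))]

def cross_validate(data, k, fit, score):
    folds = [data[i::k] for i in range(k)]
    results = []
    for i, test in enumerate(folds):
        train = [d for f in folds[:i] + folds[i + 1:] for d in f]
        # Correct order: oversample AFTER the split, on the training part only,
        # so no duplicated minority sample can leak into the test fold.
        results.append(score(fit(oversample(train)), test))
    return sum(results) / len(results)

data = [((i,), 1) for i in range(3)] + [((i,), 0) for i in range(12)]
balanced = oversample(data)
print(sum(1 for _, y in balanced if y == 1))  # 12: classes now balanced
```

Calling `oversample(data)` once and then splitting the balanced set into folds is the incorrect variant: the same minority record would then appear on both sides of a split, inflating the estimated accuracy.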
Kubo Chiharu
2007-03-01
Background: The construct validity of alexithymia and its assessment using the 20-item Toronto Alexithymia Scale (TAS-20) in Japan is unknown. Low reliability has been found for the third factor of the TAS-20 in some cultures, and the factor structure for psychosomatic disorder patients has not been adequately investigated. Although alexithymia most likely has certain developmental aspects, this has infrequently been investigated. Methods: The newly developed Japanese TAS-20 was administered to a normative sample (n = 2,718; 14-84 y.o.), along with the NEO Five-Factor Inventory (NEO-FFI) for cross-validation. Psychosomatic patients (n = 1,924; 12-87 y.o.) were tested to evaluate the factor structure in a clinical sample. College students (n = 196) were used for a test-retest study. Internal reliability and consistency were assessed, and the factorial structure was evaluated using confirmatory and exploratory factor analyses for both the normative and the clinical samples. The correlations between the TAS-20 and the NEO-FFI factor scores were evaluated. Age-related and gender differences in the TAS-20 were explored using analysis of variance in the normative sample. Results: The original three-factor model of the TAS-20 was confirmed to be valid for these Japanese samples, although a 4-factor solution that included negatively keyed items (NKI) as an additional factor was more effective. Significant correlations of the TAS-20 with the NEO-FFI were found, as has been previously reported. Factor analyses of the normative and patient samples showed similar patterns. The TAS-20 total, difficulty in identifying feelings (DIF), and difficulty in describing feelings (DDF) scores were high for teenagers, decreased with age, and from the 30s did not change significantly. In contrast, externally oriented thinking (EOT) scores showed an almost linear positive correlation with age. DIF scores were higher for females, while EOT scores were higher for males.
Haifeng Gao
2015-04-01
This research article analyzes the resonant reliability at the rotating speed of 6150.0 r/min for a low-pressure compressor rotor blade, with the aim of improving the computational efficiency of reliability analysis. The study applies a least squares support vector machine (LS-SVM) to predict the natural frequencies of the rotor blade considered. To build a more stable and reliable LS-SVM model, leave-one-out cross-validation is introduced to search for the optimal LS-SVM parameters, and the resulting LS-SVM with leave-one-out cross-validation is used to analyze the resonant reliability. Additionally, the modal analysis at the rotating speed of 6150.0 r/min for the rotor blade is treated as a tandem system to simplify the analysis and design process, and the randomness of the factors influencing the frequencies, such as material properties, structural dimensions, and operating conditions, is taken into consideration. A back-propagation neural network is used for comparison to verify the proposed approach, based on the same training and testing sets as the LS-SVM with leave-one-out cross-validation. Finally, the statistical results show that the proposed approach is effective and feasible and can be applied to structural reliability analysis.
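Leave-one-out parameter search as used above follows a generic pattern: each candidate parameter value is scored by predicting every sample from all the others. A sketch with a simple RBF-weighted-mean predictor standing in for the LS-SVM (the predictor, the grid values, and all names are illustrative assumptions, not the article's implementation):

```python
import math

def rbf_mean_predictor(gamma):
    """Kernel-weighted mean: a crude stand-in for an RBF-kernel LS-SVM regressor."""
    def predict(train_x, train_y, x):
        w = [math.exp(-gamma * (x - xi) ** 2) for xi in train_x]
        return sum(wi * yi for wi, yi in zip(w, train_y)) / sum(w)
    return predict

def loocv_mse(xs, ys, predict):
    """Mean squared error under leave-one-out cross-validation."""
    errors = []
    for i in range(len(xs)):
        rest_x = xs[:i] + xs[i + 1:]
        rest_y = ys[:i] + ys[i + 1:]
        errors.append((predict(rest_x, rest_y, xs[i]) - ys[i]) ** 2)
    return sum(errors) / len(errors)

def best_gamma(xs, ys, grid):
    # Keep the kernel width whose LOOCV error is smallest.
    return min(grid, key=lambda g: loocv_mse(xs, ys, rbf_mean_predictor(g)))

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 2.0, 3.0, 4.0]      # a noiseless linear relationship
grid = [0.01, 0.1, 1.0, 10.0]
g = best_gamma(xs, ys, grid)
print(g in grid)                     # True
```

The same loop structure applies to a real LS-SVM: only `rbf_mean_predictor` would be replaced by fitting the kernel machine on the n - 1 retained samples.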
Wagner Mateus Costa Melo
2014-01-01
The present study aimed to predict the performance of maize hybrids and assess whether the total effects of associated markers (TEAM) method can correctly predict hybrids using cross-validation and regional trials. The training was performed in 7 locations of Southern Brazil during the 2010/11 harvest. The regional assays were conducted in 6 different South Brazilian locations during the 2011/12 harvest. In the training trial, 51 lines from different backgrounds were used to create 58 single-cross hybrids. Seventy-nine microsatellite markers were used to genotype these 51 lines. In the cross-validation method the predictive accuracy ranged from 0.10 to 0.96, depending on the sample size. Furthermore, the accuracy was 0.30 when the values of hybrids that were not used in the training population (119) were predicted for the regional assays. Regarding selective loss, the TEAM method correctly predicted 50% of the hybrids selected in the regional assays. Losses occurred in only 33% of cases; that is, only 33% of the materials predicted to be good in the training trial were considered bad in the regional assays. Our results show that predictive validation across different crop conditions is possible, and the cross-validation results strikingly represented the field performance.
Messier, Kyle P; Campbell, Ted; Bradley, Philip J; Serre, Marc L
2015-08-18
Radon (²²²Rn) is a naturally occurring, chemically inert, colorless, and odorless radioactive gas produced from the decay of uranium (²³⁸U), which is ubiquitous in rocks and soils worldwide. Exposure to ²²²Rn via inhalation is likely the second leading cause of lung cancer after cigarette smoking; however, exposure through untreated groundwater also contributes to both the inhalation and ingestion routes. A land use regression (LUR) model for groundwater ²²²Rn with anisotropic geological and ²³⁸U-based explanatory variables is developed, which helps elucidate the factors contributing to elevated ²²²Rn across North Carolina. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater ²²²Rn across North Carolina, including prediction uncertainty. The LUR-BME model of groundwater ²²²Rn results in a leave-one-out cross-validation r² of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled ²²²Rn concentrations show variability among intrusive felsic geological formations, likely due to average bedrock ²³⁸U estimated from overlying stream-sediment ²³⁸U concentrations, which constitute a widely distributed, consistently analyzed point-source dataset.
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
Fraga, Ignacio; Cea, Luis; Puertas, Jerónimo; Salsón, Santiago; Petazzi, Alberto
2016-04-01
In this paper we present a new methodology to compute rainfall fields, including the quantification of prediction uncertainties, using raingauge network data. The proposed methodology comprises two steps. Firstly, the ordinary kriging technique is used to determine the estimated rainfall depth at every point of the study area. Then multiple equi-probable error fields, which comprise both interpolation and measuring uncertainties, are added to the kriged field, resulting in multiple rainfall predictions. To compute these error fields, first the standard deviation of the kriging estimation is determined following the cross-validation based procedure described in Delrieu et al. (2014). Then, the standard deviation field is sampled using non-conditioned Gaussian random fields. The proposed methodology was applied to study 7 rain events in a 60x60 km area of the west coast of Galicia, in the Northwest of Spain. Due to its location at the junction between tropical and polar regions, the study area suffers from frequent intense rainfalls characterized by great variability in both space and time. Rainfall data from the tipping bucket raingauge network operated by MeteoGalicia were used to estimate the rainfall fields using the proposed methodology. The obtained predictions were then validated using rainfall data from 3 additional rain gauges installed within the CAPRI project (Probabilistic flood prediction with high resolution hydrologic models from radar rainfall estimates, funded by the Spanish Ministry of Economy and Competitiveness. Reference CGL2013-46245-R.). Results show that both the mean hyetographs and the peak intensities are correctly predicted. The computed hyetographs present a good fit to the experimental data and most of the measured values fall within the 95% confidence intervals. Also, most of the experimental values outside the confidence bounds correspond to time periods of low rainfall depths, where the inaccuracy of the measuring devices
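The second step of the methodology above, turning a single kriged field plus its estimation standard deviation into an ensemble of equi-probable rainfall fields, can be sketched as follows. This is a simplified illustration under stated assumptions: the grids and values are hypothetical, and the error field here is cell-wise independent Gaussian noise, whereas the paper samples spatially correlated, non-conditioned Gaussian random fields.

```python
import numpy as np

# hypothetical kriged rainfall depth (mm) and kriging std. dev. on a 2x2 grid
kriged = np.array([[10.0, 12.0],
                   [ 8.0,  9.5]])
sigma  = np.array([[ 1.5,  2.0],
                   [ 1.0,  1.2]])

rng = np.random.default_rng(42)
n_real = 500

# one equi-probable realization = kriged field + sigma-scaled error field
# (independent noise per cell stands in for the correlated random fields)
realizations = kriged + sigma * rng.standard_normal((n_real, 2, 2))

# empirical 95% confidence bounds per grid cell from the ensemble
lo, hi = np.percentile(realizations, [2.5, 97.5], axis=0)
```

Sampling spatially correlated fields instead (e.g. by filtering white noise with the fitted covariance model) changes only how the `standard_normal` draw is generated; the ensemble and confidence-bound logic stays the same.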
Predicting species' maximum dispersal distances from simple plant traits.
Tamme, Riin; Götzenberger, Lars; Zobel, Martin; Bullock, James M; Hooftman, Danny A P; Kaasik, Ants; Pärtel, Meelis
2014-02-01
Many studies have shown plant species' dispersal distances to be strongly related to life-history traits, but how well different traits can predict dispersal distances is not yet known. We used cross-validation techniques and a global data set (576 plant species) to measure the predictive power of simple plant traits to estimate species' maximum dispersal distances. Including dispersal syndrome (wind, animal, ant, ballistic, and no special syndrome), growth form (tree, shrub, herb), seed mass, seed release height, and terminal velocity in different combinations as explanatory variables, we constructed models to explain variation in measured maximum dispersal distances and evaluated their power to predict maximum dispersal distances. Predictions are more accurate, but also limited to a particular set of species, if data on more specific traits, such as terminal velocity, are available. The best model (R² = 0.60) included dispersal syndrome, growth form, and terminal velocity as fixed effects. Reasonable predictions of maximum dispersal distance (R² = 0.53) are also possible when using only the simplest and most commonly measured traits: dispersal syndrome and growth form, together with species taxonomy data. We provide a function (dispeRsal) to be run in the software package R. This enables researchers to estimate maximum dispersal distances with confidence intervals for plant species using measured traits as predictors. Easily obtainable trait data, such as dispersal syndrome (inferred from seed morphology) and growth form, enable predictions to be made for a large number of species.
Glassmire, David M; Toofanian Ross, Parnian; Kinney, Dominique I; Nitch, Stephen R
2016-06-01
Two studies were conducted to identify and cross-validate cutoff scores on the Wechsler Adult Intelligence Scale-Fourth Edition Digit Span-based embedded performance validity (PV) measures for individuals with schizophrenia spectrum disorders. In Study 1, normative scores were identified on Digit Span-embedded PV measures among a sample of patients (n = 84) with schizophrenia spectrum diagnoses who had no known incentive to perform poorly and who put forth valid effort on external PV tests. Previously identified cutoff scores resulted in unacceptable false positive rates and lower cutoff scores were adopted to maintain specificity levels ≥90%. In Study 2, the revised cutoff scores were cross-validated within a sample of schizophrenia spectrum patients (n = 96) committed as incompetent to stand trial. Performance on Digit Span PV measures was significantly related to Full Scale IQ in both studies, indicating the need to consider the intellectual functioning of examinees with psychotic spectrum disorders when interpreting scores on Digit Span PV measures. © The Author(s) 2015.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Chip Multithreaded Consistency Model
Zu-Song Li; Dan-Dan Huan; Wei-Wu Hu; Zhi-Min Tang
2008-01-01
Multithreading is the developing trend in high-performance processors. The memory consistency model is essential to the correctness, performance, and complexity of a multithreaded processor. This paper proposes a chip multithreaded consistency model adapted to multithreaded processors. The restriction imposed on memory event ordering by chip multithreaded consistency is presented and formalized. Using the idea of the critical cycle introduced by Wei-Wu Hu, we prove that the proposed chip multithreaded consistency model satisfies the criterion of correct execution of the sequential consistency model. The chip multithreaded consistency model provides a way of achieving higher performance than the sequential consistency model while ensuring software compatibility: the execution result in a multithreaded processor is the same as in a uniprocessor. The implementation strategy of the chip multithreaded consistency model in the Godson-2 SMT processor is also proposed. The Godson-2 SMT processor supports the chip multithreaded consistency model correctly through an exception scheme based on the sequential memory access queue of each thread.
Tvedskov, T F; Meretoja, T J; Jensen, M B
2014-01-01
BACKGROUND: We cross-validated three existing models for the prediction of non-sentinel node metastases in patients with micrometastases or isolated tumor cells (ITC) in the sentinel node, developed in Danish and Finnish cohorts of breast cancer patients, to find the best model to identify patients … who might benefit from further axillary treatment. MATERIAL AND METHOD: Based on 484 Finnish breast cancer patients with micrometastases or ITC in the sentinel node, a model was developed for the prediction of non-sentinel node metastases. Likewise, two separate models were developed in 1577 … metastases, while less than 1% was identified by the Finnish model. In contrast, the Finnish model predicted a much larger proportion of patients as being in the low-risk group, with less than 10% risk of non-sentinel node metastases. CONCLUSION: The Danish model for micrometastases worked well in predicting high…
Zhou, Xiao Dong; Pederson, Larry R.; Templeton, Jared W.; Stevenson, Jeffry W.
2009-12-09
The aim of this paper is to address three issues in solid oxide fuel cells: (1) cross-validation of the polarization of a single cell measured using both dc and ac approaches, (2) the precise determination of the total area-specific resistance (ASR), and (3) understanding cathode polarization with LSCF cathodes. The ASR of a solid oxide fuel cell is a dynamic property, meaning that it changes with current density. The ASR measured using ac impedance spectroscopy (the low-frequency intercept with the real Z′ axis of the ac impedance spectrum) matches that measured from a dc i-V sweep (the tangent of the dc i-V curve). Due to the dynamic nature of the ASR, we found that an ac impedance spectrum measured under open circuit voltage or on a half cell may not represent cathode performance under real operating conditions, particularly at high current density. In this work, the electrode polarization was governed by the cathode activation polarization; the anode contribution was negligible.
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus … on second-order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist, in the cases considered here, of Gaussian derivatives taken at several scales and/or having different derivative orders.
Valencia Mauro E
2007-08-01
Abstract Background The study of body composition in specific populations by techniques such as bio-impedance analysis (BIA) requires validation based on standard reference methods. The aim of this study was to develop and cross-validate a predictive equation for bioelectrical impedance using air displacement plethysmography (ADP) as the standard method to measure body composition in Mexican adult men and women. Methods This study included 155 male and female subjects from northern Mexico, 20–50 years of age, from low, middle, and upper income levels. Body composition was measured by ADP. Body weight (BW, kg) and height (Ht, cm) were obtained by standard anthropometric techniques. Resistance, R (ohms), and reactance, Xc (ohms), were also measured. A random-split method was used to obtain two samples: one was used to derive the equation by the "all possible regressions" procedure, and it was cross-validated in the other sample to test predicted versus measured values of fat-free mass (FFM). Results and Discussion The final model was: FFM (kg) = 0.7374 * (Ht²/R) + 0.1763 * (BW) - 0.1773 * (Age) + 0.1198 * (Xc) - 2.4658. R² was 0.97; the square root of the mean square error (SRMSE) was 1.99 kg, and the pure error (PE) was 2.96. There was no difference between FFM predicted by the new equation (48.57 ± 10.9 kg) and that measured by ADP (48.43 ± 11.3 kg). The new equation did not differ from the line of identity, had a high R² and a low SRMSE, and showed no significant bias (0.87 ± 2.84 kg). Conclusion The new bioelectrical impedance equation based on the two-compartment model (2C) was accurate, precise, and free of bias. This equation can be used to assess body composition and nutritional status in populations similar in anthropometric and physical characteristics to this sample.
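The final prediction equation reported in the abstract can be applied directly. A minimal sketch; the function name and the example subject's measurements are hypothetical, and the coefficients are exactly those stated above:

```python
def ffm_kg(height_cm, resistance_ohm, weight_kg, age_yr, reactance_ohm):
    """Fat-free mass (kg) from the BIA equation reported in the abstract:
    FFM = 0.7374*(Ht^2/R) + 0.1763*BW - 0.1773*Age + 0.1198*Xc - 2.4658
    """
    return (0.7374 * (height_cm ** 2 / resistance_ohm)  # impedance index term
            + 0.1763 * weight_kg
            - 0.1773 * age_yr
            + 0.1198 * reactance_ohm
            - 2.4658)

# hypothetical subject: 170 cm, R = 500 ohm, 70 kg, 30 y, Xc = 50 ohm
print(round(ffm_kg(170, 500, 70, 30, 50), 2))  # → 53.17
```

Note the dominant role of the impedance index Ht²/R, the usual primary predictor in two-compartment BIA equations; body fat would then follow as BW minus FFM.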
Pradhan, Biswajeet
2010-05-01
This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis in the Penang, Cameron, and Selangor areas in Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map land cover and vegetation index, respectively. Maps of topography, soil type, lineaments, and land cover were constructed from the spatial datasets. Ten factors which influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, land cover, rainfall precipitation, and normalized difference vegetation index (NDVI), were extracted from the spatial database, and the logistic regression coefficient of each factor was computed. Then the landslide hazard was analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also using the logistic regression coefficients calculated from each of the other two areas (nine hazard maps in all) as a cross-validation of the model. For verification of the model, the results of the analyses were then compared with the field-verified landslide locations. Among the three cases of applying the logistic regression coefficients in the same study area, the case of Selangor based on the Selangor logistic regression coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest accuracy (86%). Similarly, among the six cases from the cross-application of logistic regression coefficients to the other two areas, the case of Selangor based on the logistic coefficients of Cameron showed the highest (90%) prediction accuracy, whereas the case of Penang based on the Selangor logistic regression coefficients showed the lowest accuracy (79%). Qualitatively, the cross
Xu, Zheng; Schrama, Ernst J. O.; van der Wal, Wouter; van den Broeke, Michiel; Enderlin, Ellyn M.
2016-04-01
In this study, we use satellite gravimetry data from the Gravity Recovery and Climate Experiment (GRACE) to estimate regional mass change of the Greenland ice sheet (GrIS) and neighboring glaciated regions using a least squares inversion approach. We also consider results from the input-output method (IOM). The IOM quantifies the difference between the mass input and output of the GrIS by studying the surface mass balance (SMB) and the ice discharge (D). We use the Regional Atmospheric Climate Model version 2.3 (RACMO2.3) to model the SMB and derive the ice discharge from 12 years of high-precision ice velocity and thickness surveys. We use a simulation model to quantify and correct for GRACE approximation errors in mass change between different subregions of the GrIS, and investigate the reliability of pre-1990s ice discharge estimates, which are based on the modeled runoff. We find that the difference between the IOM and our improved GRACE mass change estimates is reduced in terms of the long-term mass change when using a reference discharge derived from runoff estimates in several subareas. In most regions our GRACE and IOM solutions are consistent with other studies, but differences remain in the northwestern GrIS. We validate the GRACE mass balance in that region by considering several different GIA models and mass change estimates derived from data obtained by the Ice, Cloud and land Elevation Satellite (ICESat). We conclude that the approximated mass balance between GRACE and IOM is consistent in most GrIS regions. The difference in the northwest is likely due to underestimated uncertainties in the IOM solutions.
Consistent model driven architecture
Niepostyn, Stanisław J.
2015-09-01
The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Subsequently the verification of consistency of these diagrams is needed in order to identify errors in requirements at the early stage of the development process. The verification of consistency is difficult due to a semi-formal nature of UML diagrams. We propose automatic verification of consistency of the series of UML diagrams originating from abstract models implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to generate automatically complete workflow applications from consistent and complete models developed from abstract models (e.g. Business Context Diagram). Therefore, our method can be used to check practicability (feasibility) of software architecture models.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th…
No consistent bimetric gravity?
Deser, S; Waldron, A
2013-01-01
We discuss the prospects for a consistent, nonlinear, partially massless (PM), gauge symmetry of bimetric gravity (BMG). Just as for single metric massive gravity, ultimate consistency of both BMG and the putative PM BMG theory relies crucially on this gauge symmetry. We argue, however, that it does not exist.
Jacob A. Laros
2004-04-01
The stability of the factor structure of a 66-item scale of organizational climate was investigated. A sample of 61,349 respondents was randomly divided in two parts, the first to identify the factor structure, the second to verify its replicability. The criterion used to identify the number of factors resulted in the retention of seven factors. To obtain a satisfactory factor structure, 23 items were excluded. A second factor analysis of the remaining 43 items indicated seven factors explaining 63.4% of the variance. A second-order factor analysis revealed one general factor accounting for 55.5% of the variance. To investigate the stability of the factor structure, the same procedures and criteria were used with the second sample. The results indicate a high stability of the hierarchical factor structure of the scale of organizational climate, consisting of seven first-order factors and one general second-order factor.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Hiscock, S.
1986-07-01
The importance of consistency in coal quality has become of increasing significance recently, with the current trend towards using coal from a range of sources. A significant development has been the swing in responsibilities for coal quality. The increasing demand for consistency in quality has led to a re-examination of where in the trade and transport chain the quality should be assessed and where further upgrading of inspection and preparation facilities are required. Changes are in progress throughout the whole coal transport chain which will improve consistency of delivered coal quality. These include installation of beneficiation plant at coal mines, export terminals, and on the premises of end users. It is suggested that one of the keys to success for the coal industry will be the ability to provide coal of a consistent quality.
Kent, A
1996-01-01
In the consistent histories formulation of quantum theory, the probabilistic predictions and retrodictions made from observed data depend on the choice of a consistent set. We show that this freedom allows the formalism to retrodict several contradictory propositions which correspond to orthogonal commuting projections and which all have probability one. We also show that the formalism makes contradictory probability one predictions when applied to generalised time-symmetric quantum mechanics.
Minimum Grading, Maximum Learning
Carey, Theodore; Carifio, James
2011-01-01
Fair and effective schools should assign grades that align with clear and consistent evidence of student performance (Wormeli, 2006), but when a student's performance is inconsistent, traditional grading practices can prove inadequate. Understanding this, increasing numbers of schools have been experimenting with the practice of assigning minimum…
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
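The robustness property the abstract appeals to can be illustrated numerically: correntropy is the mean of a Gaussian kernel applied to the prediction errors, so a single grossly mislabeled sample saturates toward zero contribution instead of dominating the objective the way it dominates squared loss. A minimal sketch under stated assumptions: the kernel bandwidth sigma and the toy label vectors are hypothetical, and this computes the measure only, not the full regularized learning algorithm.

```python
import numpy as np

def correntropy(y_true, y_pred, sigma=1.0):
    """Empirical correntropy between label vectors: the mean Gaussian
    kernel of the errors. Large (outlier) errors contribute ~0."""
    e = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return np.mean(np.exp(-e ** 2 / (2 * sigma ** 2)))

y_true = np.array([1.0, 1.0, -1.0, -1.0])
clean  = np.array([0.9, 1.1, -0.8, -1.2])   # small errors everywhere
noisy  = np.array([0.9, 1.1, -0.8,  9.0])   # one grossly mislabeled point

# squared loss is dominated by the single outlier ...
mse_clean = np.mean((y_true - clean) ** 2)
mse_noisy = np.mean((y_true - noisy) ** 2)

# ... while correntropy (to be maximized) degrades only gently
c_clean = correntropy(y_true, clean)
c_noisy = correntropy(y_true, noisy)
```

Maximizing correntropy therefore behaves like a sample-adaptive weighting: well-fit samples act roughly like a squared loss, while outliers are effectively down-weighted.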
Sjoerds, Zsuzsika; Dietrich, Anja; Deserno, Lorenz; de Wit, Sanne; Villringer, Arno; Heinze, Hans-Jochen; Schlagenhauf, Florian; Horstmann, Annette
2016-01-01
Instrumental learning and decision-making rely on two parallel systems: a goal-directed and a habitual system. In the past decade, several paradigms have been developed to study these systems in animals and humans by means of e.g., overtraining, devaluation procedures and sequential decision-making. These different paradigms are thought to measure the same constructs, but cross-validation has rarely been investigated. In this study we compared two widely used paradigms that assess aspects of goal-directed and habitual behavior. We correlated parameters from a two-step sequential decision-making task that assesses model-based (MB) and model-free (MF) learning with a slips-of-action paradigm that assesses the ability to suppress cue-triggered, learnt responses when the outcome has been devalued and is therefore no longer desirable. MB control during the two-step task showed a very moderately positive correlation with goal-directed devaluation sensitivity, whereas MF control did not show any associations. Interestingly, parameter estimates of MB and goal-directed behavior in the two tasks were positively correlated with higher-order cognitive measures (e.g., visual short-term memory). These cognitive measures seemed to (at least partly) mediate the association between MB control during sequential decision-making and goal-directed behavior after instructed devaluation. This study provides moderate support for a common framework to describe the propensity towards goal-directed behavior as measured with two frequently used tasks. However, we have to caution that the amount of shared variance between the goal-directed and MB system in both tasks was rather low, suggesting that each task does also pick up distinct aspects of goal-directed behavior. Further investigation of the commonalities and differences between the MF and habit systems as measured with these, and other, tasks is needed. Also, a follow-up cross-validation on the neural systems driving these constructs
Prapamontol, Tippawan; Sutan, Kunrunya; Laoyang, Sompong; Hongsibsong, Surat; Lee, Grace; Yano, Yukiko; Hunter, Ronald Elton; Ryan, P Barry; Barr, Dana Boyd; Panuwet, Parinya
2014-01-01
We report two analytical methods for the measurement of dialkylphosphate (DAP) metabolites of organophosphate pesticides in human urine. These methods were independently developed/modified and implemented in two separate laboratories and cross-validated. The aim was to develop simple, cost-effective, and reliable methods that could use available resources and sample matrices in Thailand and the United States. While several methods already exist, we found that direct application of these methods required modification of sample preparation and chromatographic conditions to render accurate, reliable data. The problems encountered with existing methods were attributable to urinary matrix interferences and to differences in the pH of urine samples and reagents used during the extraction and derivatization processes. Thus, we provide information on key parameters that require attention during method modification and execution and that affect the ruggedness of the methods. The methods presented here employ gas chromatography (GC) coupled with either flame photometric detection (FPD) or electron impact ionization-mass spectrometry (EI-MS) with isotope dilution quantification. The limits of detection ranged from 0.10 ng/mL urine to 2.5 ng/mL urine (for GC-FPD), while the limits of quantification ranged from 0.25 ng/mL urine to 2.5 ng/mL urine (for GC-MS), for all six common DAP metabolites (i.e., dimethylphosphate, dimethylthiophosphate, dimethyldithiophosphate, diethylphosphate, diethylthiophosphate, and diethyldithiophosphate). Each method showed a relative recovery range of 94-119% (for GC-FPD) and 92-103% (for GC-MS), and relative standard deviations (RSD) of less than 20%. Cross-validation was performed on the same set of urine samples (n = 46) collected from pregnant women residing in the agricultural areas of northern Thailand. The results from split-sample analysis from both laboratories agreed well for each metabolite, suggesting that each method can produce
Zsuzsika Sjoerds
2016-12-01
Instrumental learning and decision-making rely on two parallel systems: a goal-directed and a habitual system. In the past decade, several paradigms have been developed to study these systems in animals and humans by means of, e.g., overtraining, devaluation procedures, and sequential decision-making. These different paradigms are thought to measure the same constructs, but cross-validation has rarely been investigated. In this study we compared two widely used paradigms that assess aspects of goal-directed and habitual behavior. We correlated parameters from a two-step sequential decision-making task that assesses model-based and model-free learning with a slips-of-action paradigm that assesses the ability to suppress cue-triggered, learnt responses when the outcome has been devalued and is therefore no longer desirable. Model-based control during the two-step task showed a moderate positive correlation with goal-directed devaluation sensitivity, whereas model-free control did not. Interestingly, parameter estimates of model-based and goal-directed behavior in the two tasks were positively correlated with higher-order cognitive measures (e.g., visual short-term memory). These cognitive measures seemed to (at least partly) mediate the association between model-based control during sequential decision-making and goal-directed behavior after instructed devaluation. This study provides moderate support for a common framework to describe the propensity towards goal-directed behavior as measured with two frequently used tasks. However, we have to caution that the amount of shared variance between the goal-directed and model-based system in both tasks was rather low, suggesting that each task also picks up distinct aspects of goal-directed behavior. Further investigation of the commonalities and differences between the model-free and habit systems as measured with these, and other, tasks is needed. Also, a follow-up cross-validation on the neural
Plassard, Andrew J; Kelly, Patrick D; Asman, Andrew J; Kang, Hakmook; Patel, Mayur B; Landman, Bennett A
2015-03-20
Medical imaging plays a key role in guiding treatment of traumatic brain injury (TBI) and for diagnosing intracranial hemorrhage; most commonly rapid computed tomography (CT) imaging is performed. Outcomes for patients with TBI are variable and difficult to predict upon hospital admission. Quantitative outcome scales (e.g., the Marshall classification) have been proposed to grade TBI severity on CT, but such measures have had relatively low value in staging patients by prognosis. Herein, we examine a cohort of 1,003 subjects admitted for TBI and imaged clinically to identify potential prognostic metrics using a "big data" paradigm. For all patients, a brain scan was segmented with multi-atlas labeling, and intensity/volume/texture features were computed in a localized manner. In a 10-fold cross-validation approach, the explanatory value of the image-derived features is assessed for length of hospital stay (days), discharge disposition (five-point scale from death to return home), and the Rancho Los Amigos functional outcome score (Rancho Score). Image-derived features increased the predictive R² to 0.38 (from 0.18) for length of stay, to 0.51 (from 0.4) for discharge disposition, and to 0.31 (from 0.16) for Rancho Score (over models consisting only of non-imaging admission metrics, but including positive/negative radiological CT findings). This study demonstrates that high-volume retrospective analysis of clinical imaging data can reveal imaging signatures with prognostic value. These targets are suited for follow-up validation and represent targets for future feature selection efforts. Moreover, the increase in prognostic value would improve staging for intervention assessment and provide more reliable guidance for patients.
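The 10-fold cross-validation used above to assess explanatory value can be sketched generically. The sketch below fits a simple linear model on each training split and pools out-of-fold squared errors into a cross-validated R²; the data and the one-predictor model are synthetic stand-ins, not the study's imaging features or regression models:

```python
import random

def cross_validated_r2(xs, ys, k=10, seed=0):
    """Estimate out-of-sample R^2 of a simple linear regression via k-fold CV."""
    n = len(xs)
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    y_mean = sum(ys) / n
    ss_tot = sum((y - y_mean) ** 2 for y in ys)
    ss_res = 0.0
    for fold in folds:
        fold_set = set(fold)
        train = [i for i in idx if i not in fold_set]
        # Fit y = a + b*x by least squares on the training folds only.
        mx = sum(xs[i] for i in train) / len(train)
        my = sum(ys[i] for i in train) / len(train)
        sxx = sum((xs[i] - mx) ** 2 for i in train)
        b = sum((xs[i] - mx) * (ys[i] - my) for i in train) / sxx
        a = my - b * mx
        # Accumulate squared prediction error on the held-out fold.
        ss_res += sum((ys[i] - (a + b * xs[i])) ** 2 for i in fold)
    return 1.0 - ss_res / ss_tot

# Noisy linear data: the cross-validated R^2 should be close to 1.
data_x = [float(i) for i in range(50)]
data_y = [2.0 * x + 1.0 + ((i * 37) % 11 - 5) * 0.1 for i, x in enumerate(data_x)]
print(round(cross_validated_r2(data_x, data_y, k=10), 3))
```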
Network Consistent Data Association.
Chakraborty, Anirban; Das, Abir; Roy-Chowdhury, Amit K
2016-09-01
Existing data association techniques mostly focus on matching pairs of data-point sets and then repeating this process along space-time to achieve long-term correspondences. However, in many problems such as person re-identification, a set of data-points may be observed at multiple spatio-temporal locations and/or by multiple agents in a network, and simply combining the local pairwise association results between sets of data-points often leads to inconsistencies over the global space-time horizons. In this paper, we propose a novel Network Consistent Data Association (NCDA) framework, formulated as an optimization problem, that not only maintains consistency in association results across the network but also improves the pairwise data association accuracies. The proposed NCDA can be solved as a binary integer program leading to a globally optimal solution and is capable of handling the challenging data-association scenario where the number of data-points varies across different sets of instances in the network. We also present an online implementation of the NCDA method that can dynamically associate new observations to already observed data-points in an iterative fashion, while maintaining network consistency. We have tested both the batch and the online NCDA in two application areas, person re-identification and spatio-temporal cell tracking, and observed consistent and highly accurate data association results in all cases.
Tulsky, David S; Price, Larry R
2003-06-01
During the standardization of the Wechsler Adult Intelligence Scale (3rd ed.; WAIS-III) and the Wechsler Memory Scale (3rd ed.; WMS-III) the participants in the normative study completed both scales. This "co-norming" methodology set the stage for full integration of the 2 tests and the development of an expanded structure of cognitive functioning. Until now, however, the WAIS-III and WMS-III had not been examined together in a factor analytic study. This article presents a series of confirmatory factor analyses to determine the joint WAIS-III and WMS-III factor structure. Using a structural equation modeling approach, a 6-factor model that included verbal, perceptual, processing speed, working memory, auditory memory, and visual memory constructs provided the best model fit to the data. Allowing select subtests to load simultaneously on 2 factors improved model fit and indicated that some subtests are multifaceted. The results were then replicated in a large cross-validation sample (N = 858).
Hilsabeck, Robin C; Thompson, Matthew D; Irby, James W; Adams, Russell L; Scott, James G; Gouvier, Wm Drew
2003-01-01
The Wechsler Memory Scale-Revised (WMS-R) malingering indices proposed by Mittenberg, Azrin, Millsaps, and Heilbronner [Psychol Assess 5 (1993) 34.] were partially cross-validated in a sample of 200 nonlitigants. Nine diagnostic categories were examined, including participants with traumatic brain injury (TBI), brain tumor, stroke/vascular, senile dementia of the Alzheimer's type (SDAT), epilepsy, depression/anxiety, medical problems, and no diagnosis. Results showed that the discriminant function using WMS-R subtests misclassified only 6.5% of the sample as malingering, with significantly higher misclassification rates of SDAT and stroke/vascular groups. The General Memory Index-Attention/Concentration Index (GMI-ACI) difference score misclassified only 8.5% of the sample as malingering when a difference score of greater than 25 points was used as the cutoff criterion. No diagnostic group was significantly more likely to be misclassified. Results support the utility of the GMI-ACI difference score, as well as the WMS-R subtest discriminant function score, in detecting malingering.
Metwalli, Nader S; Hu, Xiaoping P; Carew, John D
2010-09-01
Q-ball imaging (QBI) is a high angular resolution diffusion-weighted imaging (HARDI) technique for reconstructing the orientation distribution function (ODF). Some form of smoothing or regularization is typically required in the ODF reconstruction from low signal-to-noise ratio HARDI data. The amount of smoothing or regularization is usually set a priori at the discretion of the investigator. In this article, we apply an adaptive and objective means of smoothing the raw HARDI data using the smoothing splines on the sphere method with generalized cross-validation (GCV) to estimate the diffusivity profile in each voxel. Subsequently, we reconstruct the ODF, from the smoothed data, based on the Funk-Radon transform (FRT) used in QBI. The spline method was applied to both simulated data and in vivo human brain data. Simulated data show that the smoothing splines on the sphere method with GCV smoothing reduces the mean squared error in estimates of the ODF as compared with the standard analytical QBI approach. The human data demonstrate the utility of the method for estimating smooth ODFs.
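The generalized cross-validation (GCV) criterion mentioned above applies to any linear smoother with hat matrix H, scoring the fit as n·RSS/(n − tr H)². The sketch below uses GCV to pick the penalty of a ridge-type polynomial smoother; the basis, data, and penalty grid are hypothetical stand-ins, not the spherical-spline setup of the paper:

```python
import numpy as np

def gcv_score(y, H):
    """Generalized cross-validation score for a linear smoother y_hat = H @ y."""
    n = len(y)
    resid = y - H @ y
    return n * float(resid @ resid) / (n - np.trace(H)) ** 2

def ridge_smoother(X, lam):
    """Hat matrix of ridge regression on basis X with penalty lam."""
    p = X.shape[1]
    return X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 60)
X = np.vander(x, 8, increasing=True)   # degree-7 polynomial basis (illustrative)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(60)

# Objective choice of the smoothing parameter: minimize GCV over a grid.
lams = [10.0 ** k for k in range(-8, 3)]
best = min(lams, key=lambda lam: gcv_score(y, ridge_smoother(X, lam)))
print("GCV-selected lambda:", best)
```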
Barette, Julia; Velyvis, Algirdas; Religa, Tomasz L; Korzhnev, Dmitry M; Kay, Lewis E
2012-06-14
We have recently reported the atomic resolution structure of a low populated and transiently formed on-pathway folding intermediate of the FF domain from human HYPA/FBP11 [Korzhnev, D. M.; Religa, T. L.; Banachewicz, W.; Fersht, A. R.; Kay, L.E. Science 2011, 329, 1312-1316]. The structure was determined on the basis of backbone chemical shift and bond vector orientation restraints of the invisible intermediate state measured using relaxation dispersion nuclear magnetic resonance (NMR) spectroscopy that were subsequently input into the database structure determination program, CS-Rosetta. As a cross-validation of the structure so produced, we present here the solution structure of a mimic of the folding intermediate that is highly populated in solution, obtained from the wild-type domain by mutagenesis that destabilizes the native state. The relaxation dispersion/CS-Rosetta structures of the intermediate are within 2 Å of those of the mimic, with the nonnative interactions in the intermediate also observed in the mimic. This strongly confirms the structure of the FF domain folding intermediate, in particular, and validates the use of relaxation dispersion derived restraints in structural studies of invisible excited states, in general.
van Dam, Debora; Ehring, Thomas; Vedel, Ellen; Emmelkamp, Paul M G
2013-01-01
This study aimed to cross-validate earlier findings regarding the diagnostic efficiency of a modified version of the Primary Care Posttraumatic Stress Disorder (PC-PTSD) screening questionnaire (A. Prins, P. Ouimette, R. Kimerling, R. P. Cameron, D. S. Hugelshofer, J. Shaw-Hegwer, et al., 2004). The PC-PTSD is a four-item screening questionnaire for Posttraumatic Stress Disorder (PTSD). Based on former research, we adapted the PC-PTSD for use among civilian substance use disorder (SUD) patients (D. Van Dam, T. Ehring, E. Vedel, & P. M. G. Emmelkamp, 2010). This version will be referred to as the Jellinek-PTSD (J-PTSD) screening questionnaire. Results showed a high sensitivity (.87), specificity (.75), and overall efficiency (.77) of the J-PTSD in detecting PTSD when using a cutoff score of 2. This confirms findings in former research, and suggests that the J-PTSD is a useful screening instrument for PTSD within a civilian SUD population. Both PTSD and SUD are severe and disabling disorders causing great psychological distress. An early recognition of PTSD among SUD patients makes it possible to address PTSD symptoms in time, which may ultimately lead to an improvement of symptoms in this complex patient group. Copyright © 2013 Elsevier Inc. All rights reserved.
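The diagnostic-efficiency figures quoted above follow directly from a confusion matrix at a given cutoff. A minimal sketch; the counts below are invented for illustration and are not the study's data:

```python
def screening_stats(tp, fn, fp, tn):
    """Sensitivity, specificity, and overall efficiency of a screening cutoff."""
    sensitivity = tp / (tp + fn)                  # true cases correctly flagged
    specificity = tn / (tn + fp)                  # non-cases correctly passed
    efficiency = (tp + tn) / (tp + fn + fp + tn)  # overall proportion correct
    return sensitivity, specificity, efficiency

# Hypothetical confusion-matrix counts at a cutoff score of 2.
sens, spec, eff = screening_stats(tp=26, fn=4, fp=20, tn=60)
print(round(sens, 2), round(spec, 2), round(eff, 2))
```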
Wang, Hui; Zhu, Junfeng; Reuter, Martin; Vinke, Louis N; Yendiki, Anastasia; Boas, David A; Fischl, Bruce; Akkin, Taner
2014-10-15
We established a strategy to perform cross-validation of serial optical coherence scanner imaging (SOCS) and diffusion tensor imaging (DTI) on a postmortem human medulla. Following DTI, the sample was serially scanned by SOCS, which integrates a vibratome slicer and a multi-contrast optical coherence tomography rig for large-scale three-dimensional imaging at microscopic resolution. The DTI dataset was registered to the SOCS space. An average correlation coefficient of 0.9 was found between the co-registered fiber maps constructed by fractional anisotropy and retardance contrasts. Pixelwise comparison of fiber orientations demonstrated good agreement between the DTI and SOCS measures. Details of the comparison were studied in regions exhibiting a variety of fiber organizations. DTI estimated the preferential orientation of small fiber tracts; however, it didn't capture their complex patterns as SOCS did. In terms of resolution and imaging depth, SOCS and DTI complement each other, and open new avenues for cross-modality investigations of the brain.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector for mitigating the intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
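The maximum likelihood baseline can be illustrated in miniature with an exhaustive ML sequence detector for a short block over an ISI channel (practical near-ML detectors search only a subset of candidate sequences). The channel taps, symbols, and noise values below are invented for illustration:

```python
from itertools import product

def channel_output(bits, h):
    """Convolve +/-1 symbols with channel impulse response h (introduces ISI)."""
    out = []
    for n in range(len(bits)):
        out.append(sum(h[k] * bits[n - k] for k in range(len(h)) if n - k >= 0))
    return out

def ml_detect(received, h, length):
    """Exhaustive maximum-likelihood detection: pick the +/-1 sequence whose
    noiseless channel output is closest (in squared error) to the received signal."""
    best, best_cost = None, float("inf")
    for cand in product([-1, 1], repeat=length):
        ref = channel_output(cand, h)
        cost = sum((r - s) ** 2 for r, s in zip(received, ref))
        if cost < best_cost:
            best, best_cost = cand, cost
    return list(best)

h = [1.0, 0.5]                       # assumed bandlimited channel with one ISI tap
tx = [1, -1, -1, 1, 1, -1]
noise = [0.1, -0.2, 0.15, -0.1, 0.05, 0.2]
rx = [y + n for y, n in zip(channel_output(tx, h), noise)]
print(ml_detect(rx, h, len(tx)))
```

With mild noise, the exhaustive search over the 64 candidate sequences recovers the transmitted symbols exactly.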
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
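The contrast between the two approaches can be sketched on the textbook die example: classic MaxEnt solves for a single distribution given an exact mean constraint, while the generalized view pushes a Gaussian uncertainty on the constraint value through that same map, yielding a distribution over MaxEnt distributions. This is an illustrative sketch, not the paper's calculation:

```python
import math, random

def maxent_die(mean, lo=-50.0, hi=50.0):
    """Classic MaxEnt distribution on {1..6} with E[X] = mean, found by bisection
    on the Lagrange multiplier lambda (p_i proportional to exp(-lambda * i))."""
    def moment(lam):
        w = [math.exp(-lam * i) for i in range(1, 7)]
        return sum(i * wi for i, wi in zip(range(1, 7), w)) / sum(w)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if moment(mid) > mean:
            lo = mid            # moment decreases in lambda: need larger lambda
        else:
            hi = mid
    w = [math.exp(-lo * i) for i in range(1, 7)]
    z = sum(w)
    return [wi / z for wi in w]

# Classic MaxEnt: an exact mean of 3.5 gives the uniform distribution.
p = maxent_die(3.5)
print([round(pi, 3) for pi in p])

# Generalized view (sketch): treat the constraint as uncertain (Gaussian around
# 3.5) and push samples through the MaxEnt map, inducing uncertainty on each p_i.
rng = random.Random(0)
samples = [maxent_die(min(5.9, max(1.1, rng.gauss(3.5, 0.2)))) for _ in range(100)]
p1_values = [s[0] for s in samples]   # induced uncertainty on P(X = 1)
print(round(sum(p1_values) / len(p1_values), 3))
```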
Thomsen, Christa; Nielsen, Anne Ellerup
2006-01-01
This chapter first outlines theory and literature on CSR and Stakeholder Relations, focusing on the different perspectives and the contextual and dynamic character of the CSR concept. CSR reporting challenges are discussed and a model of analysis is proposed. Next, our paper presents the results of a case study showing that companies use different and not necessarily consistent strategies for reporting on CSR. Finally, the implications for managerial practice are discussed. The chapter concludes by highlighting the value and awareness of the discourse and the discourse types adopted in the reporting material. By implementing consistent discourse strategies that interact according to a well-defined pattern or order, it is possible to communicate a strong social commitment on the one hand, and to take into consideration the expectations of the shareholders and the other stakeholders on the other.
A Magnetic Consistency Relation
Jain, Rajeev Kumar
2012-01-01
If cosmic magnetic fields are indeed produced during inflation, they are likely to be correlated with the scalar metric perturbations that are responsible for the Cosmic Microwave Background anisotropies and Large Scale Structure. Within an archetypical model of inflationary magnetogenesis, we show that there exists a new simple consistency relation for the non-Gaussian cross correlation function of the scalar metric perturbation with two powers of the magnetic field in the squeezed limit where the momentum of the metric perturbation vanishes. We emphasize that such a consistency relation turns out to be extremely useful to test some recent calculations in the literature. Apart from primordial non-Gaussianity induced by the curvature perturbations, such a cross correlation might provide a new observational probe of inflation and can in principle reveal the primordial nature of cosmic magnetic fields.
Consistency in Distributed Systems
Kemme, Bettina; Ramalingam, Ganesan; Schiper, André; Shapiro, Marc; Vaswani, Kapil
2013-01-01
International audience; In distributed systems, there exists a fundamental trade-off between data consistency, availability, and the ability to tolerate failures. This trade-off has significant implications on the design of the entire distributed computing infrastructure such as storage systems, compilers and runtimes, application development frameworks and programming languages. Unfortunately, it also has significant, and poorly understood, implications for the designers and developers of en...
Geometrically Consistent Mesh Modification
Bonito, A.
2010-01-01
A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.
Consistent wind Facilitates Vection
Masaki Ogawa
2011-10-01
We examined whether a consistent haptic cue suggesting forward self-motion facilitated vection. We used a fan with no blades (Dyson AM01) to provide a wind of constant strength and direction (wind speed 6.37 m/s) to the subjects' faces, with the visual stimuli visible through the fan. We used an optic flow of expansion or contraction created by positioning 16,000 dots at random inside a simulated cube (length 20 m) and moving the observer's viewpoint to simulate forward or backward self-motion of 16 m/s. We tested three conditions for fan operation: normal operation, normal operation with the fan reversed (i.e., no wind), and no operation (no wind and no sound). Vection was facilitated by the wind (shorter latency, longer duration, and larger magnitude values) with the expansion stimuli. The fan noise did not facilitate vection. The wind neither facilitated nor inhibited vection with the contraction stimuli, perhaps because a headwind is not consistent with backward self-motion. We speculate that consistency between modalities is a key factor in facilitating vection.
Infanticide and moral consistency.
McMahan, Jeff
2013-05-01
The aim of this essay is to show that there are no easy options for those who are disturbed by the suggestion that infanticide may on occasion be morally permissible. The belief that infanticide is always wrong is doubtfully compatible with a range of widely shared moral beliefs that underlie various commonly accepted practices. Any set of beliefs about the morality of abortion, infanticide and the killing of animals that is internally consistent and even minimally credible will therefore unavoidably contain some beliefs that are counterintuitive.
Serfon, Cedric; The ATLAS collaboration
2016-01-01
One of the biggest challenges in a large-scale data management system is ensuring consistency between the global file catalog and what is physically on all storage elements. To tackle this issue, the Rucio software, which is used by the ATLAS Distributed Data Management system, has been extended to automatically handle lost or unregistered files (aka dark data). This system automatically detects these inconsistencies and takes actions such as recovery or deletion of unneeded files in a central manner. In this talk, we present this system, explain its internals, and give some results.
When is holography consistent?
McInnes, Brett, E-mail: matmcinn@nus.edu.sg [National University of Singapore (Singapore); Ong, Yen Chin, E-mail: yenchin.ong@nordita.org [Nordita, KTH Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden)
2015-09-15
Holographic duality relates two radically different kinds of theory: one with gravity, one without. The very existence of such an equivalence imposes strong consistency conditions which are, in the nature of the case, hard to satisfy. Recently a particularly deep condition of this kind, relating the minimum of a probe brane action to a gravitational bulk action (in a Euclidean formulation), has been recognized; and the question arises as to the circumstances under which it, and its Lorentzian counterpart, is satisfied. We discuss the fact that there are physically interesting situations in which one or both versions might, in principle, not be satisfied. These arise in two distinct circumstances: first, when the bulk is not an Einstein manifold and, second, in the presence of angular momentum. Focusing on the application of holography to the quark–gluon plasma (of the various forms arising in the early Universe and in heavy-ion collisions), we find that these potential violations never actually occur. This suggests that the consistency condition is a “law of physics” expressing a particular aspect of holography.
Consistent quantum measurements
Griffiths, Robert B.
2015-11-01
In response to recent criticisms by Okon and Sudarsky, various aspects of the consistent histories (CH) resolution of the quantum measurement problem(s) are discussed using a simple Stern-Gerlach device, and compared with the alternative approaches to the measurement problem provided by spontaneous localization (GRW), Bohmian mechanics, many worlds, and standard (textbook) quantum mechanics. Among these CH is unique in solving the second measurement problem: inferring from the measurement outcome a property of the measured system at a time before the measurement took place, as is done routinely by experimental physicists. The main respect in which CH differs from other quantum interpretations is in allowing multiple stochastic descriptions of a given measurement situation, from which one (or more) can be selected on the basis of its utility. This requires abandoning a principle (termed unicity), central to classical physics, that at any instant of time there is only a single correct description of the world.
Holl, Gerrit; Walker, Kaley A.; Conway, Stephanie; Saitoh, Naoko; Boone, Chris D.; Strong, Kimberly; Drummond, James R.
2016-05-01
We present cross-validation of remote sensing measurements of methane profiles in the Canadian high Arctic. Accurate and precise measurements of methane are essential to understand quantitatively its role in the climate system and in global change. Here, we show a cross-validation between three data sets: two from spaceborne instruments and one from a ground-based instrument. All are Fourier transform spectrometers (FTSs). We consider the Canadian SCISAT Atmospheric Chemistry Experiment (ACE)-FTS, a solar occultation infrared spectrometer operating since 2004, and the thermal infrared band of the Japanese Greenhouse Gases Observing Satellite (GOSAT) Thermal And Near infrared Sensor for carbon Observation (TANSO)-FTS, a nadir/off-nadir scanning FTS instrument operating at solar and terrestrial infrared wavelengths, since 2009. The ground-based instrument is a Bruker 125HR Fourier transform infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environment Atmospheric Research Laboratory (PEARL) Ridge Laboratory at Eureka, Nunavut (80° N, 86° W) since 2006. For each pair of instruments, measurements are collocated within 500 km and 24 h. An additional collocation criterion based on potential vorticity values was found not to significantly affect differences between measurements. Profiles are regridded to a common vertical grid for each comparison set. To account for differing vertical resolutions, ACE-FTS measurements are smoothed to the resolution of either PEARL-FTS or TANSO-FTS, and PEARL-FTS measurements are smoothed to the TANSO-FTS resolution. Differences for each pair are examined in terms of profile and partial columns. During the period considered, the number of collocations for each pair is large enough to obtain a good sample size (from several hundred to tens of thousands depending on pair and configuration). Considering full profiles, the degrees of freedom for signal (DOFS) are between 0.2 and 0.7 for TANSO-FTS and
Castro, F Javier Sanchez; Pollo, Claudio; Meuli, Reto; Maeder, Philippe; Cuisenaire, Olivier; Cuadra, Meritxell Bach; Villemure, Jean-Guy; Thiran, Jean-Philippe
2006-11-01
Validation of image registration algorithms is a difficult task and an open-ended, usually application-dependent problem. In this paper, we focus on deep brain stimulation (DBS) targeting for the treatment of movement disorders like Parkinson's disease and essential tremor. DBS involves implantation of an electrode deep inside the brain to electrically stimulate specific areas, suppressing the disease's symptoms. The subthalamic nucleus (STN) has turned out to be the optimal target for this kind of surgery. Unfortunately, the STN is in general not clearly distinguishable in common medical imaging modalities. Usual techniques to infer its location are the use of anatomical atlases and visible surrounding landmarks. Surgeons have to adjust the electrode intraoperatively using electrophysiological recordings and macrostimulation tests. We constructed a ground truth derived from specific patients whose STNs are clearly visible on magnetic resonance (MR) T2-weighted images. A patient is chosen as the atlas for both the right and left sides. Then, by registering each patient with the atlas using different methods, several estimations of the STN location are obtained. Two studies are conducted using our proposed validation scheme. First, a comparison between different atlas-based and nonrigid registration algorithms, with an evaluation of their performance and their usability for locating the STN automatically. Second, a study of which visible surrounding structures influence the STN location. The two studies are cross-validated against each other and against the experts' variability. Using this scheme, we evaluated the experts' ability against the estimation error provided by the tested algorithms, and we demonstrated that automatic STN targeting is possible and as accurate as the expert-driven techniques currently used. We also show which structures have to be taken into account to accurately estimate the STN location.
Finkelman, Matthew D; Jamison, Robert N; Kulich, Ronald J; Butler, Stephen F; Jackson, William C; Smits, Niels; Weiner, Scott G
2017-09-01
The Screener and Opioid Assessment for Patients with Pain-Revised (SOAPP-R) is a 24-item assessment designed to assist in the prediction of aberrant drug-related behavior (ADB) among patients with chronic pain. Recent work has created shorter versions of the SOAPP-R, including a static 12-item short form and two computer-based methods (curtailment and stochastic curtailment) that monitor assessments in progress. The purpose of this study was to cross-validate these shorter versions in two new populations. This retrospective study used data from patients recruited from a hospital-based pain center (n=84) and pain patients followed and treated at primary care centers (n=110). Subjects had been administered the SOAPP-R and assessed for ADB. In real-data simulation, the sensitivity, specificity, and area under the curve (AUC) of each form were calculated, as was the mean test length using curtailment and stochastic curtailment. Curtailment reduced the number of items administered by 30% to 34% while maintaining sensitivity and specificity identical to those of the full-length SOAPP-R. Stochastic curtailment reduced the number of items administered by 45% to 63% while maintaining sensitivity and specificity within 0.03 of those of the full-length SOAPP-R. The AUC of the 12-item form was equal to that of the 24-item form in both populations. Curtailment, stochastic curtailment, and the 12-item short form have potential to enhance the efficiency of the SOAPP-R. Copyright © 2017 Elsevier B.V. All rights reserved.
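Deterministic curtailment, as studied above, stops an assessment as soon as the remaining items can no longer change the screening decision. A minimal sketch; the item count, 0-4 scoring range, and cutoff of 18 below are assumptions for illustration, not the SOAPP-R's actual scoring rules:

```python
def curtailed_administration(responses, cutoff, max_item_score):
    """Administer items in order, stopping as soon as the classification
    (total >= cutoff vs. total < cutoff) is already determined."""
    total, n_items = 0, len(responses)
    for k, r in enumerate(responses, start=1):
        total += r
        remaining_max = (n_items - k) * max_item_score
        if total >= cutoff:                    # already at/above cutoff: flag
            return True, k
        if total + remaining_max < cutoff:     # cutoff is now unreachable
            return False, k
    return total >= cutoff, n_items

# Hypothetical 24-item screen, items scored 0-4, flagged at a total of 18:
# the decision is reached after only 5 items.
flag, items_used = curtailed_administration([4, 4, 4, 4, 2] + [0] * 19,
                                            cutoff=18, max_item_score=4)
print(flag, items_used)
```

Stochastic curtailment generalizes this by stopping when the final classification is merely highly probable rather than certain, which is why it shortens tests further at a small cost in agreement.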
Koehler, Sara R; Dhaher, Yasin Y; Hansen, Andrew H
2014-04-11
The iPecs load cell is a lightweight, six-degree-of-freedom force transducer designed to fit easily into an endoskeletal prosthesis via a universal mounting interface. Unlike earlier tethered systems, it is capable of wireless data transmission and on-board memory storage, which facilitate its use in both clinical and real-world settings. To date, however, the validity of the iPecs load cell has not been rigorously established, particularly for loading conditions that represent typical prosthesis use. The aim of this study was to assess the accuracy of an iPecs load cell during in situ human subject testing by cross-validating its force and moment measurements with those of a typical gait analysis laboratory. Specifically, the gait mechanics of a single person with transtibial amputation were simultaneously measured using an iPecs load cell, multiple floor-mounted force platforms, and a three-dimensional motion capture system. Overall, the forces and moments measured by the iPecs were highly correlated with those measured by the gait analysis laboratory (r>0.86) and RMSEs were less than 3.4% and 5.2% full scale output across all force and moment channels, respectively. Despite this favorable comparison, however, the results of a sensitivity analysis suggest that care should be taken to accurately identify the axes and instrumentation center of the load cell in situations where iPecs data will be interpreted in a coordinate system other than its own (e.g., inverse dynamics analysis).
Consistent Stochastic Modelling of Meteocean Design Parameters
Sørensen, John Dalsgaard; Sterndorff, M. J.
2000-01-01
Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height and the associated wind velocity, current velocity, and water level is presented. The stochastic model includes statistical uncertainty and dependency between the four stochastic variables. Further, a new stochastic model for annual maximum directional significant wave heights is presented. The model includes dependency between the maximum wave heights from neighboring directional sectors. Numerical examples are presented where the models are calibrated using the Maximum Likelihood method to data from the central part of the North Sea. The calibration of the directional distributions is made such that the stochastic model for the omnidirectional
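Maximum Likelihood calibration of an annual-maximum model can be sketched with a Gumbel fit, where a fixed-point iteration solves the ML equation for the scale parameter. The wave-height values below are invented, and the plain Gumbel choice is an illustrative assumption rather than the paper's full directional model:

```python
import math

def gumbel_mle(xs, tol=1e-10):
    """Maximum-likelihood fit of a Gumbel distribution (annual-maximum model).
    Solves beta = mean(x) - sum(x*exp(-x/beta)) / sum(exp(-x/beta)) by damped
    fixed-point iteration, then recovers the location parameter mu."""
    n = len(xs)
    mean = sum(xs) / n
    beta = (max(xs) - min(xs)) / 4.0 or 1.0   # crude starting value
    for _ in range(500):
        w = [math.exp(-x / beta) for x in xs]
        new_beta = mean - sum(x * wi for x, wi in zip(xs, w)) / sum(w)
        if abs(new_beta - beta) < tol:
            beta = new_beta
            break
        beta = 0.5 * (beta + new_beta)        # damped update for stability
    mu = -beta * math.log(sum(math.exp(-x / beta) for x in xs) / n)
    return mu, beta

# Hypothetical annual maximum significant wave heights (m).
hs = [6.1, 7.3, 5.8, 8.0, 6.6, 7.1, 6.9, 7.7, 6.3, 7.5]
mu, beta = gumbel_mle(hs)
print(round(mu, 2), round(beta, 2))
```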
When Is Holography Consistent?
McInnes, Brett
2015-01-01
Holographic duality relates two radically different kinds of theory: one with gravity, one without. The very existence of such an equivalence imposes strong consistency conditions which are, in the nature of the case, hard to satisfy. Recently a particularly deep condition of this kind, relating the minimum of a probe brane action to a gravitational bulk action (in a Euclidean formulation), has been recognised; and the question arises as to the circumstances under which it, and its Lorentzian counterpart, are satisfied. We discuss the fact that there are physically interesting situations in which one or both versions might, in principle, not be satisfied. These arise in two distinct circumstances: first, when the bulk is not an Einstein manifold, and, second, in the presence of angular momentum. Focusing on the application of holography to the quark-gluon plasma (of the various forms arising in the early Universe and in heavy-ion collisions), we find that these potential violations never actually occur...
Proteolysis and consistency of Meshanger cheese
Jong, de L.
1978-01-01
Proteolysis in Meshanger cheese, estimated by quantitative polyacrylamide gel electrophoresis, is discussed. The conversion of α_s1-casein was proportional to rennet concentration in the cheese. Changes in consistency, after a maximum, were correlated to breakdown of…
Jairo Alberto Rueda Restrepo
2009-12-01
One of the main concerns of plant breeders is the evaluation of phenotypic stability through regional or multi-environment trials. Many methods have been proposed for analyzing such trials and estimating phenotypic stability. This paper compares the regression method proposed by Eberhart and Russell with the variance-components method proposed by Shukla, following a cross-validation scheme. Data from 20 multi-environment corn trials, each with nine genotypes, planted under a randomized complete block design with four replications, were used. The best model for predicting the future performance of a genotype in a given environment was found to be the Eberhart and Russell method, with a root mean square error of prediction 2.21% lower than Shukla's method and a prediction consistency of 90.6%.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medicolegal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 and 417.48 mm for right male and female, and 453.35 and 420.44 mm for left male and female, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm definitely female, while for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
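The demarking-point analysis above is, in effect, a three-way decision rule. A minimal sketch (the function name and the assumption that lengths are in mm are mine, not from the paper):

```python
def classify_femur(max_length_mm, side="right"):
    """Sex from maximum femoral length via the study's demarking points;
    lengths between the two thresholds are indeterminate."""
    male_dp, female_dp = (476.70, 379.99) if side == "right" else (484.49, 385.73)
    if max_length_mm > male_dp:
        return "male"
    if max_length_mm < female_dp:
        return "female"
    return "indeterminate"

print(classify_femur(490.0, "left"))  # prints male
```

Note that the demarking points deliberately leave a wide indeterminate band; the study's identification percentages reflect how few bones fall outside it.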
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions…
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
YUE Li; CHEN Xiru
2004-01-01
Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification, and some other smoothness conditions, it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate of this solution tending to the true value is determined. In an important special case, this rate is the same as specified in the law of the iterated logarithm (LIL) for iid partial sums and thus cannot be improved any further.
Strong consistency of maximum quasi-likelihood estimates in generalized linear models
Yin, Changming; Zhao, Lincheng
2005-01-01
In a generalized linear model with q × 1 responses, bounded and fixed p × q regressors Z_i, and a general link function, under the most general assumption on the minimum eigenvalue of ∑_{i=1}^n Z_i Z_i', a moment condition on the responses as weak as possible, and other mild regularity conditions, we prove that with probability one the quasi-likelihood equation has a solution β_n for all large sample sizes n, which converges to the true regression parameter β_0. This result is an essential improvement over the relevant results in the literature.
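In the simplest scalar case (p = q = 1) with a logistic link, the quasi-likelihood equation discussed in the two abstracts above is sum_i z_i (y_i - mu(z_i * beta)) = 0. A minimal Newton-iteration sketch (names and data are illustrative, not from either paper):

```python
import math

def solve_quasi_likelihood(zs, ys, iters=50):
    """Newton iterations for the scalar quasi-likelihood equation
    sum_i z_i * (y_i - mu(z_i * beta)) = 0 with a logistic mean function."""
    mu = lambda t: 1.0 / (1.0 + math.exp(-t))
    beta = 0.0
    for _ in range(iters):
        score = sum(z * (y - mu(z * beta)) for z, y in zip(zs, ys))
        info = sum(z * z * mu(z * beta) * (1.0 - mu(z * beta)) for z in zs)
        beta += score / info  # Newton step: score over Fisher information
    return beta

# mixed labels keep the estimate finite (no perfect separation)
beta_hat = solve_quasi_likelihood([-2, -1, 1, 2, 1, -1], [0, 0, 1, 1, 0, 1])
```

The consistency results above concern exactly when such a root exists and converges to the true parameter as n grows; the sketch only illustrates what "a solution of the quasi-likelihood equation" means computationally.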
Performance of penalized maximum likelihood in estimation of genetic covariances matrices
Meyer Karin
2011-11-01
Background: Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods: An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation of estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results: It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well as if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions: Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should…
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
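The bound stated above, maximum seismic moment ≤ (modulus of rigidity) × (injected volume), converts to a moment magnitude via the standard Hanks-Kanamori relation M_w = (2/3)(log10 M_0 - 9.1). A sketch, with 30 GPa assumed as a typical crustal rigidity (my assumption, not a value from the abstract):

```python
import math

def max_induced_magnitude(injected_volume_m3, rigidity_pa=3.0e10):
    """Upper bound on seismic moment (N*m) = rigidity * injected volume,
    converted to moment magnitude via the Hanks-Kanamori relation."""
    m0 = rigidity_pa * injected_volume_m3          # maximum seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)    # moment magnitude Mw

# e.g. 1e6 m^3 of injected wastewater at 30 GPa rigidity
print(round(max_induced_magnitude(1.0e6), 2))  # prints 4.92
```

This matches the abstract's observation that wastewater-disposal magnitudes sometimes exceed 5 only for the largest injected volumes, and, as the authors stress, it is a statistical bound rather than an absolute physical limit.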
Schläppy, Romain; Eckert, Nicolas; Jomelli, Vincent; Grancher, Delphine; Brunstein, Daniel; Stoffel, Markus; Naaim, Mohamed
2013-04-01
rare events, i.e. to the tail of the local runout distance distribution. Furthermore, a good agreement exists with the statistical-numerical model's prediction, i.e. a 10-40 m difference for return periods ranging between 10 and 300 years, which is rather small with regard to the uncertainty levels to be considered in avalanche probabilistic modeling and dendrochronological reconstructions. It is important to note that such a cross-validation on independent extreme predictions has never been undertaken before. It suggests that (i) dendrochronological reconstruction can provide valuable information for anticipating future extreme avalanche events in the context of risk management, and, in turn, that (ii) the statistical-numerical model, when properly calibrated, can be used with reasonable confidence to refine these predictions, with, for instance, evaluation of pressure and flow depth distributions at each position of the runout zone. A strong sensitivity to the determination of local avalanche and dendrological record frequencies is however highlighted, indicating that this is an essential step for an accurate probabilistic characterization of large-extent events.
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
The early maximum likelihood estimation model of audiovisual integration in speech perception
Andersen, Tobias
2015-01-01
Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but has also been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration… Cross-validation can evaluate models of audiovisual integration based on typical data sets, taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures…
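For context, the conventional (late) MLE account of cue integration, against which an "early" variant is proposed, fuses two Gaussian cue estimates with inverse-variance weights. A minimal sketch (names are mine, not from the paper):

```python
def mle_integrate(mu_a, var_a, mu_v, var_v):
    """Maximum likelihood fusion of two Gaussian cues: each cue is
    weighted by its reliability (inverse variance)."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    mu = w_a * mu_a + w_v * mu_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)  # fused variance <= either cue's
    return mu, var

mu, var = mle_integrate(0.0, 1.0, 1.0, 1.0)  # equally reliable cues -> midpoint
```

The "early" model of the abstract differs in where in the processing stream this fusion happens, not in the inverse-variance principle itself.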
Phylogenetic prediction of the maximum per capita rate of population growth.
Fagan, William F; Pearson, Yanthe E; Larsen, Elise A; Lynch, Heather J; Turner, Jessica B; Staver, Hilary; Noble, Andrew E; Bewick, Sharon; Goldberg, Emma E
2013-07-22
The maximum per capita rate of population growth, r, is a central measure of population biology. However, researchers can only directly calculate r when adequate time series, life tables and similar datasets are available. We instead view r as an evolvable, synthetic life-history trait and use comparative phylogenetic approaches to predict r for poorly known species. Combining molecular phylogenies, life-history trait data and stochastic macroevolutionary models, we predicted r for mammals of the Caniformia and Cervidae. Cross-validation analyses demonstrated that, even with sparse life-history data, comparative methods estimated r well and outperformed models based on body mass. Values of r predicted via comparative methods were in strong rank agreement with observed values and reduced mean prediction errors by approximately 68 per cent compared with two null models. We demonstrate the utility of our method by estimating r for 102 extant species in these mammal groups with unknown life-history traits.
Consistent estimation of Gibbs energy using component contributions.
Noor, Elad; Haraldsdóttir, Hulda S; Milo, Ron; Fleming, Ronan M T
2013-01-01
Standard Gibbs energies of reactions are increasingly being used in metabolic modeling for applying thermodynamic constraints on reaction rates, metabolite concentrations and kinetic parameters. The increasing scope and diversity of metabolic models has led scientists to look for genome-scale solutions that can estimate the standard Gibbs energy of all the reactions in metabolism. Group contribution methods greatly increase coverage, albeit at the price of decreased precision. We present here a way to combine the estimations of group contribution with the more accurate reactant contributions by decomposing each reaction into two parts and applying one of the methods on each of them. This method gives priority to the reactant contributions over group contributions while guaranteeing that all estimations will be consistent, i.e. will not violate the first law of thermodynamics. We show that there is a significant increase in the accuracy of our estimations compared to standard group contribution. Specifically, our cross-validation results show an 80% reduction in the median absolute residual for reactions that can be derived by reactant contributions only. We provide the full framework and source code for deriving estimates of standard reaction Gibbs energy, as well as confidence intervals, and believe this will facilitate the wide use of thermodynamic data for a better understanding of metabolism.
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
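As a concrete illustration of the maximum entropy principle invoked above, Jaynes' classic dice example finds the least-biased distribution over faces 1..6 consistent with a prescribed mean. A stdlib sketch (bisection on the Lagrange multiplier; function names are mine):

```python
import math

def maxent_dice(target_mean, lo=-5.0, hi=5.0, iters=100):
    """MaxEnt distribution over faces 1..6 with a fixed mean: the solution
    has p_k proportional to exp(lam * k); solve for lam by bisection."""
    def mean(lam):
        weights = [math.exp(lam * k) for k in range(1, 7)]
        z = sum(weights)
        return sum(k * w for k, w in zip(range(1, 7), weights)) / z
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if mean(mid) < target_mean else (lo, mid)
    lam = (lo + hi) / 2.0
    weights = [math.exp(lam * k) for k in range(1, 7)]
    z = sum(weights)
    return [w / z for w in weights]

probs = maxent_dice(4.5)  # Jaynes' example: observed mean 4.5 instead of 3.5
```

The resulting distribution is tilted toward high faces, but no more than the mean constraint demands; this "least bias" property is what the review exploits in drug-discovery settings.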
曾明基 Ming-Chi Tseng
2011-09-01
This study examined the rationality of multilevel constructs of student ratings of instruction. The sample consisted of 180 undergraduate classes from a university on the east coast of Taiwan, with class sizes ranging between 13 and 78, for a total sample of 6,568 students. Multilevel model competition and multilevel cross-validation showed that, at the student level, the student-ratings instrument comprises five sub-dimensions: Teaching Preparation, Teaching Materials, Teaching Methods, Learning Evaluation, and Teaching Attitudes. At the class level, however, a single factor, overall student ratings of instruction, accounts for between-class differences. This study therefore suggests that, when constructing cross-level instruments or examining cross-level issues, researchers should first test whether the constructs change across levels, through multilevel model competition and multilevel cross-validation, rather than assuming the between-group and within-group factor structures are equal, so as to avoid the fallacy of cross-level inference.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Bootstrap-Based Inference for Cube Root Consistent Estimators
Cattaneo, Matias D.; Jansson, Michael; Nagasawa, Kenichi
This note proposes a consistent bootstrap-based distributional approximation for cube root consistent estimators such as the maximum score estimator of Manski (1975) and the isotonic density estimator of Grenander (1956). In both cases, the standard nonparametric bootstrap is known to be inconsistent…
基于交叉验证支持向量机的短期负荷预测%Forecasting of Short-term Power Load Based on Cross Validation of SVM
李洪江; 刘栋
2016-01-01
Based on cross-validation theory, the regression parameters of a support vector machine are optimized, and a short-term load forecasting method is proposed. Cross validation selects the optimal regression parameters for the support vector machine, which are then used to predict on the samples, improving the accuracy of the results. A simulation example and the forecasting results show that the proposed method achieves high forecasting precision and is easy to implement.
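The grid-search-with-cross-validation loop the abstract describes can be sketched in plain Python; a toy one-parameter ridge model stands in for the SVM regression, and all names and data below are illustrative:

```python
import statistics

def kfold_mse(xs, ys, k, fit):
    """Mean squared error of a model factory over k interleaved folds."""
    n = len(xs)
    errs = []
    for fold in range(k):
        test_idx = set(range(fold, n, k))
        train = [(x, y) for i, (x, y) in enumerate(zip(xs, ys)) if i not in test_idx]
        model = fit(train)
        errs += [(model(xs[i]) - ys[i]) ** 2 for i in sorted(test_idx)]
    return statistics.mean(errs)

def ridge_1d(lam):
    """Factory for 1-D ridge regression y ~ w*x with penalty lam."""
    def fit(train):
        sxx = sum(x * x for x, _ in train)
        sxy = sum(x * y for x, y in train)
        w = sxy / (sxx + lam)
        return lambda x: w * x
    return fit

xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1]  # roughly y = 2x plus noise
# pick the penalty with the lowest cross-validated error, as the abstract describes
best_lam = min([0.0, 0.1, 1.0, 10.0], key=lambda lam: kfold_mse(xs, ys, 4, ridge_1d(lam)))
```

An SVM regression would replace `ridge_1d` with a model factory over (C, epsilon, kernel width), but the selection loop is identical.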
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having the highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path, and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Consistency of trace norm minimization
Bach, Francis
2007-01-01
Regularization by the sum of singular values, also referred to as the trace norm, is a popular technique for estimating low rank rectangular matrices. In this paper, we extend some of the consistency results of the Lasso to provide necessary and sufficient conditions for rank consistency of trace norm minimization with the square loss. We also provide an adaptive version that is rank consistent even when the necessary condition for the non adaptive version is not fulfilled.
High SNR Consistent Compressive Sensing
Kallummil, Sreejith; Kalyani, Sheetal
2017-01-01
High signal to noise ratio (SNR) consistency of model selection criteria in linear regression models has attracted a lot of attention recently. However, most of the existing literature on high SNR consistency deals with model order selection. Further, the limited literature available on the high SNR consistency of subset selection procedures (SSPs) is applicable to linear regression with full rank measurement matrices only. Hence, the performance of SSPs used in underdetermined linear models ...
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we give a representation of the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest-order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with the limited knowledge we have about the processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the superposition model…
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease their influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, a multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Barrows, Timothy T.; Juggins, Steve
2005-04-01
We present new last glacial maximum (LGM) sea-surface temperature (SST) maps for the oceans around Australia based on planktonic foraminifera assemblages. To provide the most reliable SST estimates we use the modern analog technique, the revised analog method, and artificial neural networks in conjunction with an expanded modern core top database. All three methods produce similar quality predictions and the root mean squared error of the consensus prediction (the average of the three) under cross-validation is only ±0.77 °C. We determine LGM SST using data from 165 cores, most of which have good age control from oxygen isotope stratigraphy and radiocarbon dates. The coldest SST occurred at 20,500±1400 cal yr BP, predating the maximum in oxygen isotope records at 18,200±1500 cal yr BP. During the LGM interval we observe cooling within the tropics of up to 4 °C in the eastern Indian Ocean, and mostly between 0 and 3 °C elsewhere along the equator. The high latitudes cooled by the greatest degree, a maximum of 7-9 °C in the southwest Pacific Ocean. Our maps improve substantially on previous attempts by making higher quality temperature estimates, using more cores, and improving age control.
Consistency argued students of fluid
Viyanti; Cari; Suparmi; Winarti; Slamet Budiarti, Indah; Handika, Jeffry; Widyastuti, Fatma
2017-01-01
Problem solving for physics concepts through consistency of argumentation can improve students' thinking skills, which is important in science. The study aims to assess the consistency of students' argumentation about fluid material. The population of this study comprises college students at PGRI Madiun, UIN Sunan Kalijaga Yogyakarta, and Lampung University. Using cluster random sampling, a sample of 145 students was obtained. The study used a descriptive survey method. Data were obtained through a reasoned multiple-choice test and interviews. The fluid problems were modified from [9] and [1]. The results show average argumentation consistency rates for correct consistency, wrong consistency, and inconsistency of 4.85%, 29.93%, and 65.23%, respectively. The data indicate a lack of understanding of the fluid material, which ideally, with fully consistent argumentation, would support an expanded understanding of the concept. The results of the study serve as a reference for making improvements in future studies to obtain a positive change in the consistency of argumentation.
Coordinating user interfaces for consistency
Nielsen, Jakob
2001-01-01
In the years since Jakob Nielsen's classic collection on interface consistency first appeared, much has changed, and much has stayed the same. On the one hand, there's been exponential growth in the opportunities for following or disregarding the principles of interface consistency-more computers, more applications, more users, and of course the vast expanse of the Web. On the other, there are the principles themselves, as persistent and as valuable as ever. In these contributed chapters, you'll find details on many methods for seeking and enforcing consistency, along with bottom-line analys
Consistency of Random Survival Forests.
Ishwaran, Hemant; Kogalur, Udaya B
2010-07-01
We prove uniform consistency of Random Survival Forests (RSF), a newly introduced forest ensemble learner for analysis of right-censored survival data. Consistency is proven under general splitting rules, bootstrapping, and random selection of variables-that is, under true implementation of the methodology. Under this setting we show that the forest ensemble survival function converges uniformly to the true population survival function. To prove this result we make one key assumption regarding the feature space: we assume that all variables are factors. Doing so ensures that the feature space has finite cardinality and enables us to exploit counting process theory and the uniform consistency of the Kaplan-Meier survival function.
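The Kaplan-Meier survival function whose uniform consistency the proof exploits can be sketched minimally as follows (an illustrative re-derivation assuming no tied event times, not the authors' code):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate from right-censored data.
    times: observed times; events: 1 = event observed, 0 = censored.
    Assumes no tied event times (illustrative simplification)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n_at_risk = len(times)
    surv = 1.0
    curve = []
    for t, d in zip(times, events):
        if d:  # an observed event: multiply by the conditional survival
            surv *= (n_at_risk - 1) / n_at_risk
        curve.append((t, surv))
        n_at_risk -= 1  # this subject leaves the risk set either way
    return curve

# One event at t=1 (3 at risk), a censoring at t=2, an event at t=3.
print(kaplan_meier([1, 2, 3], [1, 0, 1]))
```

Censored observations shrink the risk set without dropping the survival estimate, which is exactly what distinguishes the estimator from the empirical distribution function.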
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index over cacti is characterized, as well...
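For illustration, the Kirchhoff index of any connected graph can be computed from the Laplacian spectrum via the standard identity $Kf(G) = n \sum_i 1/\lambda_i$ over the nonzero Laplacian eigenvalues (a minimal sketch, not taken from the paper):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index via the Laplacian spectrum:
    Kf(G) = n * sum(1/lambda_i) over nonzero Laplacian eigenvalues,
    which equals the sum of all pairwise resistance distances."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
    eig = np.linalg.eigvalsh(L)         # real eigenvalues, ascending
    nonzero = eig[eig > 1e-9]           # drop the single zero eigenvalue
    return n * np.sum(1.0 / nonzero)

# Triangle C3: every pairwise resistance is 2/3, so Kf = 3 * 2/3 = 2.
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(round(kirchhoff_index(triangle), 6))  # → 2.0
```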
Process Fairness and Dynamic Consistency
S.T. Trautmann (Stefan); P.P. Wakker (Peter)
2010-01-01
When process fairness deviates from outcome fairness, dynamic inconsistencies can arise as in nonexpected utility. Resolute choice (Machina) can restore dynamic consistency under nonexpected utility without using Strotz's precommitment. It can similarly justify dynamically
Gravitation, Causality, and Quantum Consistency
Hertzberg, Mark P
2016-01-01
We examine the role of consistency with causality and quantum mechanics in determining the properties of gravitation. We begin by constructing two different classes of interacting theories of massless spin 2 particles -- gravitons. One involves coupling the graviton with the lowest number of derivatives to matter, the other involves coupling the graviton with higher derivatives to matter, making use of the linearized Riemann tensor. The first class requires an infinite tower of terms for consistency, which is known to lead uniquely to general relativity. The second class only requires a finite number of terms for consistency, which appears as a new class of theories of massless spin 2. We recap the causal consistency of general relativity and show how this fails in the second class for the special case of coupling to photons, exploiting related calculations in the literature. In an upcoming publication [1] this result is generalized to a much broader set of theories. Then, as a causal modification of general ...
Consistency and stability of recombinant fermentations.
Wiebe, M E; Builder, S E
1994-01-01
Production of proteins of consistent quality in heterologous, genetically-engineered expression systems is dependent upon identifying the manufacturing process parameters which have an impact on product structure, function, or purity, validating acceptable ranges for these variables, and performing the manufacturing process as specified. One of the factors which may affect product consistency is genetic instability of the primary product sequence, as well as instability of genes which code for proteins responsible for post-translational modification of the product. Approaches have been developed for mammalian expression systems to assure that product quality is not changing through mechanisms of genetic instability. Sensitive protein analytical methods, particularly peptide mapping, are used to evaluate product structure directly, and are more sensitive in detecting genetic instability than is direct genetic analysis by nucleotide sequencing of the recombinant gene or mRNA. These methods are being employed to demonstrate that the manufacturing process consistently yields a product of defined structure from cells cultured through the range of cell ages used in the manufacturing process and well beyond the maximum cell age defined for the process. The combination of well designed validation studies which demonstrate consistent product quality as a function of cell age, and rigorous quality control of every product lot by sensitive protein analytical methods provide the necessary assurance that product structure is not being altered through mechanisms of mutation and selection.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Time-consistent and market-consistent evaluations
Pelsser, A.; Stadje, M.A.
2014-01-01
We consider evaluation methods for payoffs with an inherent financial risk as encountered for instance for portfolios held by pension funds and insurance companies. Pricing such payoffs in a way consistent to market prices typically involves combining actuarial techniques with methods from mathemati
Consistency of muscle synergies during pedaling across different mechanical constraints.
Hug, François; Turpin, Nicolas A; Couturier, Antoine; Dorel, Sylvain
2011-07-01
The purpose of the present study was to determine whether muscle synergies are constrained by changes in the mechanics of pedaling. The decomposition algorithm used to identify muscle synergies was based on two components: "muscle synergy vectors," which represent the relative weighting of each muscle within each synergy, and "synergy activation coefficients," which represent the relative contribution of muscle synergy to the overall muscle activity pattern. We hypothesized that muscle synergy vectors would remain fixed but that synergy activation coefficients could vary, resulting in observed variations in individual electromyographic (EMG) patterns. Eleven cyclists were tested during a submaximal pedaling exercise and five all-out sprints. The effects of torque, maximal torque-velocity combination, and posture were studied. First, muscle synergies were extracted from each pedaling exercise independently using non-negative matrix factorization. Then, to cross-validate the results, muscle synergies were extracted from the entire data pooled across all conditions, and muscle synergy vectors extracted from the submaximal exercise were used to reconstruct EMG patterns of the five all-out sprints. Whatever the mechanical constraints, three muscle synergies accounted for the majority of variability [mean variance accounted for (VAF) = 93.3 ± 1.6%, VAF (muscle) > 82.5%] in the EMG signals of 11 lower limb muscles. In addition, there was a robust consistency in the muscle synergy vectors. This high similarity in the composition of the three extracted synergies was accompanied by slight adaptations in their activation coefficients in response to extreme changes in torque and posture. Thus, our results support the hypothesis that these muscle synergies reflect a neural control strategy, with only a few timing adjustments in their activation regarding the mechanical constraints.
Market-consistent actuarial valuation
Wüthrich, Mario V
2016-01-01
This is the third edition of this well-received textbook, presenting powerful methods for measuring insurance liabilities and assets in a consistent way, with detailed mathematical frameworks that lead to market-consistent values for liabilities. Topics covered are stochastic discounting with deflators, valuation portfolio in life and non-life insurance, probability distortions, asset and liability management, financial risks, insurance technical risks, and solvency. Including updates on recent developments and regulatory changes under Solvency II, this new edition of Market-Consistent Actuarial Valuation also elaborates on different risk measures, providing a revised definition of solvency based on industry practice, and presents an adapted valuation framework which takes a dynamic view of non-life insurance reserving risk.
Consistent Histories in Quantum Cosmology
Craig, David A; 10.1007/s10701-010-9422-6
2010-01-01
We illustrate the crucial role played by decoherence (consistency of quantum histories) in extracting consistent quantum probabilities for alternative histories in quantum cosmology. Specifically, within a Wheeler-DeWitt quantization of a flat Friedmann-Robertson-Walker cosmological model sourced with a free massless scalar field, we calculate the probability that the universe is singular in the sense that it assumes zero volume. Classical solutions of this model are a disjoint set of expanding and contracting singular branches. A naive assessment of the behavior of quantum states which are superpositions of expanding and contracting universes may suggest that a "quantum bounce" is possible, i.e., that the wave function of the universe may remain peaked on a non-singular classical solution throughout its history. However, a more careful consistent histories analysis shows that for arbitrary states in the physical Hilbert space the probability of this Wheeler-DeWitt quantum universe encountering the big bang/crun...
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
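The regularizer described above rests on the empirical mutual information between discrete classification responses and labels; a minimal plug-in estimator (an illustrative sketch, not the paper's implementation) looks like this:

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information I(X;Y) in nats between two discrete
    sequences, via the plug-in estimate of the joint and marginals."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))   # joint frequency
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

# Responses perfectly aligned with labels: I(X;Y) = H(X) = ln 2.
print(round(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]), 4))  # → 0.6931
```

Maximizing this quantity over the classifier parameters (alongside the loss and complexity terms) is the high-level idea; the paper itself works with a differentiable entropy-estimation surrogate rather than this discrete count.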
Spindler, Helle; Kruse, Charlotte; Zwisler, Ann-Dorthe
2009-01-01
BACKGROUND: Type D personality is an emerging risk factor in cardiovascular disease. We examined the psychometric properties of the Danish version of the Type D Scale (DS14) and the impact of Type D on anxiety and depression in cardiac patients. METHOD: Cardiac patients (n = 707) completed the DS14, the Hospital Anxiety and Depression Scale, and the Eysenck Personality Questionnaire. A subgroup (n = 318) also completed the DS14 at 3 or 12 weeks. RESULTS: The two-factor structure of the DS14 was confirmed; the subscales negative affectivity and social inhibition were shown to be valid, internally consistent (Cronbach's alpha = 0.87/0.91; mean inter-item correlations = 0.49/0.59), and stable over 3 and 12 weeks (r = 0.85/0.78; 0.83/0.79; ps anxiety (beta, 0.49; p
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
The Importance of being consistent
Wasserman, Adam; Jiang, Kaili; Kim, Min-Cheol; Sim, Eunji; Burke, Kieron
2016-01-01
We review the role of self-consistency in density functional theory. We apply a recent analysis to both Kohn-Sham and orbital-free DFT, as well as to Partition-DFT, which generalizes all aspects of standard DFT. In each case, the analysis distinguishes between errors in approximate functionals versus errors in the self-consistent density. This yields insights into the origins of many errors in DFT calculations, especially those often attributed to self-interaction or delocalization error. In many classes of problems, errors can be substantially reduced by using `better' densities. We review the history of these approaches, many of their applications, and give simple pedagogical examples.
Consistent supersymmetric decoupling in cosmology
Sousa Sánchez, Kepa
2012-01-01
The present work discusses several problems related to the stability of ground states with broken supersymmetry in supergravity, and to the existence and stability of cosmic strings in various supersymmetric models. In particular we study the necessary conditions to truncate consistently a sector o
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
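The stochastic decision model underlying difference scaling can be sketched as follows (a hypothetical Python re-implementation, not the R package's code; the noise level `sigma` and anchoring the scale endpoints at 0 and 1 are illustrative assumptions):

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def neg_log_lik(inner_psi, trials, responses, sigma=0.2):
    """Negative log-likelihood of the difference-scaling decision model:
    the observer reports interval (c,d) larger than (a,b) when
    (psi[d]-psi[c]) - (psi[b]-psi[a]) + Gaussian noise > 0.
    Scale endpoints are anchored at 0 and 1; only the interior
    scale values inner_psi are free parameters (an assumption here)."""
    psi = np.concatenate(([0.0], np.asarray(inner_psi, float), [1.0]))
    a, b, c, d = np.asarray(trials).T
    delta = (psi[d] - psi[c]) - (psi[b] - psi[a])
    p = np.clip([norm_cdf(z / sigma) for z in delta], 1e-9, 1 - 1e-9)
    r = np.asarray(responses, float)
    return -np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))
```

Minimizing `neg_log_lik` over `inner_psi` with any numerical optimizer yields the maximum likelihood perceptual scale; the R package additionally supports GLM-based fitting and the self-consistency diagnostics mentioned above.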
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and---to a lesser extent---the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250--370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index; deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
Prieto, Elena; Marti-Climent, Josep M. [Clinica Universidad de Navarra, Nuclear Medicine Department, Pamplona (Spain); Collantes, Maria; Molinet, Francisco [Center for Applied Medical Research (CIMA) and Clinica Universidad de Navarra, Small Animal Imaging Research Unit, Pamplona (Spain); Delgado, Mercedes; Garcia-Garcia, Luis; Pozo, Miguel A. [Universidad Complutense de Madrid, Brain Mapping Unit, Madrid (Spain); Juri, Carlos [Center for Applied Medical Research (CIMA), Movement Disorders Group, Neurosciences Division, Pamplona (Spain); Clinica Universidad de Navarra, Department of Neurology and Neurosurgery, Pamplona (Spain); Centro de Investigacion Biomedica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), Pamplona (Spain); Pontificia Universidad Catolica de Chile, Department of Neurology, Santiago (Chile); Fernandez-Valle, Maria E. [Universidad Complutense de Madrid, MRI Research Center, Madrid (Spain); Gago, Belen [Center for Applied Medical Research (CIMA), Movement Disorders Group, Neurosciences Division, Pamplona (Spain); Centro de Investigacion Biomedica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), Pamplona (Spain); Obeso, Jose A. [Center for Applied Medical Research (CIMA), Movement Disorders Group, Neurosciences Division, Pamplona (Spain); Clinica Universidad de Navarra, Department of Neurology and Neurosurgery, Pamplona (Spain); Centro de Investigacion Biomedica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), Pamplona (Spain); Penuelas, Ivan [Clinica Universidad de Navarra, Nuclear Medicine Department, Pamplona (Spain); Center for Applied Medical Research (CIMA) and Clinica Universidad de Navarra, Small Animal Imaging Research Unit, Pamplona (Spain)
2011-12-15
Although specific positron emission tomography (PET) scanners have been developed for small animals, spatial resolution remains one of the most critical technical limitations, particularly in the evaluation of the rodent brain. The purpose of the present study was to examine the reliability of voxel-based statistical analysis (Statistical Parametric Mapping, SPM) applied to {sup 18}F-fluorodeoxyglucose (FDG) PET images of the rat brain, acquired on a small animal PET not specifically designed for rodents. The gold standard for the validation of the PET results was the autoradiography of the same animals acquired under the same physiological conditions, reconstructed as a 3-D volume and analysed using SPM. Eleven rats were studied under two different conditions: conscious or under inhalatory anaesthesia during {sup 18}F-FDG uptake. All animals were studied in vivo under both conditions in a dedicated small animal Philips MOSAIC PET scanner and magnetic resonance images were obtained for subsequent spatial processing. Then, rats were randomly assigned to a conscious or anaesthetized group for postmortem autoradiography, and slices from each animal were aligned and stacked to create a 3-D autoradiographic volume. Finally, differences in {sup 18}F-FDG uptake between conscious and anaesthetized states were assessed from PET and autoradiography data by SPM analysis and results were compared. SPM results of PET and 3-D autoradiography are in good agreement and led to the detection of consistent cortical differences between the conscious and anaesthetized groups, particularly in the bilateral somatosensory cortices. However, SPM analysis of 3-D autoradiography also highlighted differences in the thalamus that were not detected with PET. This study demonstrates that any difference detected with SPM analysis of MOSAIC PET images of rat brain is detected also by the gold standard autoradiographic technique, confirming that this methodology provides reliable results, although
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
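As a quick unit check of the quoted figure (assuming the standard solar mass value, which the abstract does not state):

```python
M_core = 2.69e30   # maximum iron core mass from the abstract, kg
M_sun = 1.989e30   # solar mass, kg (standard reference value)

# 2.69e30 kg expressed in solar masses, matching the quoted 1.35.
print(round(M_core / M_sun, 2))  # → 1.35
```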
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
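As a deterministic baseline for comparison (not the paper's Markov-chain algorithm), maximum cardinality matching in the bipartite case can be computed with Kuhn's augmenting-path algorithm in O(V·E) time:

```python
def max_bipartite_matching(adj, n_right):
    """Maximum cardinality matching in a bipartite graph via repeated
    augmenting-path search (Kuhn's algorithm). adj[u] lists the
    right-side vertices adjacent to left vertex u."""
    match_right = [-1] * n_right  # match_right[v] = left partner of v

    def try_augment(u, seen):
        # Try to match u, recursively displacing current partners.
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# Left vertex 0 only reaches right vertex 0; left vertex 1 reaches 0 and 1.
# The augmenting path displaces vertex 1 onto right vertex 1, so size = 2.
print(max_bipartite_matching([[0], [0, 1]], 2))  # → 2
```

The faster algorithms cited in the abstract (Hopcroft-Karp, and the randomized Glauber-dynamics approach of the paper) improve on this baseline asymptotically while producing the same matching size.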
Consistence of Network Filtering Rules
SHE Kun; WU Yuancheng; HUANG Juncai; ZHOU Mingtian
2004-01-01
The inconsistency of firewall/VPN (Virtual Private Network) rules imposes a large maintenance cost. With the growth of multinational companies, SOHO offices, and e-government, the number of firewalls/VPNs will increase rapidly, and rule tables on stand-alone devices or across the network will grow geometrically. Checking the consistency of rule tables manually is inadequate. A formal approach can define semantic consistency and lay a theoretical foundation for the intelligent management of rule tables. In this paper, a formalization of host rules and network rules for automatic rule validation, based on set theory, is proposed, and a rule-validation scheme is defined. The analysis results show the superior performance of the method and demonstrate its potential for intelligent management based on rule tables.
Self-consistent triaxial models
Sanders, Jason L
2015-01-01
We present self-consistent triaxial stellar systems that have analytic distribution functions (DFs) expressed in terms of the actions. These provide triaxial density profiles with cores or cusps at the centre. They are the first self-consistent triaxial models with analytic DFs suitable for modelling giant ellipticals and dark haloes. Specifically, we study triaxial models that reproduce the Hernquist profile from Williams & Evans (2015), as well as flattened isochrones of the form proposed by Binney (2014). We explore the kinematics and orbital structure of these models in some detail. The models typically become more radially anisotropic on moving outwards, have velocity ellipsoids aligned in Cartesian coordinates in the centre and aligned in spherical polar coordinates in the outer parts. In projection, the ellipticity of the isophotes and the position angle of the major axis of our models generally change with radius. So, a natural application is to elliptical galaxies that exhibit isophote twisting....
On Modal Refinement and Consistency
Nyman, Ulrik; Larsen, Kim Guldstrand; Wasowski, Andrzej
2007-01-01
Almost 20 years after the original conception, we revisit several fundamental questions about modal transition systems. First, we demonstrate the incompleteness of the standard modal refinement using a counterexample due to Hüttel. Deciding any refinement, complete with respect to the standard … notions of implementation, is shown to be computationally hard (co-NP hard). Second, we consider four forms of consistency (existence of implementations) for modal specifications. We characterize each operationally, giving algorithms for deciding, and for synthesizing implementations, together…
Variable selection for modeling the absolute magnitude at maximum of Type Ia supernovae
Uemura, Makoto; Kawabata, Koji S.; Ikeda, Shiro; Maeda, Keiichi
2015-06-01
We discuss what is an appropriate set of explanatory variables in order to predict the absolute magnitude at the maximum of Type Ia supernovae. In order to have a good prediction, the error for future data, which is called the "generalization error," should be small. We use cross-validation in order to control the generalization error and a LASSO-type estimator in order to choose the set of variables. This approach can be used even in the case that the number of samples is smaller than the number of candidate variables. We studied the Berkeley supernova database with our approach. Candidates for the explanatory variables include normalized spectral data, variables about lines, and previously proposed flux ratios, as well as the color and light-curve widths. As a result, we confirmed the past understanding about Type Ia supernovae: (i) The absolute magnitude at maximum depends on the color and light-curve width. (ii) The light-curve width depends on the strength of Si II. Recent studies have suggested adding more variables in order to explain the absolute magnitude. However, our analysis does not support adding any other variables in order to have a better generalization error.
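The variable-selection recipe described here — a LASSO fit with the penalty chosen by cross-validation to control generalization error — can be sketched with a small proximal-gradient implementation. The synthetic data, penalty grid, and fold count below are illustrative assumptions, not the Berkeley supernova setup; note that, as in the study, there are fewer samples than candidate variables.

```python
import numpy as np

def lasso_ista(X, y, lam, iters=500):
    """LASSO by iterative soft-thresholding (proximal gradient):
    minimizes 0.5/n * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the gradient
    for _ in range(iters):
        z = b - X.T @ (X @ b - y) / (n * L)    # gradient step
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return b

def cv_select(X, y, lams, k=2):
    """Choose the penalty with the smallest k-fold cross-validated error,
    using held-out prediction error as a proxy for generalization error."""
    n = X.shape[0]
    folds = np.array_split(np.arange(n), k)
    errs = []
    for lam in lams:
        err = 0.0
        for fold in folds:
            train = np.ones(n, dtype=bool)
            train[fold] = False
            b = lasso_ista(X[train], y[train], lam)
            err += np.sum((y[fold] - X[fold] @ b) ** 2)
        errs.append(err / n)
    return lams[int(np.argmin(errs))]

# Synthetic setup with fewer samples (30) than candidate variables (50):
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 50))
beta = np.zeros(50)
beta[:3] = [3.0, -2.0, 1.5]                    # only three relevant variables
y = X @ beta + 0.1 * rng.standard_normal(30)
lam_best = cv_select(X, y, [0.01, 0.05, 0.1, 0.2])
b_hat = lasso_ista(X, y, lam_best)
```

The L1 penalty zeroes out most coefficients, so the selected variables are simply those with non-negligible entries in `b_hat`.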
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Tri-Sasakian consistent reduction
Cassani, Davide
2011-01-01
We establish a universal consistent Kaluza-Klein truncation of M-theory based on seven-dimensional tri-Sasakian structure. The four-dimensional truncated theory is an N=4 gauged supergravity with three vector multiplets and a non-abelian gauge group, containing the compact factor SO(3). Consistency follows from the fact that our truncation takes exactly the same form as a left-invariant reduction on a specific coset manifold, and we show that the same holds for the various universal consistent truncations recently put forward in the literature. We describe how the global symmetry group SL(2,R) x SO(6,3) is embedded in the symmetry group E7(7) of maximally supersymmetric reductions, and make the connection with the approach of Exceptional Generalized Geometry. Vacuum AdS4 solutions spontaneously break the amount of supersymmetry from N=4 to N=3,1 or 0, and the spectrum contains massive modes. We find a subtruncation to minimal N=3 gauged supergravity as well as an N=1 subtruncation to the SO(3)-invariant secto...
Likelihood Principle and Maximum Likelihood Estimator of Location Parameter for Cauchy Distribution.
1986-05-01
The consistency (or strong consistency) of the maximum likelihood estimator has been studied by many researchers, for example, Wald (1949) and Wolfowitz (1953, 1965). [24] Wald, A. (1949). Note on the consistency of maximum likelihood estimates. Ann. Math. Statist., Vol. 20, 595-601. [25] Wolfowitz, J. (1953). The method of maximum likelihood and Wald theory of decision functions. Indag. Math., Vol. 15, 114-119. [26] … Probability Letters, Vol. 1, No. 3, 197-202.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
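The two-hypothesis comparison at the heart of the MLE tool can be illustrated with a toy Poisson likelihood-ratio computation. The flat background model and photon counts below are invented for illustration and stand in for the PSF-convolved spatial fits Sherpa actually performs.

```python
import math

def poisson_loglike(counts, mu):
    """Poisson log-likelihood of pixel counts given expected counts mu."""
    return sum(c * math.log(mu) - mu - math.lgamma(c + 1) for c in counts)

def source_stat(src_counts, bkg_rate):
    """Likelihood-ratio statistic comparing a background-only hypothesis
    with a background-plus-source hypothesis for a candidate region."""
    ll0 = poisson_loglike(src_counts, bkg_rate)
    # background + source: the MLE of the expected counts is the sample
    # mean, floored at the background rate (a source only adds flux)
    mu1 = max(sum(src_counts) / len(src_counts), bkg_rate)
    ll1 = poisson_loglike(src_counts, mu1)
    return 2.0 * (ll1 - ll0)

# A faint detection: ~5 photons per pixel against an expected background of 1.
stat = source_stat([5, 6, 4, 5], 1.0)
```

A large statistic means the background-plus-source model fits the counts far better than background alone, which is how candidate detections would be ranked.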
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational…
Souto-Iglesias, Antonio; González, Leo M; Cercos-Pita, Jose L
2013-01-01
The consistency of Moving Particle Semi-implicit (MPS) method in reproducing the gradient, divergence and Laplacian differential operators is discussed in the present paper. Its relation to the Smoothed Particle Hydrodynamics (SPH) method is rigorously established. The application of the MPS method to solve the Navier-Stokes equations using a fractional step approach is treated, unveiling inconsistency problems when solving the Poisson equation for the pressure. A new corrected MPS method incorporating boundary terms is proposed. Applications to one dimensional boundary value Dirichlet and mixed Neumann-Dirichlet problems and to two-dimensional free-surface flows are presented.
Measuring process and knowledge consistency
Edwards, Kasper; Jensen, Klaes Ladeby; Haug, Anders
2007-01-01
with a 5-point Likert scale and a corresponding scoring system. Process consistency is measured by using a first-person drawing tool with the respondent in the centre. Respondents sketch the sequence of steps and the people they contact when configuring a product. The methodology is tested in one company…… for granted; rather the contrary, and attempting to implement a configuration system may easily ignite a political battle. This is because stakes are high in the sense that the rules and processes chosen may only reflect one part of the practice, ignoring a majority of the employees. To avoid this situation……
Hapke Ulfert
2011-08-01
Background: Based on the general approach of locus of control, health locus of control (HLOC) concerns control beliefs regarding illness, sickness, and health. HLOC research results provide an improved understanding of health-related behaviour and patients' compliance in medical care. HLOC research distinguishes between beliefs due to Internality, Externality powerful Others (POs), and Externality Chance. However, evidence for differentiating the POs dimension has been found. Previous factor analyses used selected and predominantly clinical samples, while non-clinical studies are rare. The present study is the first analysis of the HLOC structure based on a large representative general population sample, providing important information for non-clinical research and public health care. Methods: The standardised German questionnaire which assesses HLOC was used in a representative adult general population sample for a region in Northern Germany (N = 4,075). Data analyses used ordinal factor analyses in LISREL and Mplus. Alternative theory-driven models with one to four latent variables were compared using confirmatory factor analysis. Fit indices, chi-square difference tests, residuals and factor loadings were considered for model comparison. Exploratory factor analysis was used for further model development. Results were cross-validated by splitting the total sample randomly and using the cross-validation index. Results: A model with four latent variables (Internality, Formal Help, Informal Help and Chance) best represented the HLOC construct (three-dimensional model: normed chi-square = 9.55; RMSEA = 0.066; CFI = 0.931; SRMR = 0.075; four-dimensional model: normed chi-square = 8.65; RMSEA = 0.062; CFI = 0.940; SRMR = 0.071; chi-square difference test: p …). Conclusions: Future non-clinical HLOC studies in western cultures should consider four dimensions of HLOC: Internality, Formal Help, Informal Help and Chance. However, the standardised German instrument…
Cross validation based robust-SL0 algorithm for target parameter extraction
贺亚鹏; 庄珊娜; 张燕洪; 朱晓华
2012-01-01
Utilizing the spatial sparsity of radar targets, a compressive sensing based pseudo-random step frequency radar (CS-PRSFR) is studied. First, the CS-PRSFR target echo is analyzed and the target parameter extraction model is constructed. To address the inapplicability of traditional sparse signal reconstruction algorithms when the noise statistics are unknown, a cross-validation based robust SL0 (CV-RSL0) algorithm for extracting target parameters is proposed. Because of the low coherence of its sensing matrix, the CS-PRSFR can obtain a higher joint range-velocity resolution. The proposed algorithm needs no prior information about the noise statistics, and its target parameter extraction performance rapidly approaches the lower bound of the best estimator as the signal-to-noise ratio improves. Simulation results demonstrate the correctness and efficiency of the method.
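The baseline SL0 algorithm that the proposed CV-RSL0 method builds on can be sketched directly: it maximizes a smoothed-L0 surrogate over the solutions of the measurement equation while gradually sharpening the smoothing. This is the standard noiseless SL0, not the paper's cross-validation-robust variant; the measurement matrix, sparsity level, and parameter values below are illustrative.

```python
import numpy as np

def sl0(A, x, sigma_min=0.01, sigma_decay=0.7, mu=2.0, inner=10):
    """Baseline SL0: seek the sparsest s with A @ s = x by maximizing the
    smoothed-L0 surrogate sum(exp(-s_i^2 / (2 sigma^2))) while gradually
    shrinking sigma, projecting back onto the constraint after each step."""
    Ap = np.linalg.pinv(A)
    s = Ap @ x                          # minimum-energy starting solution
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(inner):
            d = s * np.exp(-s ** 2 / (2.0 * sigma ** 2))
            s = s - mu * d              # ascent step on the smooth surrogate
            s = s - Ap @ (A @ s - x)    # project back onto {s : A s = x}
        sigma *= sigma_decay
    return s

# Toy compressed-sensing problem: 20 random measurements of a 3-sparse signal.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50)) / np.sqrt(20)
s_true = np.zeros(50)
s_true[[4, 17, 33]] = [1.0, -2.0, 1.5]
x = A @ s_true
s_hat = sl0(A, x)
```

In the radar setting the sparse vector would index target range-velocity cells; the CV-RSL0 variant additionally uses held-out measurements to set its stopping rule when the noise level is unknown.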
Maintaining consistency in distributed systems
Birman, Kenneth P.
1991-01-01
In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems - often, within the same application. This leads us to propose an integrated approach that permits applications that use virtual synchrony with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.
Changes in context and perception of maximum reaching height.
Wagman, Jeffrey B; Day, Brian M
2014-01-01
Successfully performing a given behavior requires flexibility in both perception and behavior. In particular, doing so requires perceiving whether that behavior is possible across the variety of contexts in which it might be performed. Three experiments investigated how (changes in) context (i.e., point of observation and intended reaching task) influenced perception of maximum reaching height. The results of experiment 1 showed that perceived maximum reaching height more closely reflected actual reaching ability when perceivers occupied a point of observation that was compatible with that required for the reaching task. The results of experiments 2 and 3 showed that practice perceiving maximum reaching height from a given point of observation improved perception of maximum reaching height from a different point of observation, regardless of whether such practice occurred at a compatible or incompatible point of observation. In general, such findings show bounded flexibility in perception of affordances and are thus consistent with a description of perceptual systems as smart perceptual devices.
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity in a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
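Maximum likelihood fitting of a two-component normal mixture is usually done with the EM algorithm. The sketch below is a generic illustration on synthetic data, not the paper's price series; the component parameters and sample sizes are invented.

```python
import numpy as np

def em_two_normals(x, iters=200):
    """Maximum likelihood fit of a two-component normal mixture via the EM
    algorithm: alternate posterior responsibilities (E-step) with weighted
    ML updates of the weights, means and variances (M-step)."""
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    var = np.array([np.var(x), np.var(x)])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood updates
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Synthetic two-regime data standing in for the two latent classes:
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 1.0, 200)])
w, mu, var = em_two_normals(x)
```

Each EM iteration cannot decrease the likelihood, which is why the procedure converges to a (local) maximum likelihood estimate.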
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
On the maximum backscattering cross section of passive linear arrays
Solymar, L.; Appel-Hansen, Jørgen
1974-01-01
The maximum backscattering cross section of an equispaced linear array connected to a reactive network and consisting of isotropic radiators is calculated for n = 2, 3, and 4 elements as a function of the incident angle and of the distance between the elements. On the basis of the results obtained...
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
Cooper, William S.
1983-01-01
Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…
Bias Correction for Alternating Iterative Maximum Likelihood Estimators
Gang YU; Wei GAO; Ningzhong SHI
2013-01-01
In this paper, we give a definition of the alternating iterative maximum likelihood estimator (AIMLE), which is a biased estimator. Furthermore, we adjust the AIMLE to obtain asymptotically unbiased and consistent estimators by using a bootstrap iterative bias correction method, as in Kuk (1995). Two examples and reported simulation results illustrate the performance of the bias correction for the AIMLE.
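The bootstrap bias-correction idea can be shown on a deliberately simple biased estimator. This is a one-round generic sketch (corrected = estimate minus estimated bias), not the paper's AIMLE-specific iterative scheme; the data and the choice of the variance MLE as the biased estimator are illustrative.

```python
import random

def mle_var(xs):
    """Maximum likelihood variance estimate: biased, since it divides by n."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def bootstrap_bias_corrected(xs, estimator, B=2000, seed=0):
    """One round of bootstrap bias correction: estimate the bias as the
    mean bootstrap estimate minus the original estimate, then subtract it."""
    rng = random.Random(seed)
    theta = estimator(xs)
    boot = [estimator([rng.choice(xs) for _ in xs]) for _ in range(B)]
    bias = sum(boot) / B - theta
    return theta - bias

data = [2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9, 2.2, 3.6]
corrected = bootstrap_bias_corrected(data, mle_var)
```

Because the variance MLE underestimates on average, the bootstrap bias estimate is negative and the corrected value lands close to the unbiased (n−1) estimate; iterating this correction is the essence of the Kuk (1995) approach the abstract cites.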
Decentralized Consistent Updates in SDN
Nguyen, Thanh Dang
2017-04-10
We present ez-Segway, a decentralized mechanism to consistently and quickly update the network state while preventing forwarding anomalies (loops and blackholes) and avoiding link congestion. In our design, the centralized SDN controller only pre-computes information needed by the switches during the update execution. This information is distributed to the switches, which use partial knowledge and direct message passing to efficiently realize the update. This separation of concerns has the key benefit of improving update performance as the communication and computation bottlenecks at the controller are removed. Our evaluations via network emulations and large-scale simulations demonstrate the efficiency of ez-Segway, which compared to a centralized approach, improves network update times by up to 45% and 57% at the median and the 99th percentile, respectively. A deployment of a system prototype in a real OpenFlow switch and an implementation in P4 demonstrate the feasibility and low overhead of implementing simple network update functionality within switches.
The Consistent Vehicle Routing Problem
Groer, Christopher S [ORNL]; Golden, Bruce [University of Maryland]; Wasil, Edward [American University]
2009-01-01
In the small package shipping industry (as in other industries), companies try to differentiate themselves by providing high levels of customer service. This can be accomplished in several ways, including online tracking of packages, ensuring on-time delivery, and offering residential pickups. Some companies want their drivers to develop relationships with customers on a route and have the same drivers visit the same customers at roughly the same time on each day that the customers need service. These service requirements, together with traditional constraints on vehicle capacity and route length, define a variant of the classical capacitated vehicle routing problem, which we call the consistent VRP (ConVRP). In this paper, we formulate the problem as a mixed-integer program and develop an algorithm to solve the ConVRP that is based on the record-to-record travel algorithm. We compare the performance of our algorithm to the optimal mixed-integer program solutions for a set of small problems and then apply our algorithm to five simulated data sets with 1,000 customers and a real-world data set with more than 3,700 customers. We provide a technique for generating ConVRP benchmark problems from vehicle routing problem instances given in the literature and provide our solutions to these instances. The solutions produced by our algorithm on all problems do a very good job of meeting customer service objectives with routes that have a low total travel time.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, from which the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
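The Toeplitz/Levinson machinery the abstract refers to can be sketched with the classic Levinson-Durbin recursion for the error-predicting filter. The autocorrelation sequence below is a toy AR(1) example, not seismogram data; note how each reflection coefficient stays below 1 in magnitude, the stability property the abstract highlights.

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson recursion for the Toeplitz normal equations of the
    error-predicting (autoregressive) filter. r[0..order] are the
    autocorrelation lags; returns prediction coefficients a (a[0] = 1),
    the reflection coefficients, and the final prediction error power."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = float(r[0])
    refl = []
    for m in range(1, order + 1):
        acc = r[m] + sum(a[i] * r[m - i] for i in range(1, m))
        k = -acc / err                  # reflection coefficient, |k| < 1
        refl.append(k)
        prev = a.copy()
        for i in range(1, m + 1):
            a[i] = prev[i] + k * prev[m - i]
        err *= (1.0 - k * k)            # prediction error shrinks each order
    return a, refl, err

# AR(1) toy case: autocorrelation r[k] = 0.5**k, so the true filter is [1, -0.5].
a, refl, err = levinson_durbin([1.0, 0.5, 0.25], 2)
```

Each order of the recursion costs O(m) work, so solving the order-n Toeplitz system costs O(n^2) instead of the O(n^3) of a general solver.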
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power are each plotted as a function of the time of day.
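The differentiation step can be sketched numerically: for a current-voltage curve I(V), maximize P(V) = V·I(V) by locating the zero of dP/dV. The single-diode model and all parameter values below are illustrative assumptions, not data from the article.

```python
import math

def panel_current(v, i_l=5.0, i_0=1e-9, v_t=1.5):
    """Single-diode model of a panel: light-generated current minus the
    diode loss term. All parameter values are illustrative assumptions."""
    return i_l - i_0 * (math.exp(v / v_t) - 1.0)

def max_power_point(v_lo=0.0, v_hi=40.0, tol=1e-6):
    """Locate the voltage where dP/dV = 0 for P(V) = V * I(V) by bisecting
    on the sign of a finite-difference derivative; P is concave here, so
    the zero of the derivative is the unique maximum."""
    def dpdv_sign(v, h=1e-6):
        # un-normalized central difference: only the sign matters
        return (v + h) * panel_current(v + h) - (v - h) * panel_current(v - h)
    lo, hi = v_lo + tol, v_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dpdv_sign(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    v_mp = 0.5 * (lo + hi)
    return v_mp, panel_current(v_mp), v_mp * panel_current(v_mp)

v_mp, i_mp, p_mp = max_power_point()
```

Repeating this for the irradiance at each time of day would give the voltage, current, and power curves the project plots.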
Microcanonical origin of the maximum entropy principle for open systems.
Lee, Julian; Pressé, Steve
2012-10-01
There are two distinct approaches for deriving the canonical ensemble. The canonical ensemble either follows as a special limit of the microcanonical ensemble or alternatively follows from the maximum entropy principle. We show the equivalence of these two approaches by applying the maximum entropy formulation to a closed universe consisting of an open system plus bath. We show that the target function for deriving the canonical distribution emerges as a natural consequence of partial maximization of the entropy over the bath degrees of freedom alone. By extending this mathematical formalism to dynamical paths rather than equilibrium ensembles, the result provides an alternative justification for the principle of path entropy maximization as well.
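The textbook maximum entropy route to the canonical distribution that this argument builds on can be written in a few lines (a standard Lagrange-multiplier sketch with multipliers α and β, not the paper's bath-marginalization or path-entropy extension):

```latex
% Maximize S[p] = -\sum_i p_i \ln p_i subject to normalization and mean energy U
\mathcal{L} \;=\; -\sum_i p_i \ln p_i
\;-\; \alpha\Big(\sum_i p_i - 1\Big)
\;-\; \beta\Big(\sum_i p_i E_i - U\Big),
\qquad
\frac{\partial \mathcal{L}}{\partial p_i} \;=\; -\ln p_i - 1 - \alpha - \beta E_i \;=\; 0
\;\;\Longrightarrow\;\;
p_i \;=\; \frac{e^{-\beta E_i}}{Z},
\qquad Z = \sum_j e^{-\beta E_j}.
```

The paper's contribution is to show that the same target function emerges from partially maximizing the entropy of system-plus-bath over the bath degrees of freedom alone.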
Semiparametric maximum likelihood for nonlinear regression with measurement errors.
Suh, Eun-Young; Schafer, Daniel W
2002-06-01
This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.
A strong test of a maximum entropy model of trait-based community assembly.
Shipley, Bill; Laughlin, Daniel C; Sonnier, Grégory; Otfinowski, Rafael
2011-02-01
We evaluate the predictive power and generality of Shipley's maximum entropy (maxent) model of community assembly in the context of 96 quadrats over a 120-km^2 area having a large (79) species pool and strong gradients. Quadrats were sampled in the herbaceous understory of ponderosa pine forests in the Coconino National Forest, Arizona, U.S.A. The maxent model accurately predicted species relative abundances when observed community-weighted mean trait values were used as model constraints. Although only 53% of the variation in observed relative abundances was associated with a combination of 12 environmental variables, the maxent model based only on the environmental variables provided highly significant predictive ability, accounting for 72% of the variation that was possible given these environmental variables. This predictive ability largely surpassed that of nonmetric multidimensional scaling (NMDS) or detrended correspondence analysis (DCA) ordinations. Using cross-validation with 1000 independent runs, the median correlation between observed and predicted relative abundances was 0.560 (the 2.5% and 97.5% quantiles were 0.045 and 0.825). The qualitative predictions of the model were also noteworthy: dominant species were correctly identified in 53% of the quadrats, 83% of rare species were correctly predicted to have a relative abundance of < 0.05, and the median predicted relative abundance of species actually absent from a quadrat was 5 × 10^-5.
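The core of the maxent model of community assembly can be sketched for the one-constraint case: the maximum-entropy relative abundances subject to a community-weighted mean trait constraint have an exponential form, with a single multiplier tuned to match the observed mean. The trait values and target mean below are hypothetical, and the study itself uses several trait constraints simultaneously.

```python
import math

def maxent_abundances(traits, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum-entropy relative abundances under one community-weighted
    mean trait constraint. The maxent solution has exponential form
    p_i proportional to exp(lam * t_i); lam is found by bisection so the
    predicted mean matches the observed one (the target must lie between
    the minimum and maximum trait values)."""
    def mean_for(lam):
        e = [lam * t for t in traits]
        m = max(e)                          # shift for numerical stability
        w = [math.exp(v - m) for v in e]
        return sum(wi * t for wi, t in zip(w, traits)) / sum(w)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:     # the mean is increasing in lam
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    e = [lam * t for t in traits]
    m = max(e)
    w = [math.exp(v - m) for v in e]
    z = sum(w)
    return [wi / z for wi in w]

# Five hypothetical species with trait values and an observed mean of 12.0:
traits = [8.0, 10.0, 12.0, 15.0, 20.0]
p = maxent_abundances(traits, 12.0)
```

With several trait constraints there is one multiplier per trait and the multipliers are fit jointly, but the exponential form of the solution is the same.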
Jat, Prahlad; Serre, Marc L
2016-12-01
Widespread contamination of surface water chloride is an emerging environmental concern. Consequently accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R^2 by 23.67% over Euclidean BME, and river BME maps are significantly different than Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment.
Using maximum entropy model to predict protein secondary structure with single sequence.
Ding, Yong-Sheng; Zhang, Tong-Liang; Gu, Quan; Zhao, Pei-Ying; Chou, Kuo-Chen
2009-01-01
Prediction of protein secondary structure is somewhat reminiscent of the efforts of many previous investigators, yet still worthy of revisiting owing to its importance in protein science. Several studies indicate that knowledge of protein structural classes can provide useful information towards the determination of protein secondary structure. In particular, the performance of recently developed prediction algorithms has improved rapidly by incorporating homologous multiple sequence alignment information. Unfortunately, this kind of information is not available for a significant number of proteins. In view of this, it is necessary to develop methods based on the query protein sequence alone, the so-called single-sequence methods. Here, we propose a novel single-sequence approach which is featured by taking various kinds of contextual information into account and by using a maximum entropy model classifier as the prediction engine. As a demonstration, cross-validation tests have been performed by the new method on datasets containing proteins from different structural classes, and the results thus obtained are quite promising, indicating that the new method may become a useful tool in protein science, or at least play a complementary role to existing protein secondary structure prediction methods.
Kobayashi, A; Yoneda, T; Yoshikawa, M; Ikuno, M; Takenaka, H; Fukuoka, A; Narita, N; Nezu, K
2000-01-01
To assess the factors determining maximum exercise performance in patients with chronic obstructive pulmonary disease (COPD), we examined nutritional status, with special reference to body composition, and pulmonary function in 50 stable COPD patients. Nutritional status was evaluated by body weight and body composition, including fat mass (FM) and fat-free mass (FFM) assessed by bioelectrical impedance analysis (BIA). Exercise performance was evaluated by maximum oxygen uptake (Vo(2max)) on a cycle ergometer. The 50 patients (FEV(1) = 0.98 L) were divided randomly into either a study group (group A, n = 25) or a validation group (group B, n = 25). Stepwise regression analysis was performed in group A to determine the best predictors of Vo(2max) from measurements of pulmonary function and nutritional status. Stepwise regression analysis revealed that Vo(2max) was predicted best by the following equation in group A: Vo(2max) (mL/min) = 10.223 x FFM (kg) + 4.188 x MVV (L/min) + 9.952 x DL(co) (mL/min/mmHg) - 127.9 (r = 0.84). This equation was then cross-validated in group B: Measured Vo(2max) (mL/min) = 1.554 x Predicted Vo(2max) (mL/min) - 324.0 (r = 0.87, p < 0.001). We conclude that FFM is an important factor in determining maximum exercise performance, along with pulmonary function parameters, in patients with COPD.
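The group-A regression above is simple enough to sketch directly. The coefficients are taken from the abstract; the input values below are illustrative, not from the study:

```python
def predict_vo2max(ffm_kg, mvv_l_min, dlco):
    """Predicted maximum oxygen uptake (mL/min) from the group-A
    stepwise regression: FFM (kg), MVV (L/min), DLco (mL/min/mmHg)."""
    return 10.223 * ffm_kg + 4.188 * mvv_l_min + 9.952 * dlco - 127.9

# Illustrative inputs (not patient data from the study):
print(round(predict_vo2max(45.0, 40.0, 15.0)))  # -> 649
```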
A tropospheric ozone maximum over the equatorial Southern Indian Ocean
L. Zhang
2012-05-01
We examine the distribution of tropical tropospheric ozone (O_{3}) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O_{3} during 2005 to 2009 reveal a distinct, persistent O_{3} maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O_{3} observations from the Ozone Monitoring Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O_{3} maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamical factors. The O_{3} maximum is dominated by O_{3} production driven by lightning nitrogen oxides (NO_{x}) emissions, which account for 62% of the tropospheric column O_{3} in May 2006. We find that the contributions from biomass burning, soil, anthropogenic, and biogenic sources to the O_{3} maximum are rather small. O_{3} production in the lightning outflow from Central Africa and South America both peaks in May, and these outflows are directly responsible for the O_{3} maximum over the western ESIO. The lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O_{3} maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones and then transported northward to the ESIO.
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on the same set of taxa, the maximum agreement subtree problem (MAST), respectively the maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics, such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees of Life. We provide two linear-time algorithms to check the isomorphism, respectively compatibility, of a set of trees, or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the total length of CC needed in the design of an SFCL can be determined.
Lund-Hansen, Lars Chresten; Nielsen, Morten Holtegaard; Bruhn, Annette
2008-01-01
LB where Secchi depth reaches a minimum. Stratification showed a clear minimum in central LB where extended mixing prevails whereas strong stratification occurred in northern and southern LB. It is shown that mixed conditions in central LB were related to hydraulic control and super-critical flow...... conditions, as current derived energy for the mixing by comparison was too low. Nutrient (NO2 + NO3) concentrations remained high (~ 5 μM) in the bottom layer following the spring bloom. It is shown that there is a more or less continuous inflow of nutrient rich bottom water into central LB, which through...
Quality, precision and accuracy of the maximum No. 40 anemometer
Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)
1996-12-31
This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Flood damage curves for consistent global risk assessments
de Moel, Hans; Huizinga, Jan; Szewczyk, Wojtek
2016-04-01
Assessing the potential damage of flood events is an important component of flood risk management. Determining direct flood damage is commonly done using depth-damage curves, which denote the flood damage that would occur at specific water depths per asset or land-use class. Many countries around the world have developed flood damage models using such curves, based on analysis of past flood events and/or on expert judgement. However, such damage curves are not available for all regions, which hampers damage assessments in those regions. Moreover, due to the different methodologies employed for the various damage models in different countries, damage assessments cannot be directly compared with each other, obstructing supra-national flood damage assessments. To address these problems, a globally consistent dataset of depth-damage curves has been developed. This dataset contains damage curves depicting percent of damage as a function of water depth, as well as maximum damage values, for a variety of assets and land-use classes (i.e. residential, commercial, agriculture). Based on an extensive literature survey, concave damage curves have been developed for each continent, while differentiation in flood damage between countries is established by determining maximum damage values at the country scale. These maximum damage values are based on construction cost surveys from multinational construction companies, which provide a coherent set of detailed building cost data across dozens of countries. A consistent set of maximum flood damage values for all countries was computed using statistical regressions with socio-economic World Development Indicators from the World Bank. Further, based on insights from the literature survey, guidance is also given on how the damage curves and maximum damage values can be adjusted for specific local circumstances, such as urban vs. rural locations, use of specific building material, etc. This dataset can be used for consistent supra-national flood damage assessments.
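A depth-damage curve of the kind described can be sketched as a piecewise-linear interpolation scaled by a country-level maximum damage value. The curve points and the maximum damage below are hypothetical, not taken from the dataset:

```python
import bisect

def flood_damage(depth_m, curve, max_damage):
    """Direct damage for one asset: interpolate the depth-damage curve
    (damage fraction as a function of water depth) and scale by the
    country-level maximum damage value."""
    depths = [d for d, _ in curve]
    fracs = [f for _, f in curve]
    if depth_m <= depths[0]:
        return fracs[0] * max_damage
    if depth_m >= depths[-1]:
        return fracs[-1] * max_damage
    i = bisect.bisect_right(depths, depth_m)
    d0, d1 = depths[i - 1], depths[i]
    f0, f1 = fracs[i - 1], fracs[i]
    frac = f0 + (f1 - f0) * (depth_m - d0) / (d1 - d0)
    return frac * max_damage

# Hypothetical concave residential curve: (depth in m, damage fraction)
curve = [(0.0, 0.0), (0.5, 0.35), (1.0, 0.55), (2.0, 0.75), (6.0, 1.0)]
print(flood_damage(1.5, curve, max_damage=100_000))  # ≈ 65000.0
```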
Prediction of three dimensional maximum isometric neck strength.
Fice, Jason B; Siegmund, Gunter P; Blouin, Jean-Sébastien
2014-09-01
We measured maximum isometric neck strength under combinations of flexion/extension, lateral bending and axial rotation to determine whether neck strength in three dimensions (3D) can be predicted from principal axes strength. This would allow biomechanical modelers to validate their neck models across many directions using only principal axis strength data. Maximum isometric neck moments were measured in 9 male volunteers (29±9 years) for 17 directions. The 3D moments were normalized by the principal axis moments, and compared to unity for all directions tested. Finally, each subject's maximum principal axis moments were used to predict their resultant moment in the off-axis directions. Maximum moments were 30±6 N m in flexion, 32±9 N m in lateral bending, 51±11 N m in extension, and 13±5 N m in axial rotation. The normalized 3D moments were not significantly different from unity (95% confidence interval contained one), except for three directions that combined ipsilateral axial rotation and lateral bending; in these directions the normalized moments exceeded one. Predicted resultant moments compared well to the actual measured values (r2=0.88). Despite exceeding unity, the normalized moments were consistent across subjects to allow prediction of maximum 3D neck strength using principal axes neck strength.
Transformer Fault Diagnosis Based on C-SVC and Cross-validation Algorithm
张艳; 吴玲
2012-01-01
A novel method for power transformer fault diagnosis based on C-SVC (support vector classification with an optimized penalty parameter C) combined with a cross-validation algorithm is presented, which can monitor and detect latent transformer faults in a timely and accurate manner. The training and testing sets are built from data on the dissolved gases produced by transformer faults, including hydrogen, methane, ethane, ethylene and acetylene. On the training set, the method automatically optimizes the kernel function parameter γ and the penalty parameter C of the support vector machine; training with the optimized parameters yields the best support vector machine model, which is then used to classify the testing set and thereby diagnose the transformer fault type. Analysis of practical transformer fault diagnosis examples shows that the method is feasible and effective, with high fault diagnosis accuracy.
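The grid-search-plus-cross-validation step can be sketched with scikit-learn, an assumed toolchain since the paper does not specify an implementation; the gas-concentration features and fault labels below are synthetic stand-ins:

```python
# Sketch: select the RBF kernel parameter gamma and penalty C by
# cross-validated grid search, then use the best model for classification.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for dissolved-gas features (H2, CH4, C2H6, C2H4, C2H2)
X = rng.random((120, 5))
y = (X[:, 0] + X[:, 4] > 1.0).astype(int)  # hypothetical fault label

grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
    cv=5,  # 5-fold cross-validation chooses C and gamma
)
grid.fit(X, y)
print(grid.best_params_)  # the (C, gamma) pair with best CV accuracy
```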
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block-fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), whereas the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed-form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally takes into account the maximum-entropy requirement, the characteristics of the system, and the constraint conditions, and it can be applied to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
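As a minimal illustration of this kind of constrained entropy maximization: on a finite set of states with a prescribed mean, the maximum-entropy distribution has the Gibbs form p_i ∝ exp(-λ x_i), with λ fixed by the constraint. The six-state example and target mean below are illustrative, not from the paper:

```python
# Solve for the Lagrange multiplier lam so that the Gibbs-form
# distribution p_i ∝ exp(-lam * x_i) matches the constrained mean.
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)       # states of a six-sided die
target_mean = 4.5         # the constraint ("connection condition")

def mean_of(lam):
    w = np.exp(-lam * x)
    return (w * x).sum() / w.sum()

lam = brentq(lambda l: mean_of(l) - target_mean, -5.0, 5.0)
p = np.exp(-lam * x)
p /= p.sum()
print(p.round(3), round(float((p * x).sum()), 3))  # mean ≈ 4.5
```

The uniform distribution (λ = 0) is recovered when the target mean equals 3.5, the unconstrained entropy maximizer.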
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood-sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation (CPM) over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends whose structure depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
Plant functional traits have globally consistent effects on competition.
Kunstler, Georges; Falster, Daniel; Coomes, David A; Hui, Francis; Kooyman, Robert M; Laughlin, Daniel C; Poorter, Lourens; Vanderwel, Mark; Vieilledent, Ghislain; Wright, S Joseph; Aiba, Masahiro; Baraloto, Christopher; Caspersen, John; Cornelissen, J Hans C; Gourlet-Fleury, Sylvie; Hanewinkel, Marc; Herault, Bruno; Kattge, Jens; Kurokawa, Hiroko; Onoda, Yusuke; Peñuelas, Josep; Poorter, Hendrik; Uriarte, Maria; Richardson, Sarah; Ruiz-Benito, Paloma; Sun, I-Fang; Ståhl, Göran; Swenson, Nathan G; Thompson, Jill; Westerlund, Bertil; Wirth, Christian; Zavala, Miguel A; Zeng, Hongcheng; Zimmerman, Jess K; Zimmermann, Niklaus E; Westoby, Mark
2016-01-14
Phenotypic traits and their associated trade-offs have been shown to have globally consistent effects on individual plant physiological functions, but how these effects scale up to influence competition, a key driver of community assembly in terrestrial vegetation, has remained unclear. Here we use growth data from more than 3 million trees in over 140,000 plots across the world to show how three key functional traits--wood density, specific leaf area and maximum height--consistently influence competitive interactions. Fast maximum growth of a species was correlated negatively with its wood density in all biomes, and positively with its specific leaf area in most biomes. Low wood density was also correlated with a low ability to tolerate competition and a low competitive effect on neighbours, while high specific leaf area was correlated with a low competitive effect. Thus, traits generate trade-offs between performance with competition versus performance without competition, a fundamental ingredient in the classical hypothesis that the coexistence of plant species is enabled via differentiation in their successional strategies. Competition within species was stronger than between species, but an increase in trait dissimilarity between species had little influence in weakening competition. No benefit of dissimilarity was detected for specific leaf area or wood density, and only a weak benefit for maximum height. Our trait-based approach to modelling competition makes generalization possible across the forest ecosystems of the world and their highly diverse species composition.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding could help to decrease the impacts of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over one in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
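The throughput computation via linear programming can be illustrated with a toy single-commodity flow LP; the coding-aware conflict constraints of the paper are omitted, and the graph and capacities below are made up:

```python
# Toy LP: maximize s-t flow on a small directed graph with
# scipy.optimize.linprog (flow conservation + capacity bounds).
import numpy as np
from scipy.optimize import linprog

# Edges: (u, v, capacity) on nodes {0: s, 1, 2, 3: t}
edges = [(0, 1, 3.0), (0, 2, 2.0), (1, 3, 2.0), (2, 3, 3.0), (1, 2, 1.0)]
n_edges = len(edges)

# Flow conservation at interior nodes 1 and 2: inflow - outflow = 0
A_eq = np.zeros((2, n_edges))
for j, (u, v, _) in enumerate(edges):
    for row, node in enumerate((1, 2)):
        if v == node:
            A_eq[row, j] += 1.0
        if u == node:
            A_eq[row, j] -= 1.0
b_eq = np.zeros(2)

# Maximize total flow out of the source (linprog minimizes, so negate)
c = np.array([-1.0 if u == 0 else 0.0 for u, v, _ in edges])
bounds = [(0.0, cap) for _, _, cap in edges]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(-res.fun)  # maximum s-t throughput -> 5.0 for this graph
```

Extending such an LP with extra conflict constraints between interfering links is, in spirit, how interference and coding opportunities enter the MMF formulation.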
Consistent Design of Dependable Control Systems
Blanke, M.
1996-01-01
Design of fault handling in control systems is discussed, and a method for consistent design is presented.
Maximum-entropy closure of hydrodynamic moment hierarchies including correlations.
Hughes, Keith H; Burghardt, Irene
2012-06-07
Generalized hydrodynamic moment hierarchies are derived which explicitly include nonequilibrium two-particle and higher-order correlations. The approach is adapted to strongly correlated media and nonequilibrium processes on short time scales which necessitate an explicit treatment of time-evolving correlations. Closure conditions for the extended moment hierarchies are formulated by a maximum-entropy approach, generalizing related closure procedures for kinetic equations. A self-consistent set of nonperturbative dynamical equations are thus obtained for a chosen set of single-particle and two-particle (and possibly higher-order) moments. Analytical results are derived for generalized Gaussian closures including the dynamic pair distribution function and a two-particle correction to the current density. The maximum-entropy closure conditions are found to involve the Kirkwood superposition approximation.
Collective behaviours in the stock market -- A maximum entropy approach
Bury, Thomas
2014-01-01
Scale invariance, collective behaviours and structural reorganization are crucial for portfolio management (portfolio composition, hedging, alternative definitions of risk, etc.). This lack of any characteristic scale and such elaborate behaviours find their origin in the theory of complex systems. There are several mechanisms which generate scale invariance, but maximum entropy models are able to explain both scale invariance and collective behaviours. The study of the structure and collective modes of financial markets attracts more and more attention. It has been shown that some agent-based models are able to reproduce some stylized facts. Despite their partial success, there is still the problem of rule design. In this work, we used a statistical inverse approach to model the structure and co-movements in financial markets. Inverse models restrict the number of assumptions. We found that a pairwise maximum entropy model is consistent with the data and is able to describe the complex structure of financial markets.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced strain hardening and tensile ductility.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Staal J Bart
2008-01-01
Abstract Background: To investigate the factor structure, dimensionality and construct validity of the (5-item) PRAFAB questionnaire score in women with stress urinary incontinence (stress UI). Methods: A cross-validation study design was used in a cohort of 279 patients who were randomly divided into sample A or sample B. Sample A was used for preliminary exploratory factor analyses with promax rotation. Sample B provided an independent sample for confirming the proposed factor structure and item retention. Internal consistency and item-total and subscale correlations were determined to assess the dimensionality. Construct validity was assessed by comparing factor-based scale means by clinical characteristics, based on known relationships. Results: Factor analyses resulted in a two-factor structure, or subscales: items related to 'leakage severity' (protection, amount and frequency) and items related to its 'perceived symptom impact', i.e. the consequences of stress UI for the patient's life (adjustment) and body (or self) image. The patterns of the factor loadings were nearly identical for both study samples. The two constructed subscales demonstrated adequate internal consistency, with Cronbach's alphas of 0.78 and 0.84, respectively. Scale scores differed by clinical characteristics according to expectations, supporting the construct validity of the scales. Conclusion: The findings suggest a two-factor structure of the PRAFAB questionnaire, measuring the stress UI leakage severity items and the perceived symptom impact items, and confirm the internal consistency and construct validity demonstrated in our previous study. Future research will be necessary to replicate these findings in different settings, in other types of UI, and in non-white women and in men.
PM2.5 data reliability, consistency, and air quality assessment in five Chinese cities
Liang, Xuan; Li, Shuo; Zhang, Shuyi; Huang, Hui; Chen, Song Xi
2016-09-01
We investigate particulate matter (PM2.5) data reliability in five major Chinese cities: Beijing, Shanghai, Guangzhou, Chengdu, and Shenyang by cross-validating data from the U.S. diplomatic posts and the nearby Ministry of Environmental Protection sites based on 3 years' data from January 2013. The investigation focuses on the consistency in air quality assessment derived from the two data sources. It consists of studying (i) the occurrence length and percentage of different PM2.5 concentration ranges; (ii) the air quality assessment for each city; and (iii) the winter-heating effects in Beijing and Shenyang. Our analysis indicates that the two data sources produced highly consistent air quality assessments in the five cities. This is encouraging, as it injects much-needed confidence in the air pollution measurements from China. We also provide air quality assessments on the severity and trends of the fine particulate matter pollution in the five cities. The assessments are produced by statistically constructing the standard monthly meteorological conditions for each city, which are designed to minimize the effects of confounding factors due to yearly variations of some important meteorological variables. Our studies show that Beijing and Chengdu had the worst air quality, while Guangzhou and Shanghai fared the best among the five cities. Most of the five cities saw their PM2.5 concentrations decrease significantly in the last 2 years. By linking the air quality with the amount of energy consumed, our study suggests that the geographical configuration is a significant factor in a city's air quality management and economic development.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Dispersion sensitivity analysis & consistency improvement of APFSDS
Sangeeta Sharma Panda
2017-08-01
In-bore balloting motion simulation shows that a reduction in residual spin of about 5% results in a drastic 56% reduction in first maximum yaw. A correlation between first maximum yaw and residual spin is observed. Results of the data analysis are used in design modification for existing ammunition. A number of designs are evaluated numerically before five designs are frozen for further soundings. These designs are critically assessed in terms of their comparative performance during the in-bore travel and external ballistics phases. Results are validated by free flight trials for the finalised design.
Dendritic tree extraction from noisy maximum intensity projection images in C. elegans.
Greenblum, Ayala; Sznitman, Raphael; Fua, Pascal; Arratia, Paulo E; Oren, Meital; Podbilewicz, Benjamin; Sznitman, Josué
2014-06-12
Maximum Intensity Projections (MIP) of neuronal dendritic trees obtained from confocal microscopy are frequently used to study the relationship between tree morphology and mechanosensory function in the model organism C. elegans. Extracting dendritic trees from noisy images remains, however, a strenuous process that has traditionally relied on manual approaches. Here, we focus on automated and reliable 2D segmentations of dendritic trees following a statistical learning framework. Our dendritic tree extraction (DTE) method uses small amounts of labelled training data on MIPs to learn noise models of texture-based features from the responses of tree structures and image background. Our strategy lies in evaluating statistical models of noise that account for both the variability generated from the imaging process and from the aggregation of information in the MIP images. These noise models are then used within a probabilistic (Bayesian) framework to provide a coarse 2D dendritic tree segmentation. Finally, some post-processing is applied to refine the segmentations and provide skeletonized trees using a morphological thinning process. Following a Leave-One-Out Cross Validation (LOOCV) method for an MIP database with available "ground truth" images, we demonstrate that our approach provides significant improvements in tree-structure segmentations over traditional intensity-based methods. Improvements for MIPs under various imaging conditions are both qualitative and quantitative, as measured from Receiver Operator Characteristic (ROC) curves and the yield and error rates in the final segmentations. In a final step, we demonstrate our DTE approach on previously unseen MIP samples, including the extraction of skeletonized structures, and compare our method to a state-of-the-art dendritic tree tracing software. Overall, our DTE method allows for robust dendritic tree segmentations in noisy MIPs, outperforming traditional intensity-based methods. Such an approach provides a
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered, and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. A detailed discussion on circuit-oriented model development is given, and the MPPT effectiveness of various converter systems is then verified through simulations. The proposed theory and analysis are validated through experimental investigations.
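The non-linear i-v characteristic and its single power maximum can be illustrated with a simplified single-diode PV model; the parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical PV module parameters (illustrative only)
ISC = 5.0   # short-circuit current [A]
I0 = 1e-9   # diode saturation current [A]
VT = 1.0    # thermal voltage scaled for a ~36-cell module [V]

def pv_current(v):
    """Simplified single-diode model (no series/shunt resistance)."""
    return ISC - I0 * (np.exp(v / VT) - 1.0)

# Sweep voltage from short circuit to open circuit and locate the MPP.
v = np.linspace(0.0, VT * np.log(ISC / I0 + 1.0), 10_000)
p = v * pv_current(v)
v_mpp = v[np.argmax(p)]
print(f"V_oc ≈ {v[-1]:.2f} V, V_mpp ≈ {v_mpp:.2f} V, P_mpp ≈ {p.max():.2f} W")
```

In practice a converter's duty ratio is adjusted so the array operates at this voltage; the paper's contribution is analyzing which converter topologies and load ranges make that possible.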
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs that attain these bounds are constructed.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\\cal{O}}(300\\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S3 universe at the final stage S_rad becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which the Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
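As a hedged order-of-magnitude check of this relation (using standard textbook values, not numbers from the paper), plugging T_BBN ≈ 1 MeV, M_pl ≈ 1.2 × 10^19 GeV, and the electron Yukawa y_e = √2 m_e / v_h into the formula indeed lands near the weak scale:

```python
import math

# All quantities in GeV; the inputs are standard estimates, not from the paper.
T_BBN = 1e-3                              # ~1 MeV, start of nucleosynthesis
M_pl = 1.22e19                            # Planck mass
y_e = math.sqrt(2) * 0.511e-3 / 246.0     # electron Yukawa from m_e and v_h

v_h_est = T_BBN**2 / (M_pl * y_e**5)
print(f"y_e ≈ {y_e:.2e}, v_h ≈ {v_h_est:.0f} GeV")  # order of 300 GeV
```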
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of at most 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
Maximum Phonation Time: Variability and Reliability
R. Speyer; H.C.A. Bogaardt; V.L. Passos; N.P.H.D. Roodenburg; A. Zumach; M.A.M. Heijnen; L.W.J. Baijens; S.J.H.M. Fleskens; J.W. Brunings
2010-01-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia v
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi-Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
Mean square convergence rates for maximum quasi-likelihood estimator
Arnoud V. den Boer
2015-03-01
In this note we study the behavior of maximum quasi-likelihood estimators (MQLEs) for a class of statistical models in which only knowledge about the first two moments of the response variable is assumed. This class includes, but is not restricted to, generalized linear models with general link function. Our main results are related to guarantees on existence, strong consistency and mean square convergence rates of MQLEs. The rates are obtained from first principles and are stronger than known a.s. rates. Our results find important application in sequential decision problems with parametric uncertainty arising in dynamic pricing.
A Maximum Entropy Modelling of the Rain Drop Size Distribution
Francisco J. Tapiador
2011-01-01
This paper presents a maximum entropy approach to Rain Drop Size Distribution (RDSD) modelling. It is shown that this approach allows (1) to use a physically consistent rationale to select a particular probability density function (pdf), (2) to provide an alternative method for parameter estimation based on expectations of the population instead of sample moments, and (3) to develop a progressive method of modelling by updating the pdf as new empirical information becomes available. The method is illustrated with both synthetic and real RDSD data, the latter coming from a laser disdrometer network specifically designed to measure the spatial variability of the RDSD.
MDCC: Multi-Data Center Consistency
Kraska, Tim; Franklin, Michael J; Madden, Samuel
2012-01-01
Replicating data across multiple data centers not only allows moving the data closer to the user and, thus, reduces latency for applications, but also increases the availability in the event of a data center failure. Therefore, it is not surprising that companies like Google, Yahoo, and Netflix already replicate user data across geographically different regions. However, replication across data centers is expensive. Inter-data center network delays are in the hundreds of milliseconds and vary significantly. Synchronous wide-area replication is therefore considered to be unfeasible with strong consistency, and current solutions either settle for asynchronous replication, which implies the risk of losing data in the event of failures, restrict consistency to small partitions, or give up consistency entirely. With MDCC (Multi-Data Center Consistency), we describe the first optimistic commit protocol that does not require a master or partitioning, and is strongly consistent at a cost similar to eventually consiste...
A dual-consistency cache coherence protocol
Ros, Alberto; Jimborean, Alexandra
2015-01-01
Weak memory consistency models can maximize system performance by enabling hardware and compiler optimizations, but increase programming complexity since they do not match programmers’ intuition. The design of an efficient system with an intuitive memory model is an open challenge. This paper proposes SPEL, a dual-consistency cache coherence protocol which simultaneously guarantees the strongest memory consistency model provided by the hardware and yields improvements in both performance and ...
A new approach to hull consistency
Kolev Lubomir
2016-06-01
Hull consistency is a known technique to improve the efficiency of iterative interval methods for solving nonlinear systems describing steady states in various circuits. Presently, hull consistency is checked in a scalar manner, i.e. successively for each equation of the nonlinear system with respect to a single variable. In the present poster, a new, more general approach to implementing hull consistency is suggested, which consists in treating simultaneously several equations with respect to the same number of variables.
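A minimal sketch of the scalar approach described above, narrowing one variable's interval through one equation (the equation x + y = 5 and the domains are invented for illustration):

```python
def narrow_x(rhs, y_lo, y_hi, x_lo, x_hi):
    """One scalar hull-consistency step for x + y = rhs:
    solve x = rhs - y in interval arithmetic, then intersect
    with the current domain of x."""
    lo, hi = rhs - y_hi, rhs - y_lo
    return max(lo, x_lo), min(hi, x_hi)

# x in [0, 10], y in [1, 2], x + y = 5  =>  x narrowed to [3, 4]
x_new = narrow_x(5.0, 1.0, 2.0, 0.0, 10.0)
print(x_new)  # (3.0, 4.0)
```

The poster's proposal is to perform such narrowing for several equations and variables simultaneously rather than one equation/variable pair at a time.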
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyze the algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
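For reference, a sketch of classical First-Fit-Decreasing, the baseline algorithm whose maximum resource variants the paper studies (the item sizes below are made up; this is the minimizing version, not the paper's maximizing variant):

```python
def first_fit_decreasing(items, capacity=1.0):
    """Classical First-Fit-Decreasing: place each item, largest first,
    into the first bin with enough residual capacity; open a new bin
    only when no existing bin fits."""
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

bins = first_fit_decreasing([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1, 0.6])
print(len(bins), bins)  # 4 bins
```

The maximum resource variants invert the objective, which is why greedy rules that are good here (few bins) become the adversarial cases there.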
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
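The single-constraint construction can be checked numerically: among distributions on {1, ..., N} with a fixed mean of ln k, the power law maximizes Shannon entropy. The sketch below (exponents and support size chosen arbitrarily) compares the power law against a mixture distribution engineered to have the same ⟨ln k⟩:

```python
import math

N = 1000
ks = range(1, N + 1)

def powerlaw(alpha):
    """Normalized power law p_k ∝ k^(-alpha) on {1..N}."""
    w = [k ** -alpha for k in ks]
    z = sum(w)
    return [x / z for x in w]

def mean_log(p):
    return sum(pi * math.log(k) for k, pi in zip(ks, p))

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p_maxent = powerlaw(1.5)
target = mean_log(p_maxent)  # the single constraint: <ln k>

# Competitor with the same <ln k>: mix two other power laws and tune the
# weight so the mixture's mean log matches the target exactly.
pa, pb = powerlaw(1.2), powerlaw(2.0)
ma, mb = mean_log(pa), mean_log(pb)
w = (target - mb) / (ma - mb)
p_mix = [w * a + (1 - w) * b for a, b in zip(pa, pb)]

print(entropy(p_maxent), entropy(p_mix))  # power law has the larger entropy
```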
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios and taking into account irradiation effects on the structure of the gas envelope.
Martial arts striking hand peak acceleration, accuracy and consistency.
Neto, Osmar Pinto; Marzullo, Ana Carolina De Miranda; Bolander, Richard P; Bir, Cynthia A
2013-01-01
The goal of this paper was to investigate the possible trade-off between peak hand acceleration and the accuracy and consistency of hand strikes performed by martial artists of different training experiences. Ten male martial artists with training experience ranging from one to nine years volunteered to participate in the experiment. Each participant performed 12 maximum effort goal-directed strikes. Hand acceleration during the strikes was obtained using a tri-axial accelerometer block. A pressure sensor matrix was used to determine the accuracy and consistency of the strikes. Accuracy was estimated by the radial distance between the centroid of each subject's 12 strikes and the target, whereas consistency was estimated by the square root of the 12 strikes' mean squared distance from their centroid. We found that training experience was significantly correlated to hand peak acceleration prior to impact (r(2)=0.456, p=0.032) and accuracy (r(2)=0.621, p=0.012). These correlations suggest that more experienced participants exhibited higher hand peak accelerations and at the same time were more accurate. Training experience, however, was not correlated to consistency (r(2)=0.085, p=0.413). Overall, our results suggest that martial arts training may lead practitioners to achieve higher striking hand accelerations with better accuracy and no change in striking consistency.
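The two estimators described above, accuracy as the centroid-to-target distance and consistency as the RMS spread about the centroid, are straightforward to compute; a sketch with invented strike coordinates:

```python
import math

# Hypothetical 2D impact coordinates of four strikes and the target point.
strikes = [(1.0, 0.5), (0.8, 0.7), (1.2, 0.4), (0.9, 0.6)]
target = (1.0, 0.5)

# Centroid of the strikes.
cx = sum(x for x, _ in strikes) / len(strikes)
cy = sum(y for _, y in strikes) / len(strikes)

# Accuracy: radial distance from the centroid to the target.
accuracy = math.hypot(cx - target[0], cy - target[1])

# Consistency: RMS distance of the strikes from their centroid.
consistency = math.sqrt(
    sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in strikes) / len(strikes)
)
print(f"accuracy={accuracy:.3f}, consistency={consistency:.3f}")
```

Note that a tight cluster far from the target scores well on consistency but poorly on accuracy, which is why the two measures can decouple as reported.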
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
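The definition above translates directly into a few lines of NumPy; the 4-cycle used as an example is just a sanity check, not a graph from the paper:

```python
import numpy as np

def estrada_index(adj: np.ndarray) -> float:
    """EE(G) = sum_i exp(lambda_i), the lambda_i being the adjacency
    eigenvalues; equivalently the trace of the matrix exponential."""
    eigenvalues = np.linalg.eigvalsh(adj)  # adjacency matrices are symmetric
    return float(np.sum(np.exp(eigenvalues)))

# Example: the 4-cycle C4, whose adjacency eigenvalues are 2, 0, 0, -2,
# so EE(C4) = e^2 + 2 + e^-2.
c4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
print(estrada_index(c4))  # e^2 + 2e^0 + e^-2 ≈ 9.52
```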
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
On the maximum mass of magnetised white dwarfs
Chatterjee, D; Chamel, N; Novak, J; Oertel, M
2016-01-01
We develop a detailed and self-consistent numerical model for extremely magnetised white dwarfs, which have been proposed as progenitors of overluminous Type Ia supernovae. This model can describe fully-consistent equilibria of magnetic stars in axial symmetry, with rotation, general-relativistic effects and realistic equations of state (including electron-ion interactions and taking into account Landau quantisation of electrons due to the magnetic field). We study the influence of each of these ingredients on the white dwarf structure and, in particular, on their maximum mass. We perform an extensive stability analysis of such objects, with their highest surface magnetic fields reaching $\sim 10^{13}$ G (at which point the star adopts a torus-like shape). We confirm previous speculations that although very massive strongly magnetised white dwarfs could potentially exist, the onset of electron captures and pycnonuclear reactions may severely limit their stability. Finally, the emission of gravitational wave...
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half as long (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach[gr-qc/9504004], Cai et al [hep-th/0501055,hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend Akbar--Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulted Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density closed to Planck density. Allowing for a general continuous pressure $p(\\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point which leaves the big bang singularity inaccessible from a spacetime prospective. The existence of maximum energy density and a general nonsingular evolution is independent of the equation of state and the spacial curvature $k$. As an example w...
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced ($Y_{X/P}$) and the MIC ($C$) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: $X_{max} - X_0 = (0.59 \pm 0.02)\cdot Y_{X/P}\cdot C$.
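As a minimal sketch, the reported prediction equation can be applied directly. All numeric inputs below are illustrative placeholders, not values from the study:

```python
# Sketch of the reported prediction equation
# X_max - X_0 = (0.59 ± 0.02) · Y_X/P · C, where Y_X/P is the biomass
# yield per unit of lactate produced and C is the MIC of lactate at
# pH 7.0. The inputs below are hypothetical, chosen only to show usage.

def predict_max_biomass(x0: float, yield_per_lactate: float,
                        mic_lactate: float, k: float = 0.59) -> float:
    """Predicted maximum biomass concentration X_max."""
    return x0 + k * yield_per_lactate * mic_lactate

x_max = predict_max_biomass(x0=0.1, yield_per_lactate=0.25, mic_lactate=40.0)
print(f"predicted X_max = {x_max:.2f} g/L")
```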
Consistent estimators in random censorship semiparametric models
王启华
1996-01-01
For the fixed design regression model, when the responses $Y_i$ are randomly censored on the right, estimators of the unknown parameter and the regression function $g$ from censored observations are defined in two cases, where the censoring distribution is known and unknown, respectively. Moreover, sufficient conditions are established under which these estimators are strongly consistent and $p$th ($p>2$) mean consistent.
Student Effort, Consistency, and Online Performance
Patron, Hilde; Lopez, Salvador
2011-01-01
This paper examines how student effort, consistency, motivation, and marginal learning, influence student grades in an online course. We use data from eleven Microeconomics courses taught online for a total of 212 students. Our findings show that consistency, or less time variation, is a statistically significant explanatory variable, whereas…
Consistent truncations with massive modes and holography
Cassani, Davide; Faedo, Anton F
2011-01-01
We review the basic features of some recently found consistent Kaluza-Klein truncations including massive modes. We emphasize the general ideas underlying the reduction procedure, then we focus on type IIB supergravity on 5-dimensional manifolds admitting a Sasaki-Einstein structure, which leads to half-maximal gauged supergravity in five dimensions. Finally, we comment on the holographic picture of consistency.
CONSISTENT AGGREGATION IN FOOD DEMAND SYSTEMS
Levedahl, J. William; Reed, Albert J.; Clark, J. Stephen
2002-01-01
Two aggregation schemes for food demand systems are tested for consistency with the Generalized Composite Commodity Theorem (GCCT). One scheme is based on the standard CES classification of food expenditures. The second scheme is based on the Food Guide Pyramid. Evidence is found that both schemes are consistent with the GCCT.
A Framework of Memory Consistency Models
胡伟武; 施巍松; et al.
1998-01-01
Previous descriptions of memory consistency models in shared-memory multiprocessor systems are mainly expressed as constraints on the memory access event ordering and hence are hardware-centric. This paper presents a framework of memory consistency models which describes the memory consistency model on the behavior level. Based on the understanding that the behavior of an execution is determined by the execution order of conflicting accesses, a memory consistency model is defined as an interprocessor synchronization mechanism which orders the execution of operations from different processors. The synchronization order of an execution under a certain consistency model is also defined. The synchronization order, together with the program order, determines the behavior of an execution. This paper also presents criteria for correct programs and correct implementations of consistency models. Regarding an implementation of a consistency model as certain memory event ordering constraints, this paper provides a method to prove the correctness of consistency model implementations, and the correctness of the lock-based cache coherence protocol is proved with this method.
Sticky continuous processes have consistent price systems
Bender, Christian; Pakkanen, Mikko; Sayit, Hasanjan
Under proportional transaction costs, a price process is said to have a consistent price system, if there is a semimartingale with an equivalent martingale measure that evolves within the bid-ask spread. We show that a continuous, multi-asset price process has a consistent price system, under arb...
Testing the visual consistency of web sites
Geest, van der Thea; Loorbach, Nicole
2005-01-01
Consistency in the visual appearance of Web pages is often checked by experts, such as designers or reviewers. This article reports a card sort study conducted to determine whether users rather than experts could distinguish visual (in-)consistency in Web elements and pages. The users proved to agree...
Putting Consistent Theories Together in Institutions
应明生
1995-01-01
The problem of putting consistent theories together in institutions is discussed. A general necessary condition for the consistency of the resulting theory is derived, and some sufficient conditions are given for diagrams of theories whose shapes are tree bundles or directed graphs. Moreover, some transformations from complicated cases to simple ones are established.
Modeling and Testing Legacy Data Consistency Requirements
Nytun, J. P.; Jensen, Christian Søndergaard
2003-01-01
An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult. This paper addresses the need for new techniques that enable the modeling and consistency checking for legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...
Sparse motion segmentation using multiple six-point consistencies
Zografos, Vasileios; Ellis, Liam
2010-01-01
We present a method for segmenting an arbitrary number of moving objects in image sequences using the geometry of 6 points in 2D to infer motion consistency. The method has been evaluated on the Hopkins 155 database and surpasses current state-of-the-art methods such as SSC, both in terms of overall performance on two and three motions and in terms of maximum errors. The method works by finding initial clusters in the spatial domain, and then classifying each remaining point as belonging to the cluster that minimizes a motion consistency score. In contrast to most other motion segmentation methods that are based on an affine camera model, the proposed method is fully projective.
Zhao, Min; Chen, Yanming; Qu, Dacheng; Qu, Hong
2015-01-01
The substrates of a transporter are not only useful for inferring the function of the transporter, but also important for discovering compound-compound interactions and for reconstructing metabolic pathways. Though plenty of data has been accumulated with the development of new technologies such as in vitro transporter assays, the search for substrates of transporters is far from complete. In this article, we introduce METSP, a maximum-entropy classifier devoted to retrieving transporter-substrate pairs (TSPs) from semistructured text. Based on the high-quality annotation from UniProt, METSP achieves high precision and recall in cross-validation experiments. When METSP is applied to 182,829 human transporter annotation sentences in UniProt, it identifies 3942 sentences with transporter and compound information. Finally, 1547 confident human TSPs are identified for further manual curation, among which 58.37% are pairs with novel substrates not annotated in public transporter databases. METSP is the first efficient tool to extract TSPs from semistructured annotation text in UniProt. This tool can help to determine the precise substrates and drugs of transporters, thus facilitating drug-target prediction, metabolic network reconstruction, and literature classification.
M. F. Müller
2015-01-01
We introduce TopREML as a method to predict runoff signatures in ungauged basins. The approach is based on the use of linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross-validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in the prediction of the considered runoff signature, while significantly outperforming them in the prediction of the associated modeling uncertainty. TopREML's ability to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied in both densely gauged basins, where it takes advantage of spatial covariance information, and data-scarce regions, where it can rely on covariates, which are increasingly observable thanks to remote sensing technology.
Direct maximum parsimony phylogeny reconstruction from genotype data
Ravi R
2007-12-01
Background: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly are available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results: In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion: Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
Identification of consistency in rating curve data: Bidirectional Reach (BReach)
Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Verhoest, Niko E. C.
2016-04-01
Before calculating rating curve discharges, it is crucial to identify possible interruptions in data consistency. In this research, a methodology to perform this preliminary analysis is developed and validated. This methodology, called Bidirectional Reach (BReach), evaluates in each data point the results of a rating curve model with randomly sampled parameter sets. The combination of a parameter set and a data point is classified as non-acceptable if the deviation between the accompanying model result and the measurement exceeds the observational uncertainty. Moreover, a tolerance degree that defines satisfactory behavior of a sequence of model results is chosen. This tolerance degree equals the percentage of observations that are allowed to have non-acceptable model results. Subsequently, the results of the classification are used to assess the maximum left and right reach for each data point of a chronologically sorted time series. The maximum left and right reach of a gauging point represent the data points, in the direction of the previous and the following observations respectively, beyond which none of the sampled parameter sets both is satisfactory and results in an acceptable deviation. This analysis is repeated for a variety of tolerance degrees. Plotting the results of this analysis for all data points and all tolerance degrees in a combined BReach plot enables the detection of changes in data consistency. Moreover, if consistent periods are detected, the limits of these periods can be derived. The methodology is validated with various synthetic stage-discharge data sets and proves to be a robust technique to investigate the temporal consistency of rating curve data. It provides satisfying results despite low data availability, large errors in the estimated observational uncertainty, and a rating curve model that is known to cover only a limited part of the observations.
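Our reading of the right-reach computation can be sketched as follows; the boolean acceptability matrix, the tolerance handling, and the synthetic data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Sketch: acceptable[i, j] is True when sampled parameter set i gives an
# acceptable deviation at (chronologically sorted) data point j.

def max_right_reach(acceptable: np.ndarray, start: int, tolerance: float) -> int:
    """Farthest index r >= start such that at least one sampled parameter
    set keeps its fraction of non-acceptable points over [start, r]
    within the tolerance degree. The left reach is symmetric."""
    n_sets, n_points = acceptable.shape
    reach = start
    for r in range(start, n_points):
        window = acceptable[:, start:r + 1]
        failure_rate = 1.0 - window.mean(axis=1)  # per parameter set
        if np.any(failure_rate <= tolerance):
            reach = r
        else:
            break
    return reach

rng = np.random.default_rng(1)
acc = rng.random((50, 30)) < 0.8  # synthetic classification, 80% acceptable
print(max_right_reach(acc, start=0, tolerance=0.1))
```

Repeating this for every gauging point and several tolerance degrees gives the data for a combined BReach plot.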
Inter-laboratory consistency of gait analysis measurements.
Benedetti, M G; Merlo, A; Leardini, A
2013-09-01
The dissemination of gait analysis as a clinical assessment tool requires the results to be consistent, irrespective of the laboratory. In this work a baseline assessment of between-site consistency for one healthy subject examined at 7 different laboratories is presented. Anthropometric and spatio-temporal parameters, pelvis and lower limb joint rotations, joint sagittal moments and powers, and ground reaction forces were compared. The consistency between laboratories was assessed by the median absolute deviation and maximum difference for single parameters, and by linear regression for curves. Twenty-one lab-to-lab comparisons were performed and averaged. Large differences were found between the characteristics of the laboratories (i.e. motion capture systems and protocols). Different values for the anthropometric parameters were found, with the largest variability for a pelvis measurement. The spatio-temporal parameters were in general consistent. Segment and joint kinematics consistency was in general high (R2>0.90), except for hip and knee joint rotations. The main difference among curves was a vertical shift associated with the corresponding value in the static position. The consistency between joint sagittal moments ranged from R2=0.90 at the ankle to R2=0.66 at the hip, the latter increasing when comparing separately laboratories using the same protocol. Pattern similarity was good for ankle power but not satisfactory for knee and hip power. The ground reaction force was found to be the most consistent, as expected. The differences found were in general lower than the established minimum detectable changes in gait kinematics and kinetics for healthy adults.
Guillemot, Sylvain
2008-01-01
Given a set of leaf-labeled trees with identical leaf sets, the well-known "Maximum Agreement SubTree" problem (MAST) consists of finding a subtree homeomorphically included in all input trees and with the largest number of leaves. Its variant called "Maximum Compatible Tree" (MCT) is less stringent, as it allows the input trees to be refined. Both problems are of particular interest in computational biology, where the trees encountered often have small degrees. In this paper, we study the parameterized complexity of MAST and MCT with respect to the maximum degree, denoted by D, of the input trees. It is known that MAST is polynomial for bounded D. As a counterpart, we show that the problem is W[1]-hard with respect to parameter D. Moreover, relying on recent advances in parameterized complexity we obtain a tight lower bound: while MAST can be solved in O(N^{O(D)}) time, where N denotes the input length, we show that an O(N^{o(D)}) bound is not achievable, unless SNP is contained in SE. We also show that MCT is W[1...
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). By using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
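A rough sketch of the general idea (our reconstruction, not the paper's algorithm): maximum entropy clustering assigns Gibbs-form soft memberships u_ik ∝ exp(-d_ik²/T), and the kernel trick replaces Euclidean distances with feature-space distances computed from a Gram matrix:

```python
import numpy as np

def kernel_mec(K, n_clusters, temperature=0.5, n_iter=50, seed=0):
    """Soft maximum-entropy clustering with kernelized distances (sketch)."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)          # soft memberships
    for _ in range(n_iter):
        w = u / u.sum(axis=0)                  # normalized cluster weights
        # Squared feature-space distance to each (implicit) centroid:
        # ||phi(x_i) - m_k||^2 = K_ii - 2 (K w)_ik + w_k^T K w_k
        d2 = (np.diag(K)[:, None] - 2 * K @ w
              + np.einsum('ik,ij,jk->k', w, K, w)[None, :])
        u = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / temperature)
        u /= u.sum(axis=1, keepdims=True)
    return u

# Two well-separated synthetic blobs, Gaussian (RBF) kernel.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)
labels = kernel_mec(K, 2).argmax(axis=1)
print(labels[:20], labels[20:])  # expected: one label per blob
```

The temperature plays the role of the entropy weight: high T gives near-uniform memberships, low T approaches hard kernel k-means.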
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of the increase of a ship's draft and trim due to its motion in restricted navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature; among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. To this end, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence…
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
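The Poisson maximum-likelihood step described above can be illustrated generically. This is a minimal sketch, not CORA's actual implementation: the single-amplitude model `m_i = A*p_i + b_i` (line amplitude `A`, known profile `p_i`, known background `b_i`) and the bisection solver are illustrative assumptions.

```python
def ml_line_flux(counts, profile, background, lo=0.0, hi=1e6, tol=1e-9):
    """Maximum-likelihood amplitude A for a Poisson model m_i = A*p_i + b_i.

    Setting the derivative of the Poisson log-likelihood
        sum_i [ n_i * ln(m_i) - m_i ]
    to zero gives the score equation
        sum_i p_i * (n_i / (A*p_i + b_i) - 1) = 0,
    which is monotone decreasing in A, so simple bisection finds the root.
    """
    def score(a):
        return sum(p * (n / (a * p + b) - 1.0)
                   for n, p, b in zip(counts, profile, background))

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0.0:   # still below the ML amplitude
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

With noiseless counts generated at a known amplitude, the estimator recovers that amplitude, which is a quick sanity check of the score equation.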
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
On the Initial State and Consistency Relations
Berezhiani, Lasha
2014-01-01
We study the effect of the initial state on the consistency conditions for adiabatic perturbations. In order to be consistent with the constraints of General Relativity, the initial state must be diffeomorphism invariant. As a result, we show that the initial wavefunctional/density matrix has to satisfy a Slavnov-Taylor identity similar to that of the action. We then investigate the precise ways in which modified initial states can lead to violations of the consistency relations. We find two independent sources of violations: i) the state can include initial non-Gaussianities; ii) even if the initial state is Gaussian, such as a Bogoliubov state, the modified 2-point function can modify the q → 0 analyticity properties of the vertex functional and result in violations of the consistency relations.
On the initial state and consistency relations
Berezhiani, Lasha; Khoury, Justin, E-mail: lashaber@sas.upenn.edu, E-mail: jkhoury@sas.upenn.edu [Center for Particle Cosmology, Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States)
2014-09-01
We study the effect of the initial state on the consistency conditions for adiabatic perturbations. In order to be consistent with the constraints of General Relativity, the initial state must be diffeomorphism invariant. As a result, we show that the initial wavefunctional/density matrix has to satisfy a Slavnov-Taylor identity similar to that of the action. We then investigate the precise ways in which modified initial states can lead to violations of the consistency relations. We find two independent sources of violations: i) the state can include initial non-Gaussianities; ii) even if the initial state is Gaussian, such as a Bogoliubov state, the modified 2-point function can modify the q → 0 analyticity properties of the vertex functional and result in violations of the consistency relations.
Self-Consistent Asset Pricing Models
Malevergne, Y
2006-01-01
We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value $\alpha_i$ at the origin between an asset $i$'s return and the proxy's return. Self-consistency also introduces "orthogonality" and "normality" conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy portfolio…
Quasiparticle self-consistent GW theory.
van Schilfgaarde, M; Kotani, Takao; Faleev, S
2006-06-09
In past decades the scientific community has been looking for a reliable first-principles method to predict the electronic structure of solids with high accuracy. Here we present an approach which we call the quasiparticle self-consistent approximation. It is based on a kind of self-consistent perturbation theory, where the self-consistency is constructed to minimize the perturbation. We apply it to selections from different classes of materials, including alkali metals, semiconductors, wide band gap insulators, transition metals, transition metal oxides, magnetic insulators, and rare earth compounds. Apart from some mild exceptions, the properties are very well described, particularly in weakly correlated cases. Self-consistency dramatically improves agreement with experiment, and is sometimes essential. Discrepancies with experiment are systematic, and can be explained in terms of approximations made.
Consistency in the World Wide Web
Thomsen, Jakob Grauenkjær
Tim Berners-Lee envisioned that computers will behave as agents of humans on the World Wide Web, where they will retrieve, extract, and interact with information from the World Wide Web. A step towards this vision is to make computers capable of extracting this information in a reliable and consistent way. In this dissertation we study steps towards this vision by showing techniques for the specification, the verification and the evaluation of the consistency of information in the World Wide Web. We show how to detect certain classes of errors in a specification of information, and we show how… the World Wide Web, in order to help perform consistent evaluations of web extraction techniques. These contributions are steps towards having computers reliably and consistently extract information from the World Wide Web, which in turn are steps towards achieving Tim Berners-Lee's vision.
Consistency Relations for Large Field Inflation
Chiba, Takeshi
2014-01-01
Consistency relations for chaotic inflation with a monomial potential, natural inflation and hilltop inflation are given which involve the scalar spectral index $n_s$, the tensor-to-scalar ratio $r$ and the running of the spectral index $\alpha$. The measurement of $\alpha$ with $O(10^{-3})$ accuracy and the improvement in the measurement of $n_s$ could discriminate the monomial model from natural/hilltop inflation models. A consistency region for general large field models is also presented.
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to $T^{-1}(I - A)$, where $I$ is the first ionization potential, $A$ is the electron affinity, and $T$ is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Entropy-based consistent model driven architecture
Niepostyn, Stanisław Jerzy
2016-09-01
A description of software architecture is a plan of the IT system construction, therefore any architecture gaps affect the overall success of an entire project. The definitions mostly describe software architecture as a set of views which are mutually unrelated, hence potentially inconsistent. Software architecture completeness is also often described in an ambiguous way. As a result most methods of IT systems building comprise many gaps and ambiguities, thus presenting obstacles for software building automation. In this article the consistency and completeness of software architecture are mathematically defined based on calculation of entropy of the architecture description. Following this approach, in this paper we also propose our method of automatic verification of consistency and completeness of the software architecture development method presented in our previous article as Consistent Model Driven Architecture (CMDA). The proposed FBS (Functionality-Behaviour-Structure) entropy-based metric applied in our CMDA approach enables IT architects to decide whether the modelling process is complete and consistent. With this metric, software architects could assess the readiness of undergoing modelling work for the start of IT system building. It even allows them to assess objectively whether the designed software architecture of the IT system could be implemented at all. The overall benefit of such an approach is that it facilitates the preparation of complete and consistent software architecture more effectively as well as it enables assessing and monitoring of the ongoing modelling development status. We demonstrate this with a few industry examples of IT system designs.
Consistency and Derangements in Brane Tilings
Hanany, Amihay; Ramgoolam, Sanjaye; Seong, Rak-Kyeong
2015-01-01
Brane tilings describe Lagrangians (vector multiplets, chiral multiplets, and the superpotential) of four dimensional $\mathcal{N}=1$ supersymmetric gauge theories. These theories, written in terms of a bipartite graph on a torus, correspond to worldvolume theories on $N$ D3-branes probing a toric Calabi-Yau threefold singularity. A pair of permutations compactly encapsulates the data necessary to specify a brane tiling. We show that geometric consistency for brane tilings, which ensures that the corresponding quantum field theories are well behaved, imposes constraints on the pair of permutations, restricting certain products constructed from the pair to have no one-cycles. Permutations without one-cycles are known as derangements. We illustrate this formulation of consistency with known brane tilings. Counting formulas for consistent brane tilings with an arbitrary number of chiral bifundamental fields are written down in terms of delta functions over symmetric groups.
Quantifying the consistency of scientific databases
Šubelj, Lovro; Boshkoska, Biljana Mileva; Kastrin, Andrej; Levnajić, Zoran
2015-01-01
Science is a social process with far-reaching impact on our modern society. In recent years, for the first time we are able to scientifically study science itself. This is enabled by the massive amounts of data on scientific publications that are increasingly becoming available. The data are contained in several databases such as Web of Science or PubMed, maintained by various public and private entities. Unfortunately, these databases are not always consistent, which considerably hinders this study. Relying on the powerful framework of complex networks, we conduct a systematic analysis of the consistency among six major scientific databases. We find that identifying a single "best" database is far from easy. Nevertheless, our results indicate appreciable differences in the mutual consistency of different databases, which we interpret as recipes for future bibliometric studies.
Self-consistent Green's function approaches
Barbieri, Carlo
2016-01-01
We present the fundamental techniques and working equations of many-body Green's function theory for calculating ground state properties and the spectral strength. Green's function methods closely relate to other polynomial scaling approaches discussed in chapters 8 and 10. However, here we aim directly at a global view of the many-fermion structure. We derive the working equations for calculating many-body propagators, using both the Algebraic Diagrammatic Construction technique and the self-consistent formalism at finite temperature. Their implementation is discussed, as well as the inclusion of three-nucleon interactions. The self-consistency feature is essential to guarantee thermodynamic consistency. The pairing and neutron matter models introduced in previous chapters are solved and compared with the other methods in this book.
Personalized recommendation based on unbiased consistence
Zhu, Xuzhen; Tian, Hui; Zhang, Ping; Hu, Zheng; Zhou, Tao
2015-08-01
Recently, in physical dynamics, mass-diffusion-based recommendation algorithms on bipartite network provide an efficient solution by automatically pushing possible relevant items to users according to their past preferences. However, traditional mass-diffusion-based algorithms just focus on unidirectional mass diffusion from objects having been collected to those which should be recommended, resulting in a biased causal similarity estimation and not-so-good performance. In this letter, we argue that in many cases, a user's interests are stable, and thus bidirectional mass diffusion abilities, no matter originated from objects having been collected or from those which should be recommended, should be consistently powerful, showing unbiased consistence. We further propose a consistence-based mass diffusion algorithm via bidirectional diffusion against biased causality, outperforming the state-of-the-art recommendation algorithms in disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
A Revisit to Probability - Possibility Consistency Principles
Mamoni Dhar
2013-03-01
In this article, our main intention is to highlight the fact that the probable links between probability and possibility, which were established by different authors at different points in time on the basis of some well-known consistency principles, cannot provide the desired result. The paper therefore discusses some prominent works on transformations between probability and possibility, and finally suggests a new principle, because none of the existing principles yields a unique transformation. The new consistency principle suggested here would in turn replace all others that exist in the literature by providing a reliable estimate of consistency between the two. Furthermore, some properties of the entropy of fuzzy numbers are also presented in this article.
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
to the equilibrium parameters and the variance-covariance matrix of the error term. We show that using ML principles to estimate jointly all parameters of the fractionally cointegrated system we obtain consistent estimates and provide their asymptotic distributions. The cointegration matrix is asymptotically mixed...
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
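As a generic illustration of the MAXENT selection described above (not the paper's texture model), maximizing entropy over discrete states subject to normalization and one linear constraint yields an exponential-family distribution. The state values, target mean, and bisection solver below are illustrative assumptions.

```python
import math

def maxent_distribution(values, target_mean, lo=-50.0, hi=50.0):
    """Maximum-entropy distribution over discrete states with a fixed mean.

    MAXENT with constraints sum p_i = 1 and sum p_i * x_i = m yields the
    exponential family p_i ∝ exp(-lam * x_i); the Lagrange multiplier lam
    is tuned by bisection so the mean matches the target (the mean is a
    strictly decreasing function of lam, so bisection applies).
    """
    def mean(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(wi * x for wi, x in zip(w, values)) / z

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]
```

When the constraint is satisfied by the uniform distribution (the "obvious constraints" case), the solver recovers it, reflecting the maximally disordered material described above.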
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model with respect to the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
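The linear-time solution mentioned above is commonly realized as Kadane's algorithm, sketched here in an imperative style rather than the paper's monadic derivation; the convention of counting the empty segment (sum 0) is an assumption.

```python
def max_segment_sum(xs):
    """Largest sum over all contiguous segments of xs.

    Kadane's linear-time scheme: `ending_here` is the best sum of a
    segment ending at the current position (reset to 0 when it goes
    negative); the answer is the running maximum of these values.
    The empty segment is allowed, so the result is at least 0.
    """
    ending_here = 0
    best = 0
    for x in xs:
        ending_here = max(0, ending_here + x)
        best = max(best, ending_here)
    return best
```

For example, on Bentley's classic test list [31, -41, 59, 26, -53, 58, 97, -93, -23, 84] the maximum segment sum is 187 (the segment 59+26-53+58+97).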
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion $(B_t)_{t \ge 0}$ and the equation of motion $dX_t = v_t\,dt + 2\,dB_t$, we set $S_t = \max_{0 \le s \le t} X_s$ and consider the optimal control problem $\sup_v E(S_\tau - c\tau)$, where $c > 0$ and the supremum is taken over all admissible controls $v$ satisfying $v_t \in [\mu_0, \mu_1]$ for all $t$ up to $\tau = \inf\{t > 0 \mid X_t \notin (\ell_0, \ell_1)\}$. The optimal control switches between the extreme values $\mu_0$ and $\mu_1$ across the curve $X_t = g_*(S_t)$, where $s \mapsto g_*(s)$ is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations) in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates $x^\mu$ and the energy-momentum $p^\mu$ in quantum theory to construct a momentum space quantum gravity geometry with a metric $s_{\mu\nu}$ and a curvature tensor $P^\lambda{}_{\mu\nu\rho}$. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the $p$-space admits a cutoff with an invariant maximum momentum $a$. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest in a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches only focus on discriminating moving objects by background subtraction, although the objects of interest may be moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmentation precision.
Consistent matter couplings for Plebanski gravity
Tennie, Felix
2010-01-01
We develop a scheme for the minimal coupling of all standard types of tensor and spinor field matter to Plebanski gravity. This theory is a geometric reformulation of vacuum general relativity in terms of two-form frames and connection one-forms, and provides a covariant basis for various quantization approaches. Using the spinor formalism we prove the consistency of the newly proposed matter coupling by demonstrating the full equivalence of Plebanski gravity plus matter to Einstein-Cartan gravity. As a byproduct we also show the consistency of some previous suggestions for matter actions.
Consistent matter couplings for Plebanski gravity
Tennie, Felix; Wohlfarth, Mattias N. R.
2010-11-01
We develop a scheme for the minimal coupling of all standard types of tensor and spinor field matter to Plebanski gravity. This theory is a geometric reformulation of vacuum general relativity in terms of two-form frames and connection one-forms, and provides a covariant basis for various quantization approaches. Using the spinor formalism we prove the consistency of the newly proposed matter coupling by demonstrating the full equivalence of Plebanski gravity plus matter to Einstein-Cartan gravity. As a by-product we also show the consistency of some previous suggestions for matter actions.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated to investigate the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in the maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained by grip spans of 50-mm, 65-mm, and 80-mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in the cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
The maximum sizes of large scale structures in alternative theories of gravity
Bhattacharya, Sourav; Romano, Antonio Enea; Skordis, Constantinos; Tomaras, Theodore N
2016-01-01
The maximum size of a cosmic structure is given by the maximum turnaround radius -- the scale where the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulas for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulas agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory, such maximum sizes always lie above the $\Lambda$CDM value, by a factor $1 + \frac{1}{3\omega}$, where $\omega \gg 1$ is the Brans-Dicke parameter, implying consistency of the theory with current data.
Maximum entropy method applied to deblurring images on a MasPar MP-1 computer
Bonavito, N. L.; Dorband, John; Busse, Tim
1991-01-01
A statistical inference method based on the principle of maximum entropy is developed for the purpose of enhancing and restoring satellite images. The proposed maximum entropy image restoration method is shown to overcome the difficulties associated with image restoration and provide the smoothest and most appropriate solution consistent with the measured data. An implementation of the method on the MP-1 computer is described, and results of tests on simulated data are presented.
A maximum principle for diffusive Lotka-Volterra systems of two competing species
Chen, Chiun-Chuan; Hung, Li-Chang
2016-10-01
Using an elementary approach, we establish a new maximum principle for the diffusive Lotka-Volterra system of two competing species, which involves pointwise estimates of an elliptic equation consisting of the second derivative of one function, the first derivative of another function, and a quadratic nonlinearity. This maximum principle gives a priori estimates for the total mass of the two species. Moreover, applying it to the system of three competing species leads to a nonexistence theorem of traveling wave solutions.
Inverse feasibility problems of the inverse maximum flow problems
Adrian Deaconu; Eleonor Ciurea
2013-04-01
A linear time method to decide whether any inverse maximum flow problem (denoted the General Inverse Maximum Flow problem, IMFG) has a solution is deduced. If IMFG does not have a solution, methods to transform IMFG into a feasible problem are presented. The methods consist of modifying as little as possible the restrictions on the variation of the bounds of the flow. New inverse combinatorial optimization problems are introduced and solved.
Maximum speeds and alpha angles of flowing avalanches
McClung, David; Gauer, Peter
2016-04-01
A flowing avalanche is one which initiates as a slab and, if consisting of dry snow, will be enveloped in a turbulent snow dust cloud once the speed reaches about 10 m/s. A flowing avalanche has a dense core of flowing material which dominates the dynamics by serving as the driving force for downslope motion. The flow thickness is typically on the order of 1-10 m, which is on the order of about 1% of the length of the flowing mass. We have collected estimates of maximum frontal speed um (m/s) from 118 avalanche events. The analysis is given here with the aim of using the maximum speed scaled with some measure of the terrain scale over which the avalanches ran. We have chosen two measures for scaling, from McClung (1990), McClung and Schaerer (2006) and Gauer (2012): √H0 and √S0 (square roots of the total vertical drop and the total path length traversed). Our data consist of 118 avalanches with H0 (m) estimated and 106 with S0 (m) estimated. Of these, we have 29 values with H0 (m), S0 (m) and um (m/s) estimated accurately, with the avalanche speeds measured all or nearly all along the path. The remainder of the data set includes approximate estimates of um (m/s) from timing the avalanche motion over a known section of the path where approximate maximum speed is expected, and with either H0 or S0 or both estimated. Our analysis consists of fitting the values of um/√H0 and um/√S0 to probability density functions (pdf) to estimate the exceedance probability for the scaled ratios. In general, we found that the larger data sets were best fit by a beta pdf, and for the subset of 29 a shifted log-logistic (s l-l) pdf was best. Our determinations resulted from fitting the values to 60 different pdfs considering five goodness-of-fit criteria: three goodness-of-fit statistics (K-S (Kolmogorov-Smirnov), A-D (Anderson-Darling) and C-S (Chi-squared)) plus probability plots (P-P) and quantile plots (Q-Q). For less than 10% probability of exceedance the results show that
Consistency in multi-viewpoint architectural design
Dijkman, Remco Matthijs
2006-01-01
This thesis presents a framework that aids in preserving consistency in multi-viewpoint designs. In a multi-viewpoint design each stakeholder constructs his own design part. We call each stakeholder’s design part the view of that stakeholder. To construct his view, a stakeholder has a viewpoint.
Developing consistent time series Landsat data products
The Landsat series satellite has provided earth observation data record continuously since early 1970s. There are increasing demands on having a consistent time series of Landsat data products. In this presentation, I will summarize the work supported by the USGS Landsat Science Team project from 20...
Developing consistent pronunciation models for phonemic variants
Davel, M
2006-09-01
…from a lexicon containing variants. In this paper we address both these issues by creating ‘pseudo-phonemes’ associated with sets of ‘generation restriction rules’ to model those pronunciations that are consistently realised as two or more...
On Consistency Maintenance In Service Discovery
Sundramoorthy, V.; Hartel, Pieter H.; Scholten, Johan
2005-01-01
Communication and node failures degrade the ability of a service discovery protocol to ensure users receive the correct service information when the service changes. We propose that service discovery protocols employ a set of recovery techniques to recover from failures and regain consistency. We
Consistent feeding positions of great tit parents
Lessells, C.M.; Poelman, E.H.; Mateman, A.C.; Cassey, P.
2006-01-01
When parent birds arrive at the nest to provision their young, their position on the nest rim may influence which chick or chicks are fed. As a result, the consistency of feeding positions of the individual parents, and the difference in position between the parents, may affect how equitably food is
Addendum to "On the consistency of MPS"
Souto-Iglesias, Antonio; González, Leo M; Cercos-Pita, Jose L
2013-01-01
The analogies between the Moving Particle Semi-implicit method (MPS) and Incompressible Smoothed Particle Hydrodynamics method (ISPH) are established in this note, as an extension of the MPS consistency analysis conducted in "Souto-Iglesias et al., Computer Physics Communications, 184(3), 2013."
On the existence of consistent price systems
Bayraktar, Erhan; Pakkanen, Mikko S.; Sayit, Hasanjan
2014-01-01
We formulate a sufficient condition for the existence of a consistent price system (CPS), which is weaker than the conditional full support condition (CFS). We use the new condition to show the existence of CPSs for certain processes that fail to have the CFS property. In particular this condition...
A self-consistent Maltsev pulse model
Buneman, O.
1985-04-01
A self-consistent model for an electron pulse propagating through a plasma is presented. In this model, the charge imbalance between plasma ions, plasma electrons and pulse electrons creates the travelling potential well in which the pulse electrons are trapped.
Consistent implementation of decisions in the brain.
James A R Marshall
Despite the complexity and variability of decision processes, motor responses are generally stereotypical and independent of decision difficulty. How is this consistency achieved? Through an engineering analogy we consider how and why a system should be designed to realise not only flexible decision-making, but also consistent decision implementation. We specifically consider neurobiologically-plausible accumulator models of decision-making, in which decisions are made when a decision threshold is reached. To trade off between the speed and accuracy of the decision in these models, one can either adjust the thresholds themselves or, equivalently, fix the thresholds and adjust baseline activation. Here we review how this equivalence can be implemented in such models. We then argue that manipulating baseline activation is preferable as it realises consistent decision implementation by ensuring consistency of motor inputs, summarise empirical evidence in support of this hypothesis, and suggest that it could be a general principle of decision making and implementation. Our goal is therefore to review how neurobiologically-plausible models of decision-making can manipulate speed-accuracy trade-offs using different mechanisms, to consider which of these mechanisms has more desirable decision-implementation properties, and then review the relevant neuroscientific data on which mechanism brains actually use.
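The threshold-versus-baseline equivalence reviewed above can be illustrated with a toy accumulator. This is a minimal sketch under assumed parameters (an integer ±1 random walk with a fixed seed), not one of the neurobiologically-plausible models discussed in the paper; it only shows that the crossing time depends on the threshold-baseline gap, not on either quantity alone.

```python
import random

def decision_time(threshold, baseline, p_up=0.6, seed=0):
    """Steps for a +/-1 accumulator starting at `baseline` to first reach `threshold`."""
    rng = random.Random(seed)
    x, steps = baseline, 0
    while x < threshold:
        x += 1 if rng.random() < p_up else -1
        steps += 1
    return steps

# Two equivalent speed-accuracy settings: raise the baseline by 2,
# or lower the threshold by 2 -- the gap is 8 in both cases.
t_threshold = decision_time(threshold=10, baseline=2)
t_baseline  = decision_time(threshold=8,  baseline=0)
print(t_threshold == t_baseline)  # True: only the threshold-baseline gap matters
```

With the same random seed the two runs see identical noise, so equal gaps give identical decision times, mirroring the equivalence between threshold and baseline manipulations.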
Properties and Update Semantics of Consistent Views
1985-09-01
Properties and Update Semantics of Consistent Views. G. Gottlob, Institute for Applied Mathematics, C.N.R., Genova, Italy; Computer Scien... Gottlob G., Paolini P., Zicari R., "Proving Properties of Programs on Database Views", Dipartimento di Elettronica, Politecnico di Milano (in
Consistency Analysis of Network Traffic Repositories
Lastdrager, Elmer; Pras, Aiko
2009-01-01
Traffic repositories with TCP/IP header information are very important for network analysis. Researchers often assume that such repositories reliably represent all traffic that has been flowing over the network; little thought is given to the consistency of these repositories. Still, for var
Consistency of Network Traffic Repositories: An Overview
Lastdrager, E.; Pras, A.
2009-01-01
Traffic repositories with TCP/IP header information are very important for network analysis. Researchers often assume that such repositories reliably represent all traffic that has been flowing over the network; little thought is given to the consistency of these repositories. Still, for vario
Effective soil hydraulic conductivity predicted with the maximum power principle
Westhoff, Martijn; Erpicum, Sébastien; Archambeau, Pierre; Pirotton, Michel; Zehe, Erwin; Dewals, Benjamin
2016-04-01
Drainage of water in soils happens to a large extent through preferential flowpaths, but these subsurface flowpaths are extremely difficult to observe or parameterize in hydrological models. To potentially overcome this problem, thermodynamic optimality principles have been suggested to predict effective parametrizations of these (sub-grid) structures, such as the maximum entropy production principle or the equivalent maximum power principle. These principles have been successfully applied to predict heat transfer from the Equator to the Poles, or turbulent heat fluxes between the surface and the atmosphere. In these examples, the effective flux adapts itself to its boundary condition by adapting its effective conductance through the creation of e.g. convection cells. Flow through porous media such as soils, however, can only quickly adapt its effective flow conductance by the creation of preferential flowpaths, and it is unknown whether this is guided by the aim to create maximum power. Here we show experimentally that this is indeed the case: in the lab, we created a hydrological analogue to the atmospheric model dealing with heat transport between Equator and Poles. The experimental setup consists of two freely draining reservoirs connected with each other by a confined aquifer. By adding water to only one reservoir, a potential difference builds up until a steady state is reached. From the steady state potential difference and the observed flow through the aquifer, an effective hydraulic conductance can be determined. This observed conductance corresponds to the one maximizing the power of the flux through the confined aquifer. Although this experiment is done in an idealized setting, it opens doors for better parameterizing hydrological models. Furthermore, it shows that hydraulic properties of soils are not static but change with changing boundary conditions. A potential limitation of the principle is that it only applies to steady state conditions.
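A minimal numerical sketch of the maximum power idea, under assumed linear-reservoir dynamics (not the authors' laboratory setup): two reservoirs with drainage conductance k exchange water through an aquifer of conductance g, fed by a constant inflow Q into one reservoir. Under these toy assumptions the steady-state head difference is dpsi = Q / (k + 2g), so the power of the connecting flux, P(g) = g * dpsi^2, is maximised at an intermediate conductance g = k/2.

```python
def power(g, k=1.0, Q=1.0):
    """Power of the aquifer flux at steady state, for conductance g
    in the toy two-reservoir model (dpsi = Q / (k + 2g))."""
    dpsi = Q / (k + 2.0 * g)   # steady-state head difference
    return g * dpsi ** 2

# Coarse scan for the power-maximising conductance:
grid = [i * 0.001 for i in range(1, 5000)]
g_best = max(grid, key=power)
print(round(g_best, 3))  # 0.5, i.e. g = k/2
```

Too small a conductance carries little flux; too large a conductance erases the head difference. The maximum-power state sits between these extremes, which is the qualitative behaviour the experiment probes.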
New Heuristics for Rooted Triplet Consistency
Soheil Jahangiri
2013-07-01
Rooted triplets are becoming one of the most important types of input for reconstructing rooted phylogenies. A rooted triplet is a phylogenetic tree on three leaves and shows the evolutionary relationship of the corresponding three species. In this paper, we investigate the problem of inferring the maximum consensus evolutionary tree from a set of rooted triplets. This problem is known to be APX-hard. We present two new heuristic algorithms. For a given set of m triplets on n species, the FastTree algorithm runs in O(m + α(n)n^2) time, where α(n) is the functional inverse of Ackermann’s function. This is faster than any other previously known algorithm, although the outcome is less satisfactory. The Best Pair Merge with Total Reconstruction (BPMTR) algorithm runs in O(mn^3) time and, on average, performs better than any other previously known algorithm for this problem.
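The basic subroutine behind such heuristics is testing whether a candidate tree displays a given rooted triplet ab|c: the lowest common ancestor of a and b must lie strictly below that of a and c. A minimal sketch (the parent-dictionary tree encoding and function names are illustrative assumptions, not the paper's implementation):

```python
def ancestors(parent, v):
    """Nodes on the path from v up to the root, in order."""
    path = [v]
    while v in parent:
        v = parent[v]
        path.append(v)
    return path

def lca(parent, u, v):
    """Lowest common ancestor: first node on u's root path that is on v's."""
    anc_v = set(ancestors(parent, v))
    for node in ancestors(parent, u):
        if node in anc_v:
            return node
    raise ValueError("nodes are not in the same tree")

def consistent(parent, triplet):
    """True iff triplet (a, b | c) is displayed by the tree:
    lca(a, b) must lie strictly below lca(a, c)."""
    a, b, c = triplet
    lab = lca(parent, a, b)
    lac = lca(parent, a, c)
    return lab != lac and lac in ancestors(parent, lab)

# Tree ((a,b),(c,d)): internal nodes x = (a,b), y = (c,d), root r.
parent = {"a": "x", "b": "x", "c": "y", "d": "y", "x": "r", "y": "r"}
print(consistent(parent, ("a", "b", "c")))  # True:  ab|c is displayed
print(consistent(parent, ("a", "c", "b")))  # False: ac|b is not
```

Counting the triplets for which this test succeeds gives the objective that consensus-tree heuristics such as those in the paper try to maximise.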
Consistency and variability in functional localisers
Duncan, Keith J.; Pattamadilok, Chotiga; Knierim, Iris; Devlin, Joseph T.
2009-01-01
A critical assumption underlying the use of functional localiser scans is that the voxels identified as the functional region-of-interest (fROI) are essentially the same as those activated by the main experimental manipulation. Intra-subject variability in the location of the fROI violates this assumption, reducing the sensitivity of the analysis and biasing the results. Here we investigated consistency and variability in fROIs in a set of 45 volunteers. They performed two functional localiser scans to identify word- and object-sensitive regions of ventral and lateral occipito-temporal cortex, respectively. In the main analyses, fROIs were defined as the category-selective voxels in each region and consistency was measured as the spatial overlap between scans. Consistency was greatest when minimally selective thresholds were used to define “active” voxels (p < 0.05, uncorrected), revealing that approximately 65% of the voxels were commonly activated by both scans. In contrast, highly selective thresholds (p < 10^−4 to 10^−6) yielded the lowest consistency values, with less than 25% overlap of the voxels active in both scans. In other words, intra-subject variability was surprisingly high, with between one third and three quarters of the voxels in a given fROI not corresponding to those activated in the main task. This level of variability stands in striking contrast to the consistency seen in retinotopically-defined areas and has important implications for designing robust but efficient functional localiser scans. PMID:19289173
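The overlap measure described above can be sketched as the fraction of voxels active in both runs among those active in either, computed from thresholded p-value maps. The arrays and thresholds below are toy values, not the study's data:

```python
def overlap(run1, run2, threshold):
    """Fraction of voxels 'active' (p below threshold) in both runs,
    relative to those active in either run (Jaccard-style overlap)."""
    a = {i for i, p in enumerate(run1) if p < threshold}
    b = {i for i, p in enumerate(run2) if p < threshold}
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

# Toy p-value maps over six voxels:
p1 = [0.001, 0.02, 0.3, 0.0001, 0.6, 0.04]
p2 = [0.002, 0.5, 0.2, 0.002, 0.03, 0.01]
print(overlap(p1, p2, 0.05))    # 0.6 -- lenient threshold, higher overlap
print(overlap(p1, p2, 0.001))   # 0.0 -- strict threshold, lower overlap
```

The toy numbers reproduce the qualitative finding of the study: lenient thresholds admit many voxels that both runs share, while highly selective thresholds retain only a few voxels that rarely coincide across runs.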
Maximum Entropy for the International Division of Labor.
Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang
2015-01-01
As a result of the international division of labor, the trade value distribution on different products substantiated by international trade flows can be regarded as one country's strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of export values on ubiquitous and competitive products. Meanwhile, countries may also diversify their exports share on different types of products to reduce the risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product's complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data is consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country's strategy is mainly determined by the types of products this country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter.
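The constrained entropy maximization described above yields Gibbs-type shares p_i ∝ exp(−β c_i), with the single parameter β tuned so that the expected product complexity matches its target. A minimal sketch with made-up complexity values (not the paper's trade data):

```python
import math

def gibbs_shares(complexities, beta):
    """Maximum-entropy shares under a fixed expected complexity:
    p_i proportional to exp(-beta * c_i)."""
    w = [math.exp(-beta * c) for c in complexities]
    z = sum(w)
    return [x / z for x in w]

def fit_beta(complexities, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Bisect on beta: the expected complexity decreases monotonically in beta."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        p = gibbs_shares(complexities, mid)
        mean = sum(pi * c for pi, c in zip(p, complexities))
        if mean > target_mean:
            lo = mid            # mean too high: increase beta
        else:
            hi = mid
    return 0.5 * (lo + hi)

c = [0.5, 1.0, 2.0, 3.5]             # hypothetical product complexities
beta = fit_beta(c, target_mean=1.5)  # the model's single tunable parameter
shares = gibbs_shares(c, beta)
mean_c = sum(p * ci for p, ci in zip(shares, c))
print(round(mean_c, 6))  # 1.5 by construction
```

Maximizing entropy subject to a single linear constraint always gives this exponential family, which is why one parameter per country suffices to fit the share curves.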
[Evolutionary process unveiled by the maximum genetic diversity hypothesis].
Huang, Yi-Min; Xia, Meng-Ying; Huang, Shi
2013-05-01
As two major popular theories explaining evolutionary facts, the neutral theory and Neo-Darwinism, despite their proven virtues in certain areas, still fail to offer comprehensive explanations for such fundamental evolutionary phenomena as the genetic equidistance result, abundant overlap sites, the increase in complexity over time, the incomplete understanding of genetic diversity, and inconsistencies with fossil and archaeological records. The maximum genetic diversity hypothesis (MGD), however, constructs a more complete evolutionary genetics theory that incorporates all of the proven virtues of existing theories and adds to them the novel concept of a maximum or optimum limit on genetic distance or diversity. It has yet to meet a contradiction and explains for the first time the half-century-old genetic equidistance phenomenon as well as most other major evolutionary facts. It provides practical and quantitative ways of studying complexity. Molecular interpretation using MGD-based methods reveals novel insights on the origins of humans and other primates that are consistent with fossil evidence and common sense, and reestablishes the important role of China in the evolution of humans. MGD theory has also uncovered an important genetic mechanism in the construction of complex traits and the pathogenesis of complex diseases. We here make a series of sequence comparisons among yeasts, fishes and primates to illustrate the concept of a limit on genetic distance. The idea of a limit or optimum is in line with the yin-yang paradigm in the traditional Chinese view of the universal creative law in nature.
THE MAXIMUM AND MINIMUM DEGREES OF RANDOM BIPARTITE MULTIGRAPHS
Chen Ailian; Zhang Fuji; Li Hao
2011-01-01
In this paper the authors generalize the classic random bipartite graph model and define a model of random bipartite multigraphs as follows: let m = m(n) be a positive integer-valued function of n, and let G(n, m; {p_k}) denote the probability space consisting of all labeled bipartite multigraphs with two vertex sets A = {a_1, a_2, ..., a_n} and B = {b_1, b_2, ..., b_m}, in which the numbers t(a_i, b_j) of edges between any two vertices a_i ∈ A and b_j ∈ B are independent identically distributed random variables with distribution P{t(a_i, b_j) = k} = p_k, k = 0, 1, 2, ..., where p_k ≥ 0 and Σ p_k = 1. They show that X_{c,d,A}, the number of vertices in A with degree between c and d of G_{n,m} ∈ G(n, m; {p_k}), has an asymptotically Poisson distribution, and answer the following two questions about the space G(n, m; {p_k}) with {p_k} having a geometric, binomial or Poisson distribution, respectively: under which condition on {p_k} is there a function D(n) such that almost every random multigraph G_{n,m} ∈ G(n, m; {p_k}) has maximum degree D(n) in A? And under which condition on {p_k} does almost every multigraph G_{n,m} ∈ G(n, m; {p_k}) have a unique vertex of maximum degree in A?
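The model above can be sketched by direct simulation: edge multiplicities t(a_i, b_j) are drawn i.i.d. from {p_k}, so the degree of a vertex in A is a sum of m such draws with mean m·E[t]. The finite distribution below is an illustrative choice, not one of the paper's three cases:

```python
import random

pk = [0.7, 0.2, 0.1]       # illustrative {p_k}: P(t = k) for k = 0, 1, 2
support = [0, 1, 2]

def edge_count():
    """One i.i.d. edge multiplicity t(a_i, b_j) with distribution {p_k}."""
    return random.choices(support, weights=pk)[0]

def degree(m):
    """Degree of one vertex in A: sum of m independent edge multiplicities."""
    return sum(edge_count() for _ in range(m))

random.seed(1)
m, trials = 50, 5000
mean_deg = sum(degree(m) for _ in range(trials)) / trials
print(mean_deg)   # close to m * E[t] = 50 * 0.4 = 20
```

Histogramming the sampled degrees would similarly let one inspect the Poisson-type behaviour of the vertex counts X_{c,d,A} that the paper analyses asymptotically.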
Healthy adults maximum oxygen uptake prediction from a six minute walking test
Nury Nusdwinuringtyas
2011-08-01
Background: A parameter is needed in medical activities or services to determine functional capacity. This study aims to produce a functional capacity parameter for Indonesian adults in the form of maximum O2 uptake. Methods: This study used 123 healthy Indonesian adult subjects (58 males and 65 females) with a sedentary lifestyle, using a cross-sectional method. Results: Designed using the following variables: distance, body height, body weight, sex, age, maximum heart rate during the six-minute walking test, and lung capacity (FEV and FVC), the study revealed a good correlation (except for body weight) with maximum O2 uptake. Three new formulas were proposed, consisting of eight, six, and five variables respectively. Tests of the new formulas gave maximum O2 uptake results in agreement with the gold standard maximum O2 uptake measured using Cosmed® C-Pex. Conclusion: The Nury formula is an appropriate predictor of maximum oxygen uptake for healthy Indonesian adults, as it was designed using Indonesian subjects (Mongoloid), in contrast to Cahalin’s formula (Caucasian). The Nury formula consisting of five variables is the most applicable because it does not require any measurement tools or specific competency. (Med J Indones 2011;20:195-200) Keywords: maximum O2 uptake, Nury’s formula, six minute walking test
Optimal specific wavelength for maximum thrust production in undulatory propulsion.
Nangia, Nishant; Bale, Rahul; Chen, Nelson; Hanna, Yohanna; Patankar, Neelesh A
2017-01-01
What wavelengths do undulatory swimmers use during propulsion? In this work we find that a wide range of body/caudal fin (BCF) swimmers, from larval zebrafish and herring to fully-grown eels, use specific wavelength (ratio of wavelength to tail amplitude of undulation) values that fall within a relatively narrow range. The possible emergence of this constraint is interrogated using numerical simulations of fluid-structure interaction. Based on these, it was found that there is an optimal specific wavelength (OSW) that maximizes the swimming speed and thrust generated by an undulatory swimmer. The observed values of specific wavelength for BCF animals are relatively close to this OSW. The mechanisms underlying the maximum propulsive thrust for BCF swimmers are quantified and are found to be consistent with the mechanisms hypothesized in prior work. The adherence to an optimal value of specific wavelength in most natural hydrodynamic propulsors gives rise to empirical design criteria for man-made propulsors.
Self-consistency in Capital Markets
Benbrahim, Hamid
2013-03-01
Capital markets are considered, at least in theory, information engines whereby traders contribute to price formation with their diverse perspectives. Regardless of whether one believes in efficient market theory or not, actions by individual traders influence prices of securities, which in turn influence actions by other traders. This influence is exerted through a number of mechanisms including portfolio balancing, margin maintenance, trend following, and sentiment. As a result, market behaviors emerge from a number of mechanisms ranging from self-consistency due to wisdom of the crowds and self-fulfilling prophecies, to more chaotic behavior resulting from dynamics similar to the three-body system, namely the interplay between equities, options, and futures. This talk will address questions and findings regarding the search for self-consistency in capital markets.
Student Effort, Consistency and Online Performance
Hilde Patron
2011-07-01
This paper examines how student effort, consistency, motivation, and marginal learning influence student grades in an online course. We use data from eleven Microeconomics courses taught online for a total of 212 students. Our findings show that consistency, or less time variation, is a statistically significant explanatory variable, whereas effort, or total minutes spent online, is not. Other independent variables include GPA and the difference between a pre-test and a post-test. The GPA is used as a measure of motivation, and the difference between a post-test and pre-test as marginal learning. As expected, the level of motivation is found statistically significant at a 99% confidence level, and marginal learning is also significant at a 95% level.
Consistence beats causality in recommender systems
Zhu, Xuzhen; Hu, Zheng; Zhang, Ping; Zhou, Tao
2015-01-01
The explosive growth of information challenges people's capability to find items fitting their own interests. Recommender systems provide an efficient solution by automatically pushing possibly relevant items to users according to their past preferences. Recommendation algorithms usually embody the causality from what has been collected to what should be recommended. In this article, we argue that in many cases a user's interests are stable, and thus the previous and future preferences are highly consistent. The temporal order of collections then does not necessarily imply a causality relationship. We further propose a consistence-based algorithm that outperforms the state-of-the-art recommendation algorithms on disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
A supersymmetric consistent truncation for conifold solutions
Cassani, Davide
2010-01-01
We establish a supersymmetric consistent truncation of type IIB supergravity on the T^{1,1} coset space, based on extending the Papadopoulos-Tseytlin ansatz to the full set of SU(2)xSU(2) invariant Kaluza-Klein modes. The five-dimensional model is a gauged N=4 supergravity with three vector multiplets, which incorporates various conifold solutions and is suitable for the study of their dynamics. By analysing the scalar potential we find a family of new non-supersymmetric AdS_5 extrema interpolating between a solution obtained long ago by Romans and a solution employing an Einstein metric on T^{1,1} different from the standard one. Finally, we discuss some simple consistent subtruncations preserving N=2 supersymmetry. One of them is compatible with the inclusion of smeared D7-branes.
Temporally consistent segmentation of point clouds
Owens, Jason L.; Osteen, Philip R.; Daniilidis, Kostas
2014-06-01
We consider the problem of generating temporally consistent point cloud segmentations from streaming RGB-D data, where every incoming frame extends existing labels to new points or contributes new labels while maintaining the labels for pre-existing segments. Our approach generates an over-segmentation based on voxel cloud connectivity, where a modified k-means algorithm selects supervoxel seeds and associates similar neighboring voxels to form segments. Given the data stream from a potentially mobile sensor, we solve for the camera transformation between consecutive frames using a joint optimization over point correspondences and image appearance. The aligned point cloud may then be integrated into a consistent model coordinate frame. Previously labeled points are used to mask incoming points from the new frame, while new and previous boundary points extend the existing segmentation. We evaluate the algorithm on newly-generated RGB-D datasets.
Foundations of consistent couple stress theory
Hadjesfandiari, Ali R
2015-01-01
In this paper, we examine the recently developed skew-symmetric couple stress theory and demonstrate its inner consistency, natural simplicity and fundamental connection to classical mechanics. This hopefully will help the scientific community to overcome any ambiguity and skepticism about this theory, especially the validity of the skew-symmetric character of the couple-stress tensor. We demonstrate that in a consistent continuum mechanics, the response of infinitesimal elements of matter at each point decomposes naturally into a rigid body portion, plus the relative translation and rotation of these elements at adjacent points of the continuum. This relative translation and rotation captures the deformation in terms of stretches and curvatures, respectively. As a result, the continuous displacement field and its corresponding rotation field are the primary variables, which remarkably is in complete alignment with rigid body mechanics, thus providing a unifying basis. For further clarification, we also exami...
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
20 Employees' Benefits 1 2010-04-01 false. CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
49 Transportation 4 2010-10-01 false. Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Improving reflectance estimation by BRDF-consistent region clustering
Anonymous
2006-01-01
Previous studies in reflectance estimation generally require prior segmentation of an image into regions of uniform reflectance. Due to measurement noise and limited sampling of the BRDF (bidirectional reflectance distribution function) directions, such reflectance estimates are not accurate. In this paper, we propose a novel method for reducing uncertainty in reflectance estimates by merging image regions which have consistent reflectance observations. Each image region acts as a reflectance subspace, so merging of the image regions can result in subspace reduction. We propose a Bayesian segmentation framework to decrease the reflectance uncertainty by using novel merging criteria. Finally, a maximum likelihood reflectance estimate is made for each resulting image region. Experimental results verify the feasibility and superiority of this reflectance-oriented region merging method.
Nonlinear smoothing identification algorithm with application to data consistency checks
Idan, M.
1993-01-01
A parameter identification algorithm for nonlinear systems is presented. It is based on smoothing test data with successively improved sets of model parameters. The smoothing, which is iterative, provides all of the information needed to compute the gradients of the smoothing performance measure with respect to the parameters. The parameters are updated using a quasi-Newton procedure, until convergence is achieved. The advantage of this algorithm over standard maximum likelihood identification algorithms is the computational savings in calculating the gradient. This algorithm was used for flight-test data consistency checks based on a nonlinear model of aircraft kinematics. Measurement biases and scale factors were identified. The advantages of the presented algorithm and model are discussed.
Consistent Linearized Gravity in Brane Backgrounds
Aref'eva, I Ya; Mück, W; Viswanathan, K S; Volovich, I V
2000-01-01
A globally consistent treatment of linearized gravity in the Randall-Sundrum background with matter on the brane is formulated. Using a novel gauge, in which the transverse components of the metric are non-vanishing, the brane is kept straight. We analyze the gauge symmetries and identify the physical degrees of freedom of gravity. Our results underline the necessity for non-gravitational confinement of matter to the brane.
Self-consistent model of fermions
Yershov, V N
2002-01-01
We discuss a composite model of fermions based on three-flavoured preons. We show that the opposite character of the Coulomb and strong interactions between these preons leads to the formation of complex structures reproducing three generations of quarks and leptons with all their quantum numbers and masses. The model is self-consistent (it uses no input parameters). Nevertheless, the masses of the generated structures match the experimental values.
Consistent formulation of the spacelike axial gauge
Burnel, A.; Van der Rest-Jaspers, M.
1983-12-15
The usual formulation of the spacelike axial gauge is afflicted with the difficulty that the metric is indefinite while no ghost is involved. We solve this difficulty by introducing a ghost whose elimination is such that the metric becomes positive for physical states. The technique consists in replacing the gauge condition n·A = 0 by the weaker condition ∂₀(n·A) ≈ 0.
Security Policy: Consistency, Adjustments and Restraining Factors
Yang, Jiemian
2004-01-01
In the 2004 U.S. presidential election, despite sharply divided domestic opinion and Kerry's appealing slogan of "Reversing the Trend," a slight majority still voted for George W. Bush in the end. Based on the author's analysis, it is obvious that security agenda items such as counter-terrorism and the Iraq issue contributed greatly to the reelection of Mr. Bush. This also indicates that the security policy of Bush's second term will remain basically consistent.
Self-consistent structure of metallic hydrogen
Straus, D. M.; Ashcroft, N. W.
1977-01-01
A calculation is presented of the total energy of metallic hydrogen for a family of face-centered tetragonal lattices carried out within the self-consistent phonon approximation. The energy of proton motion is large and proper inclusion of proton dynamics alters the structural dependence of the total energy, causing isotropic lattices to become favored. For the dynamic lattice the structural dependence of terms of third and higher order in the electron-proton interaction is greatly reduced from static lattice equivalents.
Radiometric consistency assessment of hyperspectral infrared sounders
Wang, L.; Han, Y.; Jin, X.; Chen, Y.; Tremblay, D. A.
2015-01-01
The radiometric and spectral consistency among the Atmospheric Infrared Sounder (AIRS), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) is fundamental for the creation of long-term infrared (IR) hyperspectral radiance benchmark datasets for both inter-calibration and climate-related studies. In this study, the CrIS radiance measurements on Suomi National Polar-orbiting Partnership (SNPP) satellite are directly com...
The internal consistency of perfect competition
Jakob Kapeller; Stephan Pühringer
2010-01-01
This article surveys some arguments brought forward in defense of the theory of perfect competition. While some critics propose that the theory of perfect competition, and thus also the theory of the firm, are logically flawed, (mainstream) economists defend their most popular textbook model by a series of apparently different arguments. Here it is examined whether these arguments are comparable, consistent and convincing from the point of view of philosophy of science.
Cloud Standardization: Consistent Business Processes and Information
Razvan Daniel ZOTA
2013-01-01
Cloud computing represents one of the latest emerging trends in distributed computing that enables the existence of hardware infrastructure and software applications as services. The present paper offers a general approach to cloud computing standardization as a means of improving the speed of adoption of cloud technologies. Moreover, this study tries to show how organizations may achieve more consistent business processes while operating with cloud computing technologies.
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic-there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure is sufficiently well-known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of the sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
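The likelihood-scoring idea in this abstract can be illustrated with a deliberately simplified stand-in: instead of a continuity map, the sketch below trains a first-order Markov model with add-one smoothing on invented "typical" sequences and flags sequences whose average transition log-likelihood is unusually low. None of the names or data below come from MALCOM itself.

```python
import math
from collections import defaultdict

def train_transitions(sequences):
    """Count first-order transitions over a set of typical categorical sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    alphabet = set()
    for seq in sequences:
        alphabet.update(seq)
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts, alphabet

def avg_log_likelihood(seq, counts, alphabet):
    """Average per-transition log-likelihood of a sequence, with add-one smoothing."""
    v = len(alphabet)
    total, n = 0.0, 0
    for a, b in zip(seq, seq[1:]):
        row = counts.get(a, {})
        p = (row.get(b, 0) + 1) / (sum(row.values()) + v)
        total += math.log(p)
        n += 1
    return total / max(n, 1)

# Invented "histories" of procedure codes; real MALCOM inputs are patient records.
histories = [list("abcabc"), list("abcabd"), list("abcabc")]
counts, alphabet = train_transitions(histories)
typical_score = avg_log_likelihood(list("abcabc"), counts, alphabet)
anomalous_score = avg_log_likelihood(list("cbacba"), counts, alphabet)
```

A sequence scoring far below the typical range would be flagged for review, mirroring how physicians with anomalous patient histories were referred for independent evaluation.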
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
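The Poisson-likelihood criterion described above can be sketched as a toy grid search over a single line flux. This is not CORA's fixed-point iteration, and the bin grid, line centre, width and background level below are invented for illustration.

```python
import math

def poisson_nll(counts, model):
    # Negative Poisson log-likelihood; the parameter-independent log(k!) term is dropped.
    return sum(m - k * math.log(m) for k, m in zip(counts, model))

def line_model(centers, width, flux, mu=13.5, sigma=0.02, bkg=1.0):
    # Expected counts per bin: flat background plus a Gaussian line of total 'flux' counts.
    peak = flux * width / (sigma * math.sqrt(2.0 * math.pi))
    return [bkg + peak * math.exp(-0.5 * ((x - mu) / sigma) ** 2) for x in centers]

# Simulated noiseless spectrum around an invented 13.5 Angstrom line; true flux = 40 counts.
width = 0.005
centers = [13.4 + width * i for i in range(40)]
observed = [round(m) for m in line_model(centers, width, 40.0)]

# The maximum likelihood flux is the one minimizing the negative log-likelihood.
fluxes = [0.5 * k for k in range(1, 200)]
best = min(fluxes, key=lambda f: poisson_nll(observed, line_model(centers, width, f)))
```

The grid search recovers a flux close to the true 40 counts; CORA replaces this brute-force step with a fixed point equation for the fluxes.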
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first one is given as an upper bound for the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits of space is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: Empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dynamic consistency for Stochastic Optimal Control problems
Carpentier, Pierre; Cohen, Guy; De Lara, Michel; Girardeau, Pierre
2010-01-01
For a sequence of dynamic optimization problems, we aim at discussing a notion of consistency over time. This notion can be informally introduced as follows. At the very first time step $t_0$, the decision maker formulates an optimization problem that yields optimal decision rules for all the forthcoming time steps $t_0, t_1, \ldots, T$; at the next time step $t_1$, he is able to formulate a new optimization problem starting at time $t_1$ that yields a new sequence of optimal decision rules. This process can be continued until final time $T$ is reached. A family of optimization problems formulated in this way is said to be time consistent if the optimal strategies obtained when solving the original problem remain optimal for all subsequent problems. The notion of time consistency, well-known in the field of Economics, has been recently introduced in the context of risk measures, notably by Artzner et al. (2007), and studied in the Stochastic Programming framework by Shapiro (2009) and for Markov Decision Processes...
CMB lens sample covariance and consistency relations
Motloch, Pavel; Hu, Wayne; Benoit-Lévy, Aurélien
2017-02-01
Gravitational lensing information from the two and higher point statistics of the cosmic microwave background (CMB) temperature and polarization fields are intrinsically correlated because they are lensed by the same realization of structure between last scattering and observation. Using an analytic model for lens sample covariance, we show that there is one mode, separately measurable in the lensed CMB power spectra and lensing reconstruction, that carries most of this correlation. Once these measurements become lens sample variance dominated, this mode should provide a useful consistency check between the observables that is largely free of sampling and cosmological parameter errors. Violations of consistency could indicate systematic errors in the data and lens reconstruction or new physics at last scattering, any of which could bias cosmological inferences and delensing for gravitational waves. A second mode provides a weaker consistency check for a spatially flat universe. Our analysis isolates the additional information supplied by lensing in a model-independent manner but is also useful for understanding and forecasting CMB cosmological parameter errors in the extended Λ cold dark matter parameter space of dark energy, curvature, and massive neutrinos. We introduce and test a simple but accurate forecasting technique for this purpose that neither double counts lensing information nor neglects lensing in the observables.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Efficiency at maximum power of a discrete feedback ratchet
Jarillo, Javier; Tangarife, Tomás; Cao, Francisco J.
2016-01-01
Efficiency at maximum power is found to be of the same order for a feedback ratchet and for its open-loop counterpart. However, feedback increases the output power up to a factor of five. This increase in output power is due to the increase in energy input and the effective entropy reduction obtained as a consequence of feedback. Optimal efficiency at maximum power is reached for time intervals between feedback actions two orders of magnitude smaller than the characteristic time of diffusion over a ratchet period length. The efficiency is computed consistently, taking into account the correlation between the control actions. We consider a feedback control protocol for a discrete feedback flashing ratchet, which works against an external load. We maximize the power output by optimizing the parameters of the ratchet, the controller, and the external load. The maximum power output is found to be upper bounded, so the attainable extracted power is limited. Afterwards, we compute an upper bound for the efficiency of this isothermal feedback ratchet at maximum power output. We make this computation by applying recent developments in the thermodynamics of feedback-controlled systems, which give an equation to compute the entropy reduction due to information. However, this equation requires the computation of the probability of each of the possible sequences of the controller's actions. This computation becomes involved when the sequence of the controller's actions is non-Markovian, as is the case in most feedback ratchets. We here introduce an alternative procedure to set strong bounds on the entropy reduction in order to compute its value. In this procedure the bounds are evaluated in a quasi-Markovian limit, which emerges when there are big differences between the stationary probabilities of the system states. These big differences are an effect of the potential strength, which minimizes the departures from the Markovianity of the sequence of control actions, allowing also to
Probable Maximum Earthquake Magnitudes for the Cascadia Subduction
Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.
2013-12-01
The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), the probable maximum magnitude within a time interval T. The mp(T) can be computed using theoretical magnitude-frequency distributions such as the tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, the β-value (which equals 2/3 of the b-value in the GR distribution) and the corner magnitude (mc), can be obtained by applying the maximum likelihood method to earthquake catalogs with an additional constraint from the tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, the rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
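The role of the tapered Gutenberg-Richter distribution in estimating a probable maximum magnitude can be sketched as follows. The parameter values are invented, and defining mp(T) as the magnitude with one expected exceedance in T years is one simple reading of the concept, not necessarily the authors' exact estimator.

```python
import math

def moment(m):
    # Scalar seismic moment in N*m from moment magnitude (Hanks-Kanamori relation).
    return 10.0 ** (1.5 * m + 9.05)

def tgr_survival(m, m_t, beta, m_corner):
    # Tapered Gutenberg-Richter: fraction of events above threshold m_t that exceed m.
    x, xt, xc = moment(m), moment(m_t), moment(m_corner)
    return (xt / x) ** beta * math.exp((xt - x) / xc)

def probable_max_magnitude(rate, years, m_t, beta, m_corner):
    # Smallest magnitude whose expected number of exceedances over 'years' falls to one,
    # found by bisection; 'rate' is the annual rate of events above the threshold m_t.
    lo, hi = m_t, 10.5
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if rate * years * tgr_survival(mid, m_t, beta, m_corner) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameter values only; they are not the paper's Cascadia estimates.
mp = probable_max_magnitude(rate=0.01, years=10000, m_t=5.0, beta=0.65, m_corner=8.8)
```

The exponential taper above the corner magnitude is what keeps mp(T) finite even though the plain Gutenberg-Richter power law has no intrinsic maximum.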
Decomposition of spectra using maximum autocorrelation factors
Larsen, Rasmus
2001-01-01
This paper addresses the problem of generating a low dimensional representation of the variation present in a set of spectra, e.g. reflection spectra recorded from a series of objects. The resulting low dimensional description may subsequently be input through variable selection schemes into classification or regression type analyses. A featured method for low dimensional representation of multivariate datasets is Hotelling's principal components transform. We will extend the use of principal components analysis by incorporating new information into the algorithm. This new information consists … Fourier decomposition; these new variables are located in frequency as well as wavelength. The proposed algorithm is tested on 100 samples of NIR spectra of wheat.
A hybrid solar panel maximum power point search method that uses light and temperature sensors
Ostrowski, Mariusz
2016-04-01
Solar cells have low efficiency and non-linear characteristics. To increase the output power, solar cells are connected in more complex structures. Solar panels consist of series-connected solar cells with a few bypass diodes to avoid the negative effects of partial shading. Solar panels are connected to a special device named the maximum power point tracker. This device adapts the output power from the solar panels to the load requirements and also has a built-in algorithm to track the maximum power point of the panels. Bypass diodes may cause local maxima to appear on the power-voltage curve when the panel surface is illuminated irregularly. In this case, traditional maximum power point tracking algorithms can find only a local maximum power point. In this article a hybrid maximum power point search algorithm is presented. The main goal of the proposed method is a combination of two algorithms: a method that uses temperature sensors to track the maximum power point under partial shading, and a method that uses an illumination sensor to track it under uniform illumination. In comparison to other methods, the proposed algorithm uses correlation functions to determine the relationship between the values of the illumination and temperature sensors and the corresponding values of current and voltage at the maximum power point. Under partial shading, the algorithm calculates local maximum power points based on the temperature values and the correlation function, measures the power at each calculated point, chooses the one with the biggest value, and runs the perturb and observe search algorithm from it. Under uniform illumination, the algorithm calculates the maximum power point based on the illumination value and the correlation function and runs the perturb and observe algorithm from it. In addition, the proposed method uses a special coefficient modification of the correlation functions algorithm. This sub
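The perturb and observe step that the hybrid method falls back on can be sketched on a toy single-peak power-voltage curve; real panel curves under partial shading have several local maxima, which is exactly the complication the sensor-based stage addresses. The curve and step size below are invented.

```python
def perturb_and_observe(power_of, v_start, step=0.05, iters=200):
    """Classic P&O hill climb: keep perturbing the operating voltage in the same
    direction while the measured power rises; reverse direction when it falls."""
    v = v_start
    p_prev = power_of(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = power_of(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return v

def toy_pv_curve(v):
    # Invented single-peak curve; its maximum sits at v = sqrt(20/3), about 2.58.
    return max(0.0, 10.0 * v - 0.5 * v ** 3)

v_mpp = perturb_and_observe(toy_pv_curve, v_start=1.0)
```

Once the tracker reaches the peak it oscillates within one step of it, which is the well-known steady-state ripple of P&O and one reason the hybrid method restarts the search from a sensor-predicted point.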
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
Evaluating Temporal Consistency in Marine Biodiversity Hotspots.
Piacenza, Susan E; Thurman, Lindsey L; Barner, Allison K; Benkwitt, Cassandra E; Boersma, Kate S; Cerny-Chipman, Elizabeth B; Ingeman, Kurt E; Kindinger, Tye L; Lindsley, Amy J; Nelson, Jake; Reimer, Jessica N; Rowe, Jennifer C; Shen, Chenchen; Thompson, Kevin A; Heppell, Selina S
2015-01-01
With the ongoing crisis of biodiversity loss and limited resources for conservation, the concept of biodiversity hotspots has been useful in determining conservation priority areas. However, there has been limited research into how temporal variability in biodiversity may influence conservation area prioritization. To address this information gap, we present an approach to evaluate the temporal consistency of biodiversity hotspots in large marine ecosystems. Using a large scale, public monitoring dataset collected over an eight year period off the US Pacific Coast, we developed a methodological approach for avoiding biases associated with hotspot delineation. We aggregated benthic fish species data from research trawls and calculated mean hotspot thresholds for fish species richness and Shannon's diversity indices over the eight year dataset. We used a spatial frequency distribution method to assign hotspot designations to the grid cells annually. We found no areas containing consistently high biodiversity through the entire study period based on the mean thresholds, and no grid cell was designated as a hotspot for greater than 50% of the time-series. To test if our approach was sensitive to sampling effort and the geographic extent of the survey, we followed a similar routine for the northern region of the survey area. Our finding of low consistency in benthic fish biodiversity hotspots over time was upheld, regardless of biodiversity metric used, whether thresholds were calculated per year or across all years, or the spatial extent for which we calculated thresholds and identified hotspots. Our results suggest that static measures of benthic fish biodiversity off the US West Coast are insufficient for identification of hotspots and that long-term data are required to appropriately identify patterns of high temporal variability in biodiversity for these highly mobile taxa. Given that ecological communities are responding to a changing climate and other
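The designation-and-frequency routine described above can be sketched roughly as follows, with invented richness values for four grid cells over three survey years; a per-year upper-quartile cut stands in for the paper's mean-threshold and spatial frequency distribution method.

```python
# Invented yearly species richness for four grid cells over three survey years.
richness = [
    [12, 30, 18, 25],
    [28, 14, 22, 31],
    [16, 27, 29, 13],
]

def hotspot_frequency(values_by_year, quantile=0.75):
    # Designate per-year hotspots at or above the chosen quantile of that year's
    # values, then report the fraction of years each cell holds the designation.
    n_cells = len(values_by_year[0])
    designations = [0] * n_cells
    for year in values_by_year:
        ranked = sorted(year)
        threshold = ranked[int(quantile * (len(ranked) - 1))]
        for i, value in enumerate(year):
            if value >= threshold:
                designations[i] += 1
    return [d / len(values_by_year) for d in designations]

freq = hotspot_frequency(richness)
# Cells designated in more than half of the years would count as temporally
# consistent hotspots; the paper found none meeting that bar off the US West Coast.
consistent = [i for i, f in enumerate(freq) if f > 0.5]
```

In this toy example two cells clear the 50% bar; the paper's point is that with real benthic fish data over eight years, no cell did.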
Consistency Relations for the Conformal Mechanism
Creminelli, Paolo; Khoury, Justin; Simonović, Marko
2012-01-01
We systematically derive the consistency relations associated to the non-linearly realized symmetries of theories with spontaneously broken conformal symmetry but with a linearly-realized de Sitter subalgebra. These identities relate (N+1)-point correlation functions with a soft external Goldstone to N-point functions. These relations have direct implications for the recently proposed conformal mechanism for generating density perturbations in the early universe. We study the observational consequences, in particular a novel one-loop contribution to the four-point function, relevant for the stochastic scale-dependent bias and CMB mu-distortion.
Consistency relations for the conformal mechanism
Creminelli, Paolo [Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, 34151, Trieste (Italy); Joyce, Austin; Khoury, Justin [Center for Particle Cosmology, Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); Simonović, Marko, E-mail: creminel@ictp.it, E-mail: joyceau@sas.upenn.edu, E-mail: jkhoury@sas.upenn.edu, E-mail: marko.simonovic@sissa.it [SISSA, via Bonomea 265, 34136, Trieste (Italy)
2013-04-01
We systematically derive the consistency relations associated to the non-linearly realized symmetries of theories with spontaneously broken conformal symmetry but with a linearly-realized de Sitter subalgebra. These identities relate (N+1)-point correlation functions with a soft external Goldstone to N-point functions. These relations have direct implications for the recently proposed conformal mechanism for generating density perturbations in the early universe. We study the observational consequences, in particular a novel one-loop contribution to the four-point function, relevant for the stochastic scale-dependent bias and CMB μ-distortion.
Improving analytical tomographic reconstructions through consistency conditions
Arcadu, Filippo; Stampanoni, Marco; Marone, Federica
2016-01-01
This work introduces and characterizes a fast, parameterless filter based on the Helgason-Ludwig consistency conditions, used to improve the accuracy of analytical reconstructions of undersampled tomographic datasets. The filter, acting in the Radon domain, extrapolates intermediate projections between the existing ones. The resulting sinogram, doubled in the number of views, is then reconstructed by a standard analytical method. Experiments with simulated data prove that the peak signal-to-noise ratio of the results computed by filtered backprojection is improved by up to 5-6 dB if the filter is used prior to reconstruction.
Consistency of non-minimal renormalisation schemes
Jack, I
2016-01-01
Non-minimal renormalisation schemes such as the momentum subtraction scheme (MOM) have frequently been used for physical computations. The consistency of such a scheme relies on the existence of a coupling redefinition linking it to MSbar. We discuss the implementation of this procedure in detail for a general theory and show how to construct the relevant redefinition up to three-loop order, for the case of a general theory of fermions and scalars in four dimensions and a general scalar theory in six dimensions.
Gentzen's centenary the quest for consistency
Rathjen, Michael
2015-01-01
Gerhard Gentzen has been described as logic’s lost genius, whom Gödel called a better logician than himself. This work comprises articles by leading proof theorists, attesting to Gentzen’s enduring legacy to mathematical logic and beyond. The contributions range from philosophical reflections and re-evaluations of Gentzen’s original consistency proofs to the most recent developments in proof theory. Gentzen founded modern proof theory. His sequent calculus and natural deduction system beautifully explain the deep symmetries of logic. They underlie modern developments in computer science such as automated theorem proving and type theory.
Consistent Predictions of Future Forest Mortality
McDowell, N. G.
2014-12-01
We examined empirical and model-based estimates of current and future forest mortality of conifers in the northern hemisphere. Consistent water potential thresholds were found that resulted in mortality of our case study species, pinon pine and one-seed juniper. Extending these results with IPCC climate scenarios suggests that most existing trees in this region (SW USA) will be dead by 2050. Further, independent estimates of future mortality for the entire coniferous biome suggest widespread mortality by 2100. The validity, assumptions, and implications of these results are discussed.
Surface consistent finite frequency phase corrections
Kimman, W. P.
2016-07-01
Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray path through the subsurface. There is therefore clear potential for improvement if the finite-frequency nature of wave propagation can be properly accounted for. Such a method is presented here, based on the Born approximation, the assumption of surface consistency, and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well to sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency-dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the nonlinear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as a function of frequency is a slowly varying signal, so its computation does not require fine sampling even for broad-band sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relatively large impact resulting from the medium-field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce distinct frequency-dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved. As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large
Are there consistent models giving observable NSI ?
Martinez, Enrique Fernandez
2013-01-01
While the existing direct bounds on neutrino NSI are rather weak, of order 10^{-1} for propagation and 10^{-2} for production and detection, the close connection between these interactions and new NSI affecting the better-constrained charged lepton sector through gauge invariance makes these bounds hard to saturate in realistic models. Indeed, Standard Model extensions leading to neutrino NSI typically imply constraints at the 10^{-3} level. The question of whether there are consistent models leading to observable neutrino NSI naturally arises, and it was discussed in a dedicated session at NUFACT 11. Here we summarize that discussion.
Consistent thermodynamic properties of lipids systems
Cunico, Larissa; Ceriani, Roberta; Sarup, Bent
Physical and thermodynamic properties of pure components and their mixtures are the basic requirement for process design, simulation, and optimization. In the case of lipids, our previous works [1-3] have indicated a lack of experimental data for pure components and also for their mixtures...... different pressures, with azeotrope behavior observed. Available thermodynamic consistency tests for TPx data were applied before performing parameter regressions for the Wilson, NRTL, UNIQUAC and original UNIFAC models. The relevance of enlarging the experimental databank of lipids systems data in order to improve...
Consistency Checking of Web Service Contracts
Cambronero, M. Emilia; Okika, Joseph C.; Ravn, Anders Peter
2008-01-01
Behavioural properties are analyzed for web service contracts formulated in the Business Process Execution Language (BPEL) and the Choreography Description Language (CDL). The key result reported is an automated technique to check consistency between protocol aspects of the contracts. The contracts...... are abstracted to (timed) automata and from there a simulation is set up, which is checked using automated tools for analyzing networks of finite state processes. Here we use the Concurrency Workbench. The proposed techniques are illustrated with a case study that includes otherwise difficult-to-analyze fault...
Sludge characterization: the role of physical consistency
Spinosa, Ludovico; Wichmann, Knut
2003-07-01
Physical consistency is an important parameter in sewage sludge characterization as it strongly affects almost all treatment, utilization and disposal operations. In addition, many European Directives refer to physical consistency as a characteristic to be evaluated for fulfilling regulatory requirements. Further, many analytical methods for sludge indicate different procedures depending on whether a sample is liquid or not, solid or not. Three physical behaviours (liquid, paste-like and solid) can be observed in sludges, so the development of analytical procedures to define the boundary limit between liquid and paste-like behaviour (flowability) and between solid and paste-like behaviour (solidity) is of growing interest. Several devices can be used for evaluating the flowability and solidity properties, but they are often costly and difficult to operate in the field. Tests have been carried out to evaluate the possibility of adopting a simple extrusion procedure for flowability measurements, and a Vicat needle for solidity measurements. (author)
Probability-consistent spectrum and code spectrum
沈建文; 石树中
2004-01-01
In the seismic safety evaluation (SSE) for key projects, the probability-consistent spectrum (PCS), usually obtained from probabilistic seismic hazard analysis (PSHA), is not consistent with the design response spectrum given by the Code for Seismic Design of Buildings (GB50011-2001); sometimes there may be a remarkable difference between them. If the PCS is lower than the corresponding code design response spectrum (CDS), the seismic fortification criterion for key projects would be lower than that for general industrial and civil buildings. In this paper, the relation between PCS and CDS is discussed using an idealized simple potential seismic source. The results show that in most areas influenced mainly by potential sources of epicentral and regional earthquakes, the PCS is generally lower than the CDS at long periods. We point out that the long-period response spectra of the code should be studied further and combined with the probabilistic method of seismic zoning as much as possible. Because of the uncertainties in SSE, it is prudent to be cautious in using the long-period response spectra given by SSE for key projects when they are lower than the CDS.
Consistent mutational paths predict eukaryotic thermostability
van Noort Vera
2013-01-01
Background: Proteomes of thermophilic prokaryotes have been instrumental in structural biology and successfully exploited in biotechnology; however, many proteins required for eukaryotic cell function are absent from bacteria or archaea. With Chaetomium thermophilum, Thielavia terrestris and Thielavia heterothallica, three genome sequences of thermophilic eukaryotes have been published. Results: Studying the genomes and proteomes of these thermophilic fungi, we found common strategies of thermal adaptation across the different kingdoms of Life, including amino acid biases and a reduced genome size. A phylogenetics-guided comparison of thermophilic proteomes with those of other, mesophilic Sordariomycetes revealed consistent amino acid substitutions associated with thermophily that were also present in an independent lineage of thermophilic fungi. The most consistent pattern is the substitution of lysine by arginine, which we could find in almost all lineages but which has not been extensively used in protein stability engineering. By exploiting mutational paths towards the thermophiles, we could predict particular amino acid residues in individual proteins that contribute to thermostability and validated some of them experimentally. By determining the three-dimensional structure of an exemplar protein from C. thermophilum (Arx1), we could also characterise the molecular consequences of some of these mutations. Conclusions: The comparative analysis of these three genomes not only enhances our understanding of the evolution of thermophily, but also provides new ways to engineer protein stability.
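The lysine-to-arginine substitution pattern described in this abstract is straightforward to tally from a pairwise alignment. A minimal sketch with toy, hypothetical sequences (not the authors' phylogenetics-guided pipeline):

```python
def substitution_counts(meso, thermo):
    """Count amino acid substitutions between two aligned sequences.

    `meso` and `thermo` are equal-length aligned strings ('-' marks a gap).
    Returns a dict mapping (mesophile_aa, thermophile_aa) pairs to counts,
    so a thermophily-associated K->R bias shows up as a large ('K', 'R') count.
    """
    assert len(meso) == len(thermo), "sequences must be aligned"
    counts = {}
    for a, b in zip(meso, thermo):
        if a != b and a != '-' and b != '-':
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts
```

Summing such counts over many orthologue alignments would expose the dominant substitution directions.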
Viewpoint Consistency: An Eye Movement Study
Filipe Cristino
2012-05-01
Eye movements have been widely studied, using images and videos in laboratories or portable eye trackers in the real world. Although a good understanding of the saccadic system and extensive models of gaze have been developed over the years, only a few studies have focused on the consistency of eye movements across viewpoints. We have developed a new technique to compute and map the depth of collected eye movements on stimuli rendered from 3D mesh objects using a traditional corneal-reflection eye tracker (SR Eyelink 1000). Having eye movements mapped into 3D space (and not onto an image space) allowed us to compare fixations across viewpoints. Fixation sequences (scanpaths) were also studied across viewpoints using the ScanMatch method (Cristino et al 2010, Behavior Research Methods 42, 692–700), extended to work with 3D eye movements. In a set of experiments where participants were asked to perform a recognition task on either a set of objects or faces, we recorded their gaze while performing the task. Participants viewed the stimuli either in 2D or using anaglyph glasses. The stimuli were shown from different viewpoints during the learning and testing phases. A high degree of gaze consistency was found across the different viewpoints, particularly between learning and testing phases. Scanpaths were also similar across viewpoints, suggesting not only that the gazed spatial locations are alike, but also their temporal order.
Subgame consistent cooperation a comprehensive treatise
Yeung, David W K
2016-01-01
Strategic behavior in the human and social world has been increasingly recognized in theory and practice. It is well known that non-cooperative behavior can lead to suboptimal or even highly undesirable outcomes. Cooperation suggests the possibility of obtaining socially optimal solutions, and calls for cooperation are prevalent in real-life problems. Dynamic cooperation cannot be sustained if there is no guarantee that the optimality principle agreed upon at the beginning is maintained throughout the duration of cooperation. It is due to the lack of such guarantees that cooperative schemes fail to last until the end, or even fail to get started. The property of subgame consistency in cooperative dynamic games and the corresponding solution mechanism resolve this "classic" problem in game theory. This book is a comprehensive treatise on subgame consistent dynamic cooperation, covering the up-to-date state-of-the-art analyses in this important topic. It sets out to provide the theory, solution tec...
Shape Modelling Using Maximum Autocorrelation Factors
Larsen, Rasmus
2001-01-01
This paper addresses the problems of generating a low dimensional representation of the shape variation present in a training set after alignment using Procrustes analysis and projection into shape tangent space. We will extend the use of principal components analysis in the original formulation of Active Shape Models by Timothy Cootes and Christopher Taylor by building new information into the model. This new information consists of two types of prior knowledge. First, in many situations we will be given an ordering of the shapes of the training set. This situation occurs when the shapes of the training set are in reality a time series, e.g. snapshots of a beating heart during the cardiac cycle, or when the shapes are slices of a 3D structure, e.g. the spinal cord. Second, in almost all applications a natural order of the landmark points along the contour of the shape is introduced...
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
(Anonymous)
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2–119.4 for its maximum amplitude.
Consistency and sealing of advanced bipolar tissue sealers.
Chekan, Edward G; Davison, Mark A; Singleton, David W; Mennone, John Z; Hinoul, Piet
2015-01-01
The aim of this study was to evaluate two commonly used advanced bipolar devices (ENSEAL(®) G2 Tissue Sealers and LigaSure™ Blunt Tip) for compression uniformity, vessel sealing strength, and consistency in bench-top analyses. Compression analysis was performed with a foam pad/sensor apparatus inserted between the closed jaws of the instruments. Average pressures (psi) were recorded across the entire inside surface of the jaws, and over the distal one-third of the jaws. To test vessel sealing strength, ex vivo pig carotid arteries were sealed and transected, and the left and right (sealed) halves of the vessels were subjected to burst pressure testing. The maximum bursting pressures of each half of the vessels were averaged to obtain single data points for analysis. The absence or presence of tissue sticking to the device jaws was noted for each transected vessel. Statistically higher average compression values were found for ENSEAL(®) instruments (curved jaw and straight jaw) compared to LigaSure™. In bench-top testing, ENSEAL(®) G2 sealers produced more uniform compression, stronger and more consistent vessel sealing, and reduced tissue sticking relative to LigaSure™.
2010-10-01
... vent, maximum trap size, and ghost panel requirements. 697.21 Section 697.21 Wildlife and Fisheries... identification and marking, escape vent, maximum trap size, and ghost panel requirements. (a) Gear identification... Administrator finds to be consistent with paragraph (c) of this section. (d) Ghost panel. (1) Lobster traps...
Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.
2016-01-01
The aim of this study is to determine the difference in variance between maximum likelihood and expected a posteriori estimation methods as a function of the number of items in an aptitude test. The variance reflects the accuracy achieved by both the maximum likelihood and Bayesian estimation methods. The test consists of three subtests, each with 40 multiple-choice…
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
Consistent evolution in a pedestrian flow
Guan, Junbiao; Wang, Kaihua
2016-03-01
In this paper, pedestrian evacuation considering different human behaviors is studied using a cellular automaton (CA) model combined with snowdrift game theory. The evacuees are divided into two types, i.e. cooperators and defectors, and two different human behaviors, herding behavior and independent behavior, are investigated. It is found from a large number of numerical simulations that the ratios of the corresponding evacuee clusters evolve to consistent states despite 11 typically different initial conditions, which may largely be due to a self-organization effect. Moreover, an appropriate proportion of initial defectors exhibiting herding behavior, coupled with an appropriate proportion of initial defectors exhibiting rationally independent thinking, are two necessary factors for a short evacuation time.
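For reference, the pairwise payoff structure of the snowdrift game used in such CA models can be sketched as follows (the parameter values b and c are illustrative assumptions, not taken from the paper):

```python
def snowdrift_payoff(me_cooperates, other_cooperates, b=1.0, c=0.6):
    """Pairwise snowdrift game payoff (illustrative parameter values).

    Both players gain the benefit b if the 'snowdrift' is cleared, i.e.
    if anyone cooperates; cooperators split the cost c when both
    cooperate, or bear it alone when facing a defector.
    """
    if me_cooperates and other_cooperates:
        return b - c / 2.0        # reward R
    if me_cooperates and not other_cooperates:
        return b - c              # sucker's payoff S
    if not me_cooperates and other_cooperates:
        return b                  # temptation T
    return 0.0                    # punishment P
```

For 0 < c < b this gives the snowdrift ordering T > R > S > P, under which defection is no longer the unconditionally best reply.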
Consistency of warm k-inflation
Peng, Zhi-Peng; Zhang, Xiao-Min; Zhu, Jian-Yang
2016-01-01
We extend the k-inflation which is a type of kinetically driven inflationary model under the standard inflationary scenario to a possible warm inflationary scenario. The dynamical equations of this warm k-inflation model are obtained. We rewrite the slow-roll parameters which are different from the usual potential driven inflationary models and perform a linear stability analysis to give the proper slow-roll conditions in the warm k-inflation. Two cases, a power-law kinetic function and an exponential kinetic function, are studied, when the dissipative coefficient $\\Gamma=\\Gamma_0$ and $\\Gamma=\\Gamma(\\phi)$, respectively. A proper number of e-folds is obtained in both concrete cases of warm k-inflation. We find a constant dissipative coefficient ($\\Gamma=\\Gamma_0$) is not a workable choice for these two cases while the two cases with $\\Gamma=\\Gamma(\\phi)$ are self-consistent warm inflationary models.
Compact difference approximation with consistent boundary condition
FU Dexun; MA Yanwen; LI Xinliang; LIU Mingyu
2003-01-01
For simulating multi-scale complex flow fields, it should be noted that all the physical quantities of interest must be simulated well. Given limited computer resources, high-order accurate difference schemes are preferred. Because of their high accuracy and small grid-point stencils, computational fluid dynamics (CFD) researchers have recently paid more attention to compact schemes. For simulating complex flow fields, the treatment of boundary conditions at the far-field boundary points and near the far-field boundary points is very important. Based on the authors' experience and published results, some aspects of boundary condition treatment for the far-field boundary are presented, with emphasis on the treatment of boundary conditions for upwind compact schemes. The consistent treatment of boundary conditions at the near-boundary points is also discussed. Some numerical examples are given at the end of the paper. The results computed with the presented method are satisfactory.
Reliability and Consistency of Surface Contamination Measurements
Rouppert, F.; Rivoallan, A.; Largeron, C.
2002-02-26
Surface contamination evaluation is a tough problem since it is difficult to isolate the radiation emitted by the surface, especially in a highly irradiating atmosphere. In that case the only possibility is to evaluate smearable (removable) contamination, since ex-situ counting is possible. Unfortunately, according to our experience at CEA, these values are not consistent and thus not relevant. In this study, we show, using in-situ Fourier transform infrared spectrometry on contaminated metal samples, that fixed contamination appears to be chemisorbed and removable contamination appears to be physisorbed. The distribution between fixed and removable contamination appears to be variable. Chemical equilibria and reversible ion-exchange mechanisms are involved and are closely linked to environmental conditions such as humidity and temperature. Measurements of smearable contamination only give an indication of the state of these equilibria between fixed and removable contamination at the time, and under the environmental conditions, the measurements were made.
Evaluating the hydrological consistency of evaporation products
López, Oliver
2017-01-18
Advances in space-based observations have provided the capacity to develop regional- to global-scale estimates of evaporation, offering insights into this key component of the hydrological cycle. However, the evaluation of large-scale evaporation retrievals is not a straightforward task. While a number of studies have intercompared a range of these evaporation products by examining the variance amongst them, or by comparison of pixel-scale retrievals against ground-based observations, there is a need to explore more appropriate techniques to comprehensively evaluate remote-sensing-based estimates. One possible approach is to establish the level of product agreement between related hydrological components: for instance, how well do evaporation patterns and response match with precipitation or water storage changes? To assess the suitability of this "consistency"-based approach for evaluating evaporation products, we focused our investigation on four globally distributed basins in arid and semi-arid environments, comprising the Colorado River basin, Niger River basin, Aral Sea basin, and Lake Eyre basin. In an effort to assess retrieval quality, three satellite-based global evaporation products based on different methodologies and input data, including CSIRO-PML, the MODIS Global Evapotranspiration product (MOD16), and Global Land Evaporation: the Amsterdam Methodology (GLEAM), were evaluated against rainfall data from the Global Precipitation Climatology Project (GPCP) along with Gravity Recovery and Climate Experiment (GRACE) water storage anomalies. To ensure a fair comparison, we evaluated consistency using a degree correlation approach after transforming both evaporation and precipitation data into spherical harmonics. Overall we found no persistent hydrological consistency in these dryland environments. Indeed, the degree correlation showed oscillating values between periods of low and high water storage changes, with a phase difference of about 2–3 months
Consistency of canonical formulation of Horava gravity
Soo, Chopin, E-mail: cpsoo@mail.ncku.edu.tw [Department of Physics, National Cheng Kung University, Tainan, Taiwan (China)
2011-09-22
Both the non-projectable and projectable version of Horava gravity face serious challenges. In the non-projectable version, the constraint algebra is seemingly inconsistent. The projectable version lacks a local Hamiltonian constraint, thus allowing for an extra graviton mode which can be problematic. A new formulation (based on arXiv:1007.1563) of Horava gravity which is naturally realized as a representation of the master constraint algebra (instead of the Dirac algebra) studied by loop quantum gravity researchers is presented. This formulation yields a consistent canonical theory with first class constraints; and captures the essence of Horava gravity in retaining only spatial diffeomorphisms as the physically relevant non-trivial gauge symmetry. At the same time the local Hamiltonian constraint is equivalently enforced by the master constraint.
Trisomy 21 consistently activates the interferon response.
Sullivan, Kelly D; Lewis, Hannah C; Hill, Amanda A; Pandey, Ahwan; Jackson, Leisa P; Cabral, Joseph M; Smith, Keith P; Liggett, L Alexander; Gomez, Eliana B; Galbraith, Matthew D; DeGregori, James; Espinosa, Joaquín M
2016-07-29
Although it is clear that trisomy 21 causes Down syndrome, the molecular events acting downstream of the trisomy remain ill defined. Using complementary genomics analyses, we identified the interferon pathway as the major signaling cascade consistently activated by trisomy 21 in human cells. Transcriptome analysis revealed that trisomy 21 activates the interferon transcriptional response in fibroblast and lymphoblastoid cell lines, as well as circulating monocytes and T cells. Trisomy 21 cells show increased induction of interferon-stimulated genes and decreased expression of ribosomal proteins and translation factors. An shRNA screen determined that the interferon-activated kinases JAK1 and TYK2 suppress proliferation of trisomy 21 fibroblasts, and this defect is rescued by pharmacological JAK inhibition. Therefore, we propose that interferon activation, likely via increased gene dosage of the four interferon receptors encoded on chromosome 21, contributes to many of the clinical impacts of trisomy 21, and that interferon antagonists could have therapeutic benefits.
On the consistent use of Constructed Observables
Trott, Michael
2015-01-01
We define "constructed observables" as relating experimental measurements to terms in a Lagrangian, while simultaneously making assumptions about possible deviations from the Standard Model (SM) in other Lagrangian terms. Ensuring that the SM effective field theory (EFT) is constrained correctly when using constructed observables requires that their defining conditions are imposed on the EFT in a manner that is consistent with the equations of motion. Failing to do so can result in a "functionally redundant" operator basis and the wrong expectation as to how experimental quantities are related in the EFT. We illustrate the issues involved considering the $\rm S$ parameter and the off-shell triple gauge coupling (TGC) vertices. We show that the relationships between $h \rightarrow V \bar{f} \, f$ decay and the off-shell TGC vertices are subject to these subtleties, and how the connections between these observables vanish in the limit of strong bounds due to LEP. The challenge of using constructed observables...
Consistently weighted measures for complex network topologies
Heitzig, Jobst; Zou, Yong; Marwan, Norbert; Kurths, Jürgen
2011-01-01
When network and graph theory are used in the study of complex systems, a typically finite set of nodes of the network under consideration is frequently either explicitly or implicitly considered representative of a much larger finite or infinite set of objects of interest. The selection procedure, e.g., formation of a subset or some kind of discretization or aggregation, typically results in individual nodes of the studied network representing quite differently sized parts of the domain of interest. This heterogeneity may induce substantial bias and artifacts in derived network statistics. To avoid this bias, we propose an axiomatic scheme based on the idea of node splitting invariance to derive consistently weighted variants of various commonly used statistical network measures. The practical relevance and applicability of our approach is demonstrated for a number of example networks from different fields of research, and is shown to be of fundamental importance in particular in the study of climate n...
Consistent 4-form fluxes for maximal supergravity
Godazgar, Hadi; Krueger, Olaf; Nicolai, Hermann
2015-01-01
We derive new ansaetze for the 4-form field strength of D=11 supergravity corresponding to uplifts of four-dimensional maximal gauged supergravity. In particular, the ansaetze directly yield the components of the 4-form field strength in terms of the scalars and vectors of the four-dimensional maximal gauged supergravity---in this way they provide an explicit uplift of all four-dimensional consistent truncations of D=11 supergravity. The new ansaetze provide a substantially simpler method for uplifting d=4 flows compared to the previously available method using the 3-form and 6-form potential ansaetze. The ansatz for the Freund-Rubin term allows us to conjecture a `master formula' for the latter in terms of the scalar potential of d=4 gauged supergravity and its first derivative. We also resolve a long-standing puzzle concerning the antisymmetry of the flux obtained from uplift ansaetze.
Quantum cosmological consistency condition for inflation
Calcagni, Gianluca [Instituto de Estructura de la Materia, CSIC, calle Serrano 121, 28006 Madrid (Spain); Kiefer, Claus [Institut für Theoretische Physik, Universität zu Köln, Zülpicher Strasse 77, 50937 Köln (Germany); Steinwachs, Christian F., E-mail: calcagni@iem.cfmac.csic.es, E-mail: kiefer@thp.uni-koeln.de, E-mail: christian.steinwachs@physik.uni-freiburg.de [Physikalisches Institut, Albert-Ludwigs-Universität Freiburg, Hermann-Herder-Str. 3, 79104 Freiburg (Germany)
2014-10-01
We investigate the quantum cosmological tunneling scenario for inflationary models. Within a path-integral approach, we derive the corresponding tunneling probability distribution. A sharp peak in this distribution can be interpreted as the initial condition for inflation and therefore as a quantum cosmological prediction for its energy scale. This energy scale is also a genuine prediction of any inflationary model by itself, as the primordial gravitons generated during inflation leave their imprint in the B-polarization of the cosmic microwave background. In this way, one can derive a consistency condition for inflationary models that guarantees compatibility with a tunneling origin and can lead to a testable quantum cosmological prediction. The general method is demonstrated explicitly for the model of natural inflation.
Internal Branding and Employee Brand Consistent Behaviours
Mazzei, Alessandra; Ravazzani, Silvia
2017-01-01
Employee behaviours conveying brand values, named brand consistent behaviours, affect the overall brand evaluation. Internal branding literature highlights a knowledge gap in terms of communication practices intended to sustain such behaviours. This study contributes to the development of a non-normative and constitutive approach to internal branding by proposing an enablement-oriented communication approach. The conceptual background presents a holistic model of the inside-out process of brand building. This model adopts a theoretical approach to internal branding as a nonnormative practice that facilitates...... constitutive processes. In particular, the paper places emphasis on the role and kinds of communication practices as a central part of the nonnormative and constitutive internal branding process. The paper also discusses an empirical study based on interviews with 32 Italian and American communication managers...
Thermodynamically consistent model calibration in chemical kinetics
Goutsias John
2011-05-01
Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new
Asymptotic properties of maximum likelihood estimators in models with multiple change points
He, Heping; 10.3150/09-BEJ232
2011-01-01
Models with multiple change points are used in many fields; however, the theoretical properties of maximum likelihood estimators of such models have received relatively little attention. The goal of this paper is to establish the asymptotic properties of maximum likelihood estimators of the parameters of a multiple change-point model for a general class of models in which the form of the distribution can change from segment to segment and in which, possibly, there are parameters that are common to all segments. Consistency of the maximum likelihood estimators of the change points is established and the rate of convergence is determined; the asymptotic distribution of the maximum likelihood estimators of the parameters of the within-segment distributions is also derived. Since the approach used in single change-point models is not easily extended to multiple change-point models, these results require the introduction of new tools for analyzing the likelihood function in a multiple change-point model.
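The simplest instance of the estimators discussed above can be sketched in code: a single change in the mean of a unit-variance Gaussian sequence, with the change point estimated by maximizing the segment-wise likelihood (equivalently, minimizing the within-segment sum of squares). The data and function names below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: the mean shifts from 0 to 2 at the true change point tau = 120.
n, tau_true = 200, 120
x = np.concatenate([rng.normal(0.0, 1.0, tau_true),
                    rng.normal(2.0, 1.0, n - tau_true)])

def mle_change_point(x):
    """Maximize the Gaussian log-likelihood over all candidate change points.

    For a unit-variance normal model this is equivalent to minimizing the
    within-segment residual sum of squares on each side of the split.
    """
    best_tau, best_rss = None, np.inf
    for t in range(1, len(x)):
        left, right = x[:t], x[t:]
        rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if rss < best_rss:
            best_tau, best_rss = t, rss
    return best_tau

tau_hat = mle_change_point(x)
```

Consistency as established in the paper means that, as n grows, tau_hat concentrates near the true change fraction; the brute-force search here is O(n^2) and is only meant to make the estimator concrete.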
The relationship between the Guinea Highlands and the West African offshore rainfall maximum
Hamilton, H. L.; Young, G. S.; Evans, J. L.; Fuentes, J. D.; Núñez Ocasio, K. M.
2017-01-01
Satellite rainfall estimates reveal a consistent rainfall maximum off the West African coast during the monsoon season. An analysis of 16 years of rainfall in the monsoon season is conducted to explore the drivers of such copious amounts of rainfall. Composites of daily rainfall and midlevel meridional winds centered on the days with maximum rainfall show that the day with the heaviest rainfall follows the strongest midlevel northerlies but coincides with peak low-level moisture convergence. Rain type composites show that convective rain dominates the study region. The dominant contribution to the offshore rainfall maximum is convective development driven by the enhancement of upslope winds near the Guinea Highlands. The enhancement in the upslope flow is closely related to African easterly waves propagating off the continent that generate low-level cyclonic vorticity and convergence. Numerical simulations reproduce the observed rainfall maximum and indicate that it weakens if the African topography is reduced.
Investigation on the Maximum Power Point in Solar Panel Characteristics Due to Irradiance Changes
Abdullah, M. A.; Fauziah Toha, Siti; Ahmad, Salmiah
2017-03-01
One of the disadvantages of the photovoltaic module compared to other renewable resources is the dynamic character of solar irradiance due to inconsistent weather conditions and surrounding temperature. Commonly, a photovoltaic power generation system includes an embedded control system to maximize the power generated despite this inconsistency in irradiance. In order to simplify the power optimization control, this paper presents the characteristics of the Maximum Power Point at various irradiance levels for Maximum Power Point Tracking (MPPT). The technique requires a set of data from a photovoltaic simulation model to be extrapolated as a standard relationship between irradiance and maximum power. The result shows that the relationship between irradiance and maximum power can be represented by a simplified quadratic equation.
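The simplified quadratic relationship reported above can be sketched as follows; the (irradiance, maximum power) samples are hypothetical stand-ins for data extracted from a photovoltaic simulation model, not values from the paper.

```python
import numpy as np

# Hypothetical (irradiance, maximum power) samples, such as might be
# produced by sweeping a photovoltaic simulation model; values illustrative.
irradiance = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])  # W/m^2
p_max      = np.array([38.0, 79.0, 122.0, 166.0, 210.0])     # W

# Fit the simplified quadratic relationship P_max = a*G^2 + b*G + c.
a, b, c = np.polyfit(irradiance, p_max, deg=2)

def predict_pmax(g):
    """Estimate the maximum power point for irradiance g (W/m^2)."""
    return a * g ** 2 + b * g + c
```

An MPPT controller could use such a fitted curve as a fast feed-forward reference for the operating point, refining it online with a conventional tracking loop.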
Consistency and sealing of advanced bipolar tissue sealers
Chekan EG
2015-04-01
Edward G Chekan, Mark A Davison, David W Singleton, John Z Mennone, Piet Hinoul Ethicon, Inc., Cincinnati, OH, USA Objectives: The aim of this study was to evaluate two commonly used advanced bipolar devices (ENSEAL® G2 Tissue Sealers and LigaSure™ Blunt Tip) for compression uniformity, vessel sealing strength, and consistency in bench-top analyses. Methods: Compression analysis was performed with a foam pad/sensor apparatus inserted between the closed jaws of the instruments. Average pressures (psi) were recorded across the entire inside surface of the jaws, and over the distal one-third of the jaws. To test vessel sealing strength, ex vivo pig carotid arteries were sealed and transected, and the left and right (sealed) halves of vessels were subjected to burst pressure testing. The maximum bursting pressures of each half of the vessels were averaged to obtain single data points for analysis. The absence or presence of tissue sticking to the device jaws was noted for each transected vessel. Results: Statistically higher average compression values were found for the ENSEAL® instruments (curved jaw and straight jaw) compared to LigaSure™, P<0.05. Moreover, the ENSEAL® devices retained full compression at the distal end of the jaws. Significantly higher and more consistent median burst pressures were noted for the ENSEAL® devices relative to LigaSure™ through 52 firings of each device (P<0.05). LigaSure™ showed a significant reduction in median burst pressure for the final three firings (cycles 50–52) versus the first three firings (cycles 1–3), P=0.027. Tissue sticking was noted for 1.39% and 13.3% of vessels transected with ENSEAL® and LigaSure™, respectively. Conclusion: In bench-top testing, ENSEAL® G2 sealers produced more uniform compression, stronger and more consistent vessel sealing, and reduced tissue sticking relative to LigaSure™. Keywords: ENSEAL, sealing, burst pressure, laparoscopic, compression, LigaSure
On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method.
Roux, Benoît; Weare, Jonathan
2013-02-28
An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method.
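The maximum entropy biasing discussed above reduces, in its simplest scalar form, to finding the Lagrange multiplier that makes the exponentially reweighted ensemble average of one observable match an experimental target. The sketch below is a minimal illustration under that assumption; the sample data and names are hypothetical, and real applications involve many coupled observables determined iteratively.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observable sampled from an unbiased simulation ensemble.
samples = rng.normal(0.0, 1.0, 10_000)
target = 0.5  # experimental ensemble average to be matched

def reweighted_mean(lam, s):
    """Average of s under maximum-entropy weights w_i proportional to exp(lam * s_i)."""
    w = np.exp(lam * (s - s.max()))  # shift the exponent to avoid overflow
    return (w * s).sum() / w.sum()

# Solve reweighted_mean(lam) = target by bisection; the reweighted mean is
# monotone increasing in lam (its derivative is the tilted variance, > 0).
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if reweighted_mean(mid, samples) < target:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
```

The paper's point is that restrained-ensemble simulations reproduce exactly this kind of minimally biased distribution in the appropriate limit, without solving for the multipliers explicitly.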
Jie Li DING; Xi Ru CHEN
2006-01-01
For generalized linear models (GLM), in the case where the regressors are stochastic and have different distributions, the asymptotic properties of the maximum likelihood estimate (MLE) β̂_n of the parameters are studied. Under reasonable conditions, we prove the weak and strong consistency and asymptotic normality of β̂_n.
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
Consistent lattice Boltzmann equations for phase transitions.
Siebert, D N; Philippi, P C; Mattila, K K
2014-11-01
Unlike conventional computational fluid dynamics methods, the lattice Boltzmann method (LBM) describes the dynamic behavior of fluids on a mesoscopic scale based on discrete forms of kinetic equations. On this scale, complex macroscopic phenomena like the formation and collapse of interfaces can be naturally described as related to source terms incorporated into the kinetic equations. In this context, a novel athermal lattice Boltzmann scheme for the simulation of phase transition is proposed. The continuous kinetic model obtained from the Liouville equation using the mean-field interaction force approach is shown to be consistent with a diffuse-interface model based on the Helmholtz free energy. Density profiles, interface thickness, and surface tension are analytically derived for a plane liquid-vapor interface. A discrete form of the kinetic equation is then obtained by applying the quadrature method based on prescribed abscissas together with a third-order scheme for the discretization of the streaming or advection term in the Boltzmann equation. Spatial derivatives in the source terms are approximated with high-order schemes. The numerical validation of the method is performed by measuring the speed of sound as well as by retrieving the coexistence curve and the interface density profiles. The appearance of spurious currents near the interface is investigated. The simulations are performed with the equations of state of van der Waals, Redlich-Kwong, Redlich-Kwong-Soave, Peng-Robinson, and Carnahan-Starling.
Exploring the Consistent behavior of Information Services
Kapidakis Sarantos
2016-01-01
Computer services are normally assumed to work well all the time. This usually holds for crucial services like electronic banking, but not necessarily for others in whose operation there is no commercial interest. In this work we examined the operation and the errors of information services and looked for clues that help predict the consistency of their behavior and the quality of harvesting, which is difficult because of transient conditions, the large number of services, and the huge amount of harvested information. We found many unexpected situations. Services that appear to satisfy a request successfully may in fact return only part of it. A significant portion of the OAI services have ceased working, while many other services occasionally fail to respond. Some services fail in the same way each time, and we pronounce them dead, as we see no way to overcome that. Others fail always or only sometimes, but not in the same way, and we hope that their behavior is affected by temporary factors that may improve later on. We categorized the services into classes to study their behavior in more detail.
Consistent quadrupole-octupole collective model
Dobrowolski, A.; Mazurek, K.; Góźdź, A.
2016-11-01
Within this work we present a consistent approach to quadrupole-octupole collective vibrations coupled with rotational motion. A realistic collective Hamiltonian with a variable mass-parameter tensor and a potential obtained through the macroscopic-microscopic Strutinsky-like method with the particle-number-projected BCS (Bardeen-Cooper-Schrieffer) approach in the full vibrational and rotational, nine-dimensional collective space is diagonalized in the basis of projected harmonic oscillator eigensolutions. This orthogonal basis of zero-, one-, two-, and three-phonon oscillator-like functions in the vibrational part, coupled with the corresponding Wigner function, is, in addition, symmetrized with respect to the so-called symmetrization group appropriate to the collective space of the model. In the present model it is the D4 group acting in the body-fixed frame. This symmetrization procedure is applied in order to ensure the uniqueness of the Hamiltonian eigensolutions with respect to the laboratory coordinate system. The symmetrization is obtained using the projection onto the irreducible representation technique. The model generates the quadrupole ground-state spectrum as well as the lowest negative-parity spectrum in the 156Gd nucleus. The interband and intraband B(E1) and B(E2) reduced transition probabilities are also calculated within those bands and compared with recent experimental results for this nucleus. Such a collective approach is helpful in searching for fingerprints of possible high-rank symmetries (e.g., octahedral and tetrahedral) in nuclear collective bands.
A Consistent Phylogenetic Backbone for the Fungi
Ebersberger, Ingo; de Matos Simoes, Ricardo; Kupczok, Anne; Gube, Matthias; Kothe, Erika; Voigt, Kerstin; von Haeseler, Arndt
2012-01-01
The kingdom of fungi provides model organisms for biotechnology, cell biology, genetics, and life sciences in general. Only when their phylogenetic relationships are stably resolved, can individual results from fungal research be integrated into a holistic picture of biology. However, and despite recent progress, many deep relationships within the fungi remain unclear. Here, we present the first phylogenomic study of an entire eukaryotic kingdom that uses a consistency criterion to strengthen phylogenetic conclusions. We reason that branches (splits) recovered with independent data and different tree reconstruction methods are likely to reflect true evolutionary relationships. Two complementary phylogenomic data sets based on 99 fungal genomes and 109 fungal expressed sequence tag (EST) sets analyzed with four different tree reconstruction methods shed light from different angles on the fungal tree of life. Eleven additional data sets address specifically the phylogenetic position of Blastocladiomycota, Ustilaginomycotina, and Dothideomycetes, respectively. The combined evidence from the resulting trees supports the deep-level stability of the fungal groups toward a comprehensive natural system of the fungi. In addition, our analysis reveals methodologically interesting aspects. Enrichment for EST encoded data—a common practice in phylogenomic analyses—introduces a strong bias toward slowly evolving and functionally correlated genes. Consequently, the generalization of phylogenomic data sets as collections of randomly selected genes cannot be taken for granted. A thorough characterization of the data to assess possible influences on the tree reconstruction should therefore become a standard in phylogenomic analyses. PMID:22114356
Volume Haptics with Topology-Consistent Isosurfaces.
Corenthy, Loïc; Otaduy, Miguel A; Pastor, Luis; Garcia, Marcos
2015-01-01
Haptic interfaces offer an intuitive way to interact with and manipulate 3D datasets, and may simplify the interpretation of visual information. This work proposes an algorithm to provide haptic feedback directly from volumetric datasets, as an aid to regular visualization. The haptic rendering algorithm lets users perceive isosurfaces in volumetric datasets, and it relies on several design features that ensure robust and efficient rendering. A marching tetrahedra approach enables the dynamic extraction of a piecewise linear continuous isosurface. Robustness is achieved using a continuous collision detection step coupled with state-of-the-art proxy-based rendering methods over the extracted isosurface. The introduced marching tetrahedra approach guarantees that the extracted isosurface will match the topology of an equivalent isosurface computed using trilinear interpolation. The proposed haptic rendering algorithm improves the consistency between haptic and visual cues by computing a second proxy on the isosurface displayed on screen. Our experiments demonstrate the improvements in the isosurface extraction stage as well as the robustness and efficiency of the complete algorithm.
Consistency between GRUAN sondes, LBLRTM and IASI
X. Calbet
2017-06-01
Radiosonde soundings from the GCOS Reference Upper-Air Network (GRUAN) data record are shown to be consistent with Infrared Atmospheric Sounding Instrument (IASI)-measured radiances via LBLRTM (Line-By-Line Radiative Transfer Model) in the part of the spectrum that is mostly affected by water vapour absorption in the upper troposphere (from 700 hPa up). This result is key for climate data records, since GRUAN, IASI and LBLRTM constitute reference measurements or a reference radiative transfer model in each of their fields. This is especially the case for night-time radiosonde measurements. Although the sample size is small (16 cases), daytime GRUAN radiosonde measurements seem to have a small dry bias of 2.5 % in absolute terms of relative humidity, located mainly in the upper troposphere, with respect to LBLRTM and IASI. Full metrological closure is not yet possible and will not be until collocation uncertainties are better characterized and a full uncertainty covariance matrix is clarified for GRUAN.
Retrocausation, Consistency, and the Bilking Paradox
Dobyns, York H.
2011-11-01
Retrocausation seems to admit of time paradoxes in which events prevent themselves from occurring and thereby create a physical instance of the liar's paradox, an event which occurs iff it does not occur. The specific version in which a retrocausal event is used to trigger an intervention which prevents its own future cause is called the bilking paradox (the event is bilked of its cause). The analysis of Echeverria, Klinkhammer, and Thorne (EKT) suggests time paradoxes cannot arise even in the presence of retrocausation. Any self-contradictory event sequence will be replaced in reality by a closely related but noncontradictory sequence. The EKT analysis implies that attempts to create bilking must instead produce logically consistent sequences wherein the bilked event arises from alternative causes. Bilking a retrocausal information channel of limited reliability usually results only in failures of signaling. An exception applies when the bilking is conducted in response only to some of the signal values that can be carried on the channel. Theoretical analysis based on EKT predicts that, since some of the channel outcomes are not bilked, the channel is capable of transmitting data with its normal reliability, and the paradox-avoidance effects will instead suppress the outcomes that would lead to forbidden (bilked) transmissions. A recent parapsychological experiment by Bem displays a retrocausal information channel of sufficient reliability to test this theoretical model of physical reality's response to retrocausal effects. A modified version with partial bilking would provide a direct test of the generality of the EKT mechanism.
Ciliate communities consistently associated with coral diseases
Sweet, M. J.; Séré, M. G.
2016-07-01
Incidences of coral disease are increasing. Most studies which focus on diseases in these organisms routinely assess variations in bacterial associates. However, other microorganism groups such as viruses, fungi and protozoa are only recently starting to receive attention. This study aimed at assessing the diversity of ciliates associated with coral diseases over a wide geographical range. Here we show that a wide variety of ciliates are associated with all nine coral diseases assessed. Many of these ciliates such as Trochilia petrani and Glauconema trihymene feed on the bacteria which are likely colonizing the bare skeleton exposed by the advancing disease lesion or the necrotic tissue itself. Others such as Pseudokeronopsis and Licnophora macfarlandi are common predators of other protozoans and will be attracted by the increase in other ciliate species to the lesion interface. However, a few ciliate species (namely Varistrombidium kielum, Philaster lucinda, Philaster guamense, a Euplotes sp., a Trachelotractus sp. and a Condylostoma sp.) appear to harbor symbiotic algae, potentially from the coral themselves, a result which may indicate that they play some role in the disease pathology at the very least. Although, from this study alone we are not able to discern what roles any of these ciliates play in disease causation, the consistent presence of such communities with disease lesion interfaces warrants further investigation.
Campbell, Cara; Hilderbrand, Robert H.
2017-01-01
Species distribution modelling can be useful for the conservation of rare and endangered species. Freshwater mussel declines have thinned species ranges producing spatially fragmented distributions across large areas. Spatial fragmentation in combination with a complex life history and heterogeneous environment makes predictive modelling difficult. A machine learning approach (maximum entropy) was used to model occurrences and suitable habitat for the federally endangered dwarf wedgemussel, Alasmidonta heterodon, in Maryland's Coastal Plain catchments. Landscape-scale predictors (e.g. land cover, land use, soil characteristics, geology, flow characteristics, and climate) were used to predict the suitability of individual stream segments for A. heterodon. The best model contained variables at three scales: minimum elevation (segment scale), percentage Tertiary deposits, low intensity development, and woody wetlands (sub-catchment), and percentage low intensity development, pasture/hay agriculture, and average depth to the water table (catchment). Despite a very small sample size owing to the rarity of A. heterodon, cross-validated prediction accuracy was 91%. Most predicted suitable segments occur in catchments not known to contain A. heterodon, which provides opportunities for new discoveries or population restoration. These model predictions can guide surveys toward the streams with the best chance of containing the species or, alternatively, away from those streams with little chance of containing A. heterodon. Developed reaches had low predicted suitability for A. heterodon in the Coastal Plain. Urban and exurban sprawl continues to modify stream ecosystems in the region, underscoring the need to preserve existing populations and to discover and protect new populations.