WorldWideScience

Sample records for model selection hypothesis

  1. Evolution of female multiple mating: A quantitative model of the “sexually selected sperm” hypothesis

    Science.gov (United States)

    Bocedi, Greta; Reid, Jane M

    2015-01-01

    Explaining the evolution and maintenance of polyandry remains a key challenge in evolutionary ecology. One appealing explanation is the sexually selected sperm (SSS) hypothesis, which proposes that polyandry evolves due to indirect selection stemming from positive genetic covariance with male fertilization efficiency, and hence with a male's success in postcopulatory competition for paternity. However, the SSS hypothesis relies on verbal analogy with “sexy-son” models explaining coevolution of female preferences for male displays, and explicit models that validate the basic SSS principle are surprisingly lacking. We developed analogous genetically explicit individual-based models describing the SSS and “sexy-son” processes. We show that the analogy between the two is only partly valid, such that the genetic correlation arising between polyandry and fertilization efficiency is generally smaller than that arising between preference and display, resulting in less reliable coevolution. Importantly, indirect selection was too weak to cause polyandry to evolve in the presence of negative direct selection. Negatively biased mutations on fertilization efficiency did not generally rescue runaway evolution of polyandry unless realized fertilization was highly skewed toward a single male, and coevolution was even weaker given random mating order effects on fertilization. Our models suggest that the SSS process is, on its own, unlikely to generally explain the evolution of polyandry. PMID:25330405

  2. Socioeconomic inequality in health in the British household panel: Tests of the social causation, health selection and the indirect selection hypothesis using dynamic fixed effects panel models.

    Science.gov (United States)

    Foverskov, Else; Holm, Anders

    2016-02-01

    Despite social inequality in health being well documented, it is still debated which causal mechanism best explains the negative association between socioeconomic position (SEP) and health. This paper is concerned with testing the explanatory power of three widely proposed causal explanations for social inequality in health in adulthood: the social causation hypothesis (SEP determines health), the health selection hypothesis (health determines SEP) and the indirect selection hypothesis (no causal relationship). We employ panel data on respondents aged 30 to 60 from the last nine waves of the British Household Panel Survey. Household income and location on the Cambridge Scale are included as measures of different dimensions of SEP, and health is measured as a latent factor score. The causal hypotheses are tested using a time-based Granger approach by estimating dynamic fixed effects panel regression models following the method suggested by Anderson and Hsiao. We propose using this method to estimate the associations over time since it allows one to control for all unobserved time-invariant factors and hence lower the chances of biased estimates due to unobserved heterogeneity. The results showed no support for the social causation hypothesis over a one- to five-year period, and limited support for the health selection hypothesis was seen only for men in relation to household income. These findings were robust across multiple sensitivity analyses. We conclude that the indirect selection hypothesis may be the most important in explaining social inequality in health in adulthood, indicating that the well-known cross-sectional correlations between health and SEP in adulthood seem not to be driven by a causal relationship, but instead by dynamics and influences in place before the respondents turn 30 years old that affect both their health and SEP onwards. The conclusion is limited in that we do not consider the effect of specific diseases and causal relationships in adulthood may be
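
    As a rough illustration of the estimation strategy described above, the sketch below simulates a small panel, first-differences a dynamic equation to sweep out the individual fixed effects, and applies the Anderson-Hsiao instrumental-variable step (instrumenting the lagged differenced outcome with its second lag in levels). All variable names, sample sizes and coefficient values are illustrative assumptions, not the study's specification.

```python
# A simulated-data sketch of the Anderson-Hsiao estimator for a dynamic
# fixed-effects panel model (illustrative names and values, not the study's).
import numpy as np

rng = np.random.default_rng(0)
N, T = 500, 9                          # individuals, waves
rho_true, beta_true = 0.4, 0.3

alpha = rng.normal(size=N)             # unobserved individual effects
sep = rng.normal(size=(N, T))          # socioeconomic position
health = np.zeros((N, T))
health[:, 0] = rng.normal(size=N)
for t in range(1, T):
    health[:, t] = (rho_true * health[:, t - 1] + beta_true * sep[:, t - 1]
                    + alpha + 0.5 * rng.normal(size=N))

# First-difference the equation to remove alpha_i, then instrument the lagged
# differenced outcome with its second lag in levels (Anderson-Hsiao).
ys, Xs, Zs = [], [], []
for t in range(3, T):
    dy = health[:, t] - health[:, t - 1]
    dy_lag = health[:, t - 1] - health[:, t - 2]
    dsep_lag = sep[:, t - 1] - sep[:, t - 2]
    instr = health[:, t - 2]           # level instrument for dy_lag
    ys.append(dy)
    Xs.append(np.column_stack([dy_lag, dsep_lag]))
    Zs.append(np.column_stack([instr, dsep_lag]))

y, X, Z = np.concatenate(ys), np.vstack(Xs), np.vstack(Zs)
coef = np.linalg.solve(Z.T @ X, Z.T @ y)   # just-identified IV estimate
print("rho, beta estimates:", coef)        # approximately (0.4, 0.3)
```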

  3. The mitonuclear compatibility hypothesis of sexual selection

    Science.gov (United States)

    Hill, Geoffrey E.; Johnson, James D.

    2013-01-01

    Why females assess ornaments when choosing mates remains a central question in evolutionary biology. We hypothesize that the imperative for a choosing female to find a mate with nuclear oxidative phosphorylation (OXPHOS) genes that are compatible with her mitochondrial OXPHOS genes drives the evolution of ornaments. Indicator traits are proposed to signal the efficiency of OXPHOS function thus enabling females to select mates with nuclear genes that are compatible with maternal mitochondrial genes in the formation of OXPHOS complexes. Species-typical pattern of ornamentation is proposed to serve as a marker of mitochondrial type ensuring that females assess prospective mates with a shared mitochondrial background. The mitonuclear compatibility hypothesis predicts that the production of ornaments will be closely linked to OXPHOS pathways, and that sexual selection for compatible mates will be strongest when genes for nuclear components of OXPHOS complexes are Z-linked. The implications of this hypothesis are that sexual selection may serve as a driver for the evolution of more efficient cellular respiration. PMID:23945683

  4. Confluence Model or Resource Dilution Hypothesis?

    DEFF Research Database (Denmark)

    Jæger, Mads

    Studies on family background often explain the negative effect of sibship size on educational attainment by one of two theories: the Confluence Model (CM) or the Resource Dilution Hypothesis (RDH). However, as both theories – for substantively different reasons – predict that sibship size should...

  5. Cohabitation and Divorce in Canada: Testing the Selectivity Hypothesis.

    Science.gov (United States)

    Hall, David R.; Zhao, John Z.

    1995-01-01

    Investigated hypothesis that cohabitors are a select group in ways that predispose them to divorce. Found that premarital cohabitation was associated with a greater risk of divorce even after accounting for the effects of parental divorce, marital status of first spouse, age heterogamy, and the presence of stepchildren. (RJM)

  6. Reproductive isolation, reproductive mode, and sexual selection: empirical tests of the viviparity-driven conflict hypothesis.

    Science.gov (United States)

    Coleman, Seth W; Harlin-Cognato, April; Jones, Adam G

    2009-03-01

    A central goal in evolutionary biology is to elucidate general mechanisms and patterns of species divergence. The viviparity-driven conflict (VDC) hypothesis posits that intense mother-embryo conflict associated with viviparity drives rapid reproductive isolation among viviparous species, is intensified by multiple paternity, and reduces female reliance on precopulatory cues in mate choice. We tested these predictions using comparisons of oviparous and viviparous fishes. Consistent with the VDC hypothesis, we found that, relative to oviparous species, only closely related viviparous fishes are known to hybridize. Also in support of the VDC hypothesis, we found that (1) elaborate male sexual ornamentation may be more common in viviparous species with relatively low levels of maternal provisioning of embryos compared with those with high levels of provisioning and (2) the degree of multiple paternity is higher in viviparous species than in oviparous species. In contrast to a prediction of the VDC hypothesis, we found no relationship between the degree of multiple paternity and elaborate male sexual ornamentation, although statistical power was quite low for this test. Whereas overall our results strongly support the central tenet of the VDC hypothesis-that reproductive mode affects rates of evolution of reproductive isolation and the strength of sexual selection-they cannot rule out two alternative models we propose that may also explain the observed patterns.

  7. An efficient coding hypothesis links sparsity and selectivity of neural responses.

    Directory of Open Access Journals (Sweden)

    Florian Blättler

    Full Text Available To what extent are sensory responses in the brain compatible with first-order principles? The efficient coding hypothesis posits that neurons use as few spikes as possible to faithfully represent natural stimuli. However, many sparsely firing neurons in higher brain areas seem to violate this hypothesis in that they respond more to familiar stimuli than to nonfamiliar stimuli. We reconcile this discrepancy by showing that efficient sensory responses give rise to stimulus selectivity that depends on the stimulus-independent firing threshold and the balance between excitatory and inhibitory inputs. We construct a cost function that enforces minimal firing rates in model neurons by linearly punishing suprathreshold synaptic currents. By contrast, subthreshold currents are punished quadratically, which allows us to optimally reconstruct sensory inputs from elicited responses. We train synaptic currents on many renditions of a particular bird's own song (BOS) and few renditions of conspecific birds' songs (CONs). During training, model neurons develop a response selectivity with complex dependence on the firing threshold. At low thresholds, they fire densely and prefer CON and the reverse BOS (REV) over BOS. However, at high thresholds or when hyperpolarized, they fire sparsely and prefer BOS over REV and over CON. Based on this selectivity reversal, our model suggests that preference for a highly familiar stimulus corresponds to a high-threshold or strong-inhibition regime of an efficient coding strategy. Our findings apply to songbird mirror neurons, and in general, they suggest that the brain may be endowed with simple mechanisms to rapidly change selectivity of neural responses to focus sensory processing on either familiar or nonfamiliar stimuli. In summary, we find support for the efficient coding hypothesis and provide new insights into the interplay between the sparsity and selectivity of neural responses.
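
    The penalty structure described above (linear above the firing threshold, quadratic below) can be written down in a few lines. The sketch below is only a schematic rendering of that cost term; the threshold value and the way it would be weighted against a reconstruction objective are assumptions, not the paper's formulation.

```python
# Schematic form of the firing-cost term: synaptic currents above the firing
# threshold (which produce spikes) are penalized linearly, enforcing sparse
# firing; subthreshold currents are penalized quadratically. The threshold
# value and any weighting against a reconstruction term are assumptions.
import numpy as np

def firing_cost(currents, threshold=1.0):
    above = np.maximum(currents - threshold, 0.0)   # suprathreshold part
    below = np.minimum(currents, threshold)         # subthreshold part
    return np.sum(above) + np.sum(below ** 2)

print(firing_cost(np.array([-0.3, 0.4, 1.5])))      # 0.5 + (0.09 + 0.16 + 1.0)
```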

  8. Testing the cultural group selection hypothesis in Northern Ghana and Oaxaca.

    Science.gov (United States)

    Acedo-Carmona, Cristina; Gomila, Antoni

    2016-01-01

    We examine the cultural group selection (CGS) hypothesis in light of our fieldwork in Northern Ghana and Oaxaca, highly multi-ethnic regions. Our evidence fails to corroborate two central predictions of the hypothesis: that the cultural group is the unit of evolution, and that cultural homogenization is to be expected as the outcome of a selective process.

  9. Sparse Representation Based Binary Hypothesis Model for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Yidong Tang

    2016-01-01

    Full Text Available The sparse representation based classifier (SRC and its kernel version (KSRC have been employed for hyperspectral image (HSI classification. However, the state-of-the-art SRC often aims at extended surface objects with linear mixture in smooth scene and assumes that the number of classes is given. Considering the small target with complex background, a sparse representation based binary hypothesis (SRBBH model is established in this paper. In this model, a query pixel is represented in two ways, which are, respectively, by background dictionary and by union dictionary. The background dictionary is composed of samples selected from the local dual concentric window centered at the query pixel. Thus, for each pixel the classification issue becomes an adaptive multiclass classification problem, where only the number of desired classes is required. Furthermore, the kernel method is employed to improve the interclass separability. In kernel space, the coding vector is obtained by using kernel-based orthogonal matching pursuit (KOMP algorithm. Then the query pixel can be labeled by the characteristics of the coding vectors. Instead of directly using the reconstruction residuals, the different impacts the background dictionary and union dictionary have on reconstruction are used for validation and classification. It enhances the discrimination and hence improves the performance.

  10. Interactive comparison of hypothesis tests for statistical model checking

    NARCIS (Netherlands)

    de Boer, Pieter-Tjerk; Reijsbergen, D.P.; Scheinhardt, Willem R.W.

    2015-01-01

    We present a web-based interactive comparison of hypothesis tests as are used in statistical model checking, providing users and tool developers with more insight into their characteristics. Parameters can be modified easily and their influence is visualized in real time; an integrated simulation
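
    One canonical example of the kind of test such a comparison covers is Wald's sequential probability ratio test, sketched below for deciding whether the probability that a simulated run satisfies a property lies above or below two thresholds. The thresholds, error rates and example probability are illustrative choices, not values from the paper.

```python
# Minimal SPRT sketch for statistical model checking (illustrative parameters).
import math
import random

def sprt(sample_run, p0=0.45, p1=0.55, alpha=0.05, beta=0.05, max_runs=100_000):
    """Return 'H1: p >= p1', 'H0: p <= p0', or 'undecided'."""
    a = math.log(beta / (1 - alpha))        # lower log-likelihood boundary
    b = math.log((1 - beta) / alpha)        # upper log-likelihood boundary
    llr = 0.0
    for _ in range(max_runs):
        x = sample_run()                    # 1 if the run satisfies the property
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= b:
            return "H1: p >= p1"
        if llr <= a:
            return "H0: p <= p0"
    return "undecided"

# Example: runs satisfy the property with true probability 0.6.
print(sprt(lambda: random.random() < 0.6))
```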

  11. Models for $(\\infty, n)$-categories and the cobordism hypothesis

    CERN Document Server

    Bergner, Julia E

    2010-01-01

    In this paper we introduce the models for $(\\infty, n)$-categories which have been developed to date, as well as the comparisons between them that are known and conjectured. We review the role of $(\\infty, n)$-categories in the proof of the Cobordism Hypothesis.

  12. Sexual selection on land snail shell ornamentation: a hypothesis that may explain shell diversity

    Directory of Open Access Journals (Sweden)

    Schilthuizen Menno

    2003-06-01

    Full Text Available Background: Many groups of land snails show great interspecific diversity in shell ornamentation, which may include spines on the shell and flanges on the aperture. Such structures have been explained as camouflage or defence, but the possibility that they might be under sexual selection has not previously been explored. Presentation of the hypothesis: The hypothesis that is presented consists of two parts. First, that shell ornamentation is the result of sexual selection. Second, that such sexual selection has caused the divergence in shell shape in different species. Testing the hypothesis: The first part of the hypothesis may be tested by searching for sexual dimorphism in shell ornamentation in gonochoristic snails, by searching for increased variance in shell ornamentation relative to other shell traits, and by mate choice experiments using individuals with experimentally enhanced ornamentation. The second part of the hypothesis may be tested by comparing sister groups and correlating shell diversity with degree of polygamy. Implications of the hypothesis: If the hypothesis were true, it would provide an explanation for the many cases of allopatric evolutionary radiation in snails, where shell diversity cannot be related to any niche differentiation or environmental differences.

  13. The linear model and hypothesis a general unifying theory

    CERN Document Server

    Seber, George

    2015-01-01

    This book provides a concise and integrated overview of hypothesis testing in four important subject areas, namely linear and nonlinear models, multivariate analysis, and large sample theory. The approach used is a geometrical one based on the concept of projections and their associated idempotent matrices, thus largely avoiding the need to involve matrix ranks. It is shown that all the hypotheses encountered are either linear or asymptotically linear, and that all the underlying models used are either exactly or asymptotically linear normal models. This equivalence can be used, for example, to extend the concept of orthogonality in the analysis of variance to other models, and to show that the asymptotic equivalence of the likelihood ratio, Wald, and Score (Lagrange Multiplier) hypothesis tests generally applies.

  14. Image enhancement using the hypothesis selection filter: theory and application to JPEG decoding.

    Science.gov (United States)

    Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya

    2013-03-01

    We introduce the hypothesis selection filter (HSF) as a new approach for image quality enhancement. We assume that a set of filters has been selected a priori to improve the quality of a distorted image containing regions with different characteristics. At each pixel, HSF uses a locally computed feature vector to predict the relative performance of the filters in estimating the corresponding pixel intensity in the original undistorted image. The prediction result then determines the proportion of each filter used to obtain the final processed output. In this way, the HSF serves as a framework for combining the outputs of a number of different user selected filters, each best suited for a different region of an image. We formulate our scheme in a probabilistic framework where the HSF output is obtained as the Bayesian minimum mean square error estimate of the original image. Maximum likelihood estimates of the model parameters are determined from an offline fully unsupervised training procedure that is derived from the expectation-maximization algorithm. To illustrate how to apply the HSF and to demonstrate its potential, we apply our scheme as a post-processing step to improve the decoding quality of JPEG-encoded document images. The scheme consistently improves the quality of the decoded image over a variety of image content with different characteristics. We show that our scheme results in quantitative improvements over several other state-of-the-art JPEG decoding methods.
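
    The combination step described above can be sketched compactly: every pre-selected filter is applied to the distorted image and, at each pixel, the outputs are blended with non-negative weights that sum to one. In the sketch below the weight computation is left as a user-supplied function of local features; in the paper those proportions come from the trained probabilistic model, which is not reproduced here.

```python
# Schematic combination step of a hypothesis selection filter: apply each
# pre-selected filter and blend the outputs per pixel with weights that sum
# to one. weight_fn stands in for the trained model that predicts each
# filter's relative performance (an assumption here, not the paper's estimator).
import numpy as np

def hsf_combine(image, filters, weight_fn):
    outputs = np.stack([f(image) for f in filters])   # shape (k, H, W)
    weights = weight_fn(image, outputs)               # shape (k, H, W), sums to 1 over k
    return np.sum(weights * outputs, axis=0)          # per-pixel convex combination

# Example with two trivial "filters" and uniform weights:
img = np.random.rand(32, 32)
filters = [lambda x: x, lambda x: 0.5 * (x + x.mean())]
uniform = lambda image, outs: np.full_like(outs, 1.0 / len(outs))
restored = hsf_combine(img, filters, uniform)
```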

  15. Modelling wind turbine wakes using the turbulent entrainment hypothesis

    Science.gov (United States)

    Luzzatto-Fegiz, Paolo

    2015-11-01

    Simple models for turbine wakes have been used extensively in the wind energy community, both as independent tools and to complement more refined and computationally-intensive techniques. Jensen (1983; see also Katić et al. 1986) developed a model assuming that the wake radius grows linearly with distance x, approximating the velocity deficit with a top-hat profile. While this model has been widely implemented in the wind energy community, recently Bastankhah & Porté-Agel (2014) showed that it does not conserve momentum. They proposed a momentum-conserving theory, which assumed a Gaussian velocity deficit and retained the linear-spreading assumption, significantly improving agreement with experiments and LES. While the linear spreading assumption facilitates conceptual modeling, it requires empirical estimates of the spreading rate, and does not readily enable generalizations to other turbine designs. Furthermore, field measurements show sub-linear wake growth with x in the far wake, consistent with results from fundamental turbulence studies. We develop a model by relying on a simple and general turbulence parameterization, namely the entrainment hypothesis, which has been used extensively in other areas of geophysical fluid dynamics. Without assuming similarity, we derive an analytical solution for a circular turbine wake, which predicts a far-wake radius increasing with x^(1/3), and is consistent with field measurements and fundamental turbulence studies. Finally, we discuss developments accounting for effects of stratification, as well as generalizations to other turbine designs.
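
    A minimal version of such an entrainment-based wake model can be obtained by pairing a linearized momentum-deficit balance for a top-hat wake with the entrainment hypothesis (the wake edge growing at a rate proportional to the velocity deficit), which yields the x^(1/3) far-wake scaling noted above. The coefficients below (entrainment constant, thrust coefficient, inflow speed) are illustrative assumptions, and the sketch is not the specific analytical solution derived in the work.

```python
# Sketch of a top-hat wake closed with the entrainment hypothesis
# (illustrative coefficients; not the paper's derived solution).
import numpy as np

def entrainment_wake(x, r0=1.0, ct=0.8, alpha=0.10, u_inf=8.0):
    """Wake radius r(x) and top-hat velocity deficit dU(x).

    Linearized momentum deficit:  pi r^2 dU U_inf = 0.5 ct U_inf^2 pi r0^2
    Entrainment hypothesis:       U_inf dr/dx = alpha dU
    =>  r(x)^3 = r0^3 + 1.5 alpha ct r0^2 x, i.e. r ~ x**(1/3) far downstream.
    """
    r = (r0**3 + 1.5 * alpha * ct * r0**2 * x) ** (1.0 / 3.0)
    du = 0.5 * ct * u_inf * (r0 / r) ** 2
    return r, du

x = np.linspace(0.0, 50.0, 6)          # downstream distance in rotor radii
r, du = entrainment_wake(x)
print(np.round(r, 2), np.round(du, 2))
```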

  16. The evolution of autistic-like and schizotypal traits: A sexual selection hypothesis

    Directory of Open Access Journals (Sweden)

    Marco Del Giudice

    2010-08-01

    Full Text Available In this paper we present a new hypothesis on the evolution of autistic-like and schizotypal personality traits. We argue that autistic-like and schizotypal traits contribute in opposite ways to individual differences in reproductive and mating strategies, and have been maintained – at least in part – by sexual selection through mate choice. Whereas positive schizotypy can be seen as a psychological phenotype oriented to high mating effort and good genes displays in both sexes, autistic-like traits in their non-pathological form contribute to a male-typical strategy geared toward high parental investment, low mating effort, and long-term resource allocation. At the evolutionary-genetic level, this sexual selection hypothesis is consistent with Crespi and Badcock’s “imprinted brain” theory of autism and psychosis; the effect of offspring mating behavior on resource flow within the family connects sexual selection with genomic imprinting in the context of human biparental care. We conclude by presenting the results of an empirical study testing one of the predictions derived from our hypothesis. In a sample of 200 college students, autistic-like traits predicted lower interest in short-term mating, higher partner-specific investment, and stronger commitment to long-term romantic relations, whereas positive schizotypy showed the opposite pattern of effects.

  17. A new selection method for high-dimensional instrumental setting: application to the Growth Rate convergence hypothesis

    CERN Document Server

    Mougeot, Mathilde; Tribouley, Karine

    2011-01-01

    This paper investigates the problem of selecting variables in regression-type models for an "instrumental" setting. Our study is motivated by empirically verifying the conditional convergence hypothesis used in the economics literature concerning the growth rate. To avoid unnecessary discussion about the choice and the pertinence of instrumental variables, we embed the model in a very high dimensional setting. We propose a selection procedure with no optimization step called LOLA, for Learning Out of Leaders with Adaptation. LOLA is an auto-driven algorithm with two thresholding steps. The consistency of the procedure is proved under sparsity conditions and simulations are conducted to illustrate the good practical performance of LOLA. The behavior of the algorithm is studied when instrumental variables are artificially added without a priori significant connection to the model. Using our algorithm, we provide a solution for modeling the link between the growth rate and the initial level of the gross domest...

  18. The truthful signalling hypothesis: an explicit general equilibrium model.

    Science.gov (United States)

    Hausken, Kjell; Hirshleifer, Jack

    2004-06-21

    In mating competition, the truthful signalling hypothesis (TSH), sometimes known as the handicap principle, asserts that higher-quality males signal while lower-quality males do not (or else emit smaller signals). Also, the signals are "believed", that is, females mate preferentially with higher-signalling males. Our analysis employs specific functional forms to generate analytic solutions and numerical simulations that illuminate the conditions needed to validate the TSH. Analytic innovations include: (1) A Mating Success Function indicates how female mating choices respond to higher and lower signalling levels. (2) A congestion function rules out corner solutions in which females would mate exclusively with higher-quality males. (3) A Malthusian condition determines equilibrium population size as related to per-capita resource availability. Equilibria validating the TSH are achieved over a wide range of parameters, though not universally. For TSH equilibria it is not strictly necessary that the high-quality males have an advantage in terms of lower per-unit signalling costs, but a cost difference in favor of the low-quality males cannot be too great if a TSH equilibrium is to persist. And although the literature has paid less attention to these points, TSH equilibria may also fail if: the quality disparity among males is too great, or the proportion of high-quality males in the population is too large, or if the congestion effect is too weak. Signalling being unprofitable in aggregate, it can take off from a no-signalling equilibrium only if the trait used for signalling is not initially a handicap, but instead is functionally useful at low levels. Selection for this trait sets in motion a bandwagon, whereby the initially useful indicator is pushed by male-male competition into the domain where it does indeed become a handicap.

  19. Model Selection for Geostatistical Models

    Energy Technology Data Exchange (ETDEWEB)

    Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.

    2006-02-01

    We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
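
    A minimal sketch of the idea: for each candidate set of explanatory variables, fit a linear model with a spatially correlated Gaussian error by maximum likelihood and compare AIC = 2k - 2 log L across candidates. The exponential covariance form, the optimizer and the example covariate names in the trailing comment are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of AIC-based variable selection under an exponential spatial
# covariance (covariance form, optimizer and example variables are assumed).
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def spatial_aic(y, X, coords):
    """AIC of y = X beta + spatially correlated Gaussian error, fit by ML."""
    D = cdist(coords, coords)
    n, p = X.shape

    def neg_loglik(params):
        sigma2, rho = np.exp(params[0]), np.exp(params[1])   # variance, range
        beta = params[2:]
        C = sigma2 * np.exp(-D / rho) + 1e-8 * np.eye(n)
        r = y - X @ beta
        _, logdet = np.linalg.slogdet(C)
        return 0.5 * (logdet + r @ np.linalg.solve(C, r) + n * np.log(2 * np.pi))

    fit = minimize(neg_loglik, np.zeros(p + 2), method="Nelder-Mead",
                   options={"maxiter": 5000})
    k = p + 2                      # regression coefficients + variance + range
    return 2 * k + 2 * fit.fun

# Candidate models are then compared by their spatial AIC, e.g. (hypothetical
# covariates 'elev' and 'veg' observed at locations 'coords'):
#   spatial_aic(y, np.column_stack([np.ones(n), elev, veg]), coords)
#   spatial_aic(y, np.column_stack([np.ones(n), elev]), coords)
```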

  20. Parameter estimation and hypothesis testing in linear models

    CERN Document Server

    Koch, Karl-Rudolf

    1999-01-01

    The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have also been added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...

  1. Patterns of coral bleaching: Modeling the adaptive bleaching hypothesis

    Science.gov (United States)

    Ware, J.R.; Fautin, D.G.; Buddemeier, R.W.

    1996-01-01

    Bleaching - the loss of symbiotic dinoflagellates (zooxanthellae) from animals normally possessing them - can be induced by a variety of stresses, of which temperature has received the most attention. Bleaching is generally considered detrimental, but Buddemeier and Fautin have proposed that bleaching is also adaptive, providing an opportunity for recombining hosts with alternative algal types to form symbioses that might be better adapted to altered circumstances. Our mathematical model of this "adaptive bleaching hypothesis" provides insight into how animal-algae symbioses might react under various circumstances. It emulates many aspects of the coral bleaching phenomenon including: corals bleaching in response to a temperature only slightly greater than their average local maximum temperature; background bleaching; bleaching events being followed by bleaching of lesser magnitude in the subsequent one to several years; higher thermal tolerance of corals subject to environmental variability compared with those living under more constant conditions; patchiness in bleaching; and bleaching at temperatures that had not previously resulted in bleaching. © 1996 Elsevier Science B.V. All rights reserved.

  2. The fragile Y hypothesis: Y chromosome aneuploidy as a selective pressure in sex chromosome and meiotic mechanism evolution.

    Science.gov (United States)

    Blackmon, Heath; Demuth, Jeffery P

    2015-09-01

    Loss of the Y-chromosome is a common feature of species with chromosomal sex determination. However, our understanding of why some lineages frequently lose Y-chromosomes while others do not is limited. The fragile Y hypothesis proposes that in species with chiasmatic meiosis the rate of Y-chromosome aneuploidy and the size of the recombining region have a negative correlation. The fragile Y hypothesis provides a number of novel insights not possible under traditional models. Specifically, increased rates of Y aneuploidy may impose positive selection for (i) gene movement off the Y; (ii) translocations and fusions which expand the recombining region; and (iii) alternative meiotic segregation mechanisms (achiasmatic or asynaptic). These insights as well as existing evidence for the frequency of Y-chromosome aneuploidy raise doubt about the prospects for long-term retention of the human Y-chromosome despite recent evidence for stable gene content in older non-recombining regions.

  3. Environmental origins of sexually selected variation and a critique of the fluctuating asymmetry-sexual selection hypothesis.

    Science.gov (United States)

    Polak, Michal; Starmer, William T

    2005-03-01

    Identifying sources of phenotypic variability in secondary sexual traits is critical for understanding their signaling properties, role in sexual selection, and for predicting their evolutionary dynamics. The present study tests for the effects of genotype, developmental temperature, and their interaction, on size and fluctuating asymmetry of the male sex comb, a secondary sexual character, in Drosophila bipectinata Duda. Both the size and symmetry of elements of the sex comb have been shown previously to be under sexual selection in a natural population in northeastern Australia. Two independent reciprocal crosses were conducted at 25 degrees and 29 degrees C between genetic lines extracted from this population that differed in the size of the first (TC1) and third (TC3) comb segments. These temperatures are within the documented range experienced by the species in nature. Additive and dominance genetic effects were detected for TC1, whereas additive genetic, and Y-chromosomal effects were detected for TC3. TC2 and TC3 decreased sharply with increasing temperature, by 10% and 22%, respectively. In contrast, positional fluctuating asymmetry (PFA) significantly increased with temperature, by up to 38%. The results (1) document an important source of environmental variance in a sexual ornament expected to reduce trait heritability in field populations, and thus act to attenuate response to sexual selection, (2) suggest that variation in ornament size reflects differences in male condition; and (3) support the general hypothesis that asymmetry in a sexual ornament is indicative of developmental instability arising from environmental stress. The "environmental heterogeneity" (EH) hypothesis is proposed, and supportive evidence for it presented, to explain negative size-FA correlations in natural populations. Data and theory challenge the use of negative size-FA correlations observed in nature to support the FA-sexual selection hypothesis, which posits that such

  4. Did sexual selection shape human music? Testing predictions from the sexual selection hypothesis of music evolution using a large genetically informative sample of over 10,000 twins

    NARCIS (Netherlands)

    Mosing, M.A.; Verweij, K.J.H.; Madison, G.; Pedersen, N.L.; Zietsch, B.P.; Ullén, F.

    2015-01-01

    Although music is a universal feature of human culture, little is known about its origins and functions. A prominent theory of music evolution is the sexual selection hypothesis, which proposes that music evolved as a signal of genetic quality to potential mates. The sexual selection hypothesis offe

  5. Did sexual selection shape human music? Testing predictions from the sexual selection hypothesis of music evolution using a large genetically informative sample of over 10,000 twins

    NARCIS (Netherlands)

    Mosing, M.A.; Verweij, K.J.H.; Madison, G.; Pedersen, N.L.; Zietsch, B.P.; Ullén, F.

    2015-01-01

    Although music is a universal feature of human culture, little is known about its origins and functions. A prominent theory of music evolution is the sexual selection hypothesis, which proposes that music evolved as a signal of genetic quality to potential mates. The sexual selection hypothesis

  6. A large scale hydrological model combining Budyko hypothesis and stochastic soil moisture model

    Science.gov (United States)

    Cong, Z.; Zhang, X.

    2012-04-01

    Based on the Budyko hypothesis, the actual evapotranspiration, E, is controlled by the water and energy conditions, which are represented by the amount of annual precipitation, P, and the potential evaporation, E0, respectively. Several theoretical or empirical equations have been proposed to represent the Budyko curve. We here select Choudhury's equation to describe the Budyko curve (Mezentsev, 1954; Choudhury, 1999; Yang et al., 2008; Roderick and Farquhar, 2011): ε = (1 + φ^(−α))^(−1/α), where ε = E/P and φ = E0/P. Rodriguez-Iturbe et al. (1999) proposed a stochastic soil moisture model based on a Poisson distributed rainfall assumption. Porporato et al. (2004) described the average water balance based on the stochastic soil moisture model as ε = 1 − φ·γ^(γ/φ − 1)·e^(−γ) / [Γ(γ/φ) − Γ(γ/φ, γ)], where γ = Zr/h, h is the average rainfall depth and Zr is the basin water storage capacity. Combining these two equations, we obtain the relation between α and γ. We then develop a large-scale hydrological model to estimate annual runoff from P, E0, h and Zr: R = (1 − ε)P, with ε = (1 + φ^(−α))^(−1/α), α = 0.7078·γ^0.5946 and γ = Zr/h. This method performs well when applied to estimate annual runoff in the Yellow River Basin and the Yangtze River Basin. The impacts of climate change (P, E0 and h) and human activities (Zr) are also discussed with this method.
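
    The combined relations above translate directly into a small function; the sketch below simply evaluates γ = Zr/h, α = 0.7078·γ^0.5946, the Choudhury curve and the water-balance residual, with the example input values chosen for illustration only.

```python
# Direct transcription of the relations above (example values are illustrative).
def annual_runoff(P, E0, h, Zr):
    """Annual runoff from precipitation P, potential evaporation E0,
    mean rainfall depth per event h, and basin water storage capacity Zr."""
    gamma = Zr / h
    alpha = 0.7078 * gamma ** 0.5946
    phi = E0 / P                                     # aridity index
    eps = (1.0 + phi ** (-alpha)) ** (-1.0 / alpha)  # E/P from the Choudhury curve
    return (1.0 - eps) * P

print(annual_runoff(P=600.0, E0=1100.0, h=10.0, Zr=200.0))   # mm per year
```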

  7. Recruiter Selection Model

    Science.gov (United States)

    2006-05-01

    Research interests include feature selection, statistical learning, multivariate statistics, market research, and classification. The report addresses the current youth market and reducing barriers to Army enlistment; part of the Army Recruiting Initiatives was the creation of a recruiter selection model, developed by the Operations Research Center of Excellence, Department of Systems Engineering, United States Military Academy, West Point.

  8. Reproductive Contributions of Cardinals Are Consistent with a Hypothesis of Relaxed Selection in Urban Landscapes

    Directory of Open Access Journals (Sweden)

    Amanda D. Rodewald

    2017-07-01

    Full Text Available Human activities are leading to rapid environmental change globally and may affect the eco-evolutionary dynamics of species inhabiting human-dominated landscapes. Theory suggests that increases in environmental heterogeneity should promote variation in reproductive performance among individuals. At the same time, we know that novel environments, such as our urbanizing study system, may represent more benign or predictable environments due to resource subsidies and ecological changes. We tested the hypothesis that reduced environmental heterogeneity and enhanced resource availability in cities relax selective pressures on birds by testing if urban females vary less than rural females in their demographic contributions to local populations. From 2004 to 2014, we monitored local population densities and annual reproductive output of 470 female Northern Cardinals (Cardinalis cardinalis) breeding at 14 forested sites distributed across a rural-to-urban landscape gradient in Ohio, USA. Reproductive contribution was measured as the difference between individual and site-averaged annual reproductive output across all nesting attempts, divided by the annual density at each site. We show that among-individual variation in reproductive contribution to the next year's population declined with increasing urbanization, despite similar variability in body condition across the rural-urban gradient. Thus, female cardinals that bred in urban habitats within our study area were more similar in their contribution to the next generation than rural breeders, where a pattern of winners and losers was more evident. Within-individual variation in annual reproductive contribution also declined with increasing urbanization, indicating that performance of females was also more consistent among years in urban than rural landscapes. These findings are consistent with the hypothesis that urbanized environments offer more homogeneous or predictable conditions that may buffer

  9. Applications of abduction: hypothesis testing of neuroendocrinological qualitative compartmental models.

    Science.gov (United States)

    Menzies, T; Compton, P

    1997-06-01

    It is difficult to assess hypothetical models in poorly measured domains such as neuroendocrinology. Without a large library of observations to constrain inference, the execution of such incomplete models implies making assumptions. Mutually exclusive assumptions must be kept in separate worlds. We define a general abductive multiple-worlds engine that assesses such models by (i) generating the worlds and (ii) testing whether these worlds contain known behaviour. World generation is constrained via the use of relevant envisionment. We describe QCM, a modeling language for compartmental models that can be processed by this inference engine. This tool has been used to find faults in theories published in international refereed journals; i.e. QCM can detect faults which are invisible to other methods. The generality and computational limits of this approach are discussed. In short, this approach is applicable to any representation that can be compiled into an and-or graph, provided the graphs are not too big or too intricate (fanout < 7).

  10. A Dual-Process Discrete-Time Survival Analysis Model: Application to the Gateway Drug Hypothesis

    Science.gov (United States)

    Malone, Patrick S.; Lamis, Dorian A.; Masyn, Katherine E.; Northrup, Thomas F.

    2010-01-01

    The gateway drug model is a popular conceptualization of a progression most substance users are hypothesized to follow as they try different legal and illegal drugs. Most forms of the gateway hypothesis hold that "softer" drugs lead to "harder," illicit drugs. However, the gateway hypothesis has been notably difficult to directly test--that is, to…

  11. Information Processing Strategies in Counselor Hypothesis Testing: The Role of Selective Memory and Expectancy.

    Science.gov (United States)

    Strohmer, Douglas C.; And Others

    1990-01-01

    Explored issue of confirmatory bias in counselors' clinical hypothesis testing by examining the way counselors (n=84) remembered information about a client. Results indicated counselors remembered more confirmatory than disconfirmatory information. Suggests counselors need to be aware of these biases and should be trained to avoid them.…

  12. A mathematical model of weight loss under total starvation: evidence against the thrifty-gene hypothesis

    Directory of Open Access Journals (Sweden)

    John R. Speakman

    2013-01-01

    The thrifty-gene hypothesis (TGH) posits that the modern genetic predisposition to obesity stems from a historical past where famine selected for genes that promote efficient fat deposition. It has been previously argued that such a scenario is unfeasible because under such strong selection any gene favouring fat deposition would rapidly move to fixation. Hence, we should all be predisposed to obesity, which we are not. The genetic architecture of obesity that has been revealed by genome-wide association studies (GWAS), however, calls into question such an argument. Obesity is caused by mutations in many hundreds (maybe thousands) of genes, each with a very minor, independent and additive impact. Selection on such genes would probably be very weak because the individual advantages they would confer would be very small. Hence, the genetic architecture of the epidemic may indeed be compatible with, and hence support, the TGH. To evaluate whether this is correct, it is necessary to know the likely effects of the identified GWAS alleles on survival during starvation. This would allow definition of their advantage in famine conditions, and hence the likely selection pressure for such alleles to have spread over the time course of human evolution. We constructed a mathematical model of weight loss under total starvation using the established principles of energy balance. Using the model, we found that fatter individuals would indeed survive longer and, at a given body weight, females would survive longer than males, when totally starved. An allele causing deposition of an extra 80 g of fat would result in an extension of life under total starvation by about 1.1–1.6% in an individual with 10 kg of fat and by 0.25–0.27% in an individual carrying 32 kg of fat. A mutation causing a per allele effect of 0.25% would become completely fixed in a population with an effective size of 5 million individuals in 6000 selection events. Because there have probably been about 24
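
    The flavour of such an energy-balance starvation model can be conveyed with a very small simulation: daily expenditure scales with body mass and is met by mobilizing fat plus a fraction of lean tissue until fat is exhausted. Every parameter value below (expenditure per kg, tissue energy densities, lean share) is an illustrative assumption rather than a value fitted in the paper, but the qualitative prediction that fatter individuals survive longer falls out directly.

```python
# Toy energy-balance model of survival under total starvation; every parameter
# value is an illustrative assumption, not a quantity fitted in the paper.
def survival_days(fat_kg, lean_kg, kj_per_kg_day=120.0,
                  kj_per_kg_fat=39_000.0, kj_per_kg_lean=7_000.0,
                  lean_share=0.1):
    days = 0
    while fat_kg > 0.0:
        expenditure = kj_per_kg_day * (fat_kg + lean_kg)  # daily energy need
        from_lean = lean_share * expenditure              # protein catabolism
        from_fat = expenditure - from_lean
        fat_kg -= from_fat / kj_per_kg_fat
        lean_kg -= from_lean / kj_per_kg_lean
        days += 1
    return days

base = survival_days(fat_kg=10.0, lean_kg=55.0)
extra = survival_days(fat_kg=10.08, lean_kg=55.0)            # +80 g of fat
print(base, extra, round(100.0 * (extra - base) / base, 2))  # % longer survival
```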

  13. A pilot study for the analysis of dream reports using Maslow's need categories: an extension to the emotional selection hypothesis.

    Science.gov (United States)

    Coutts, Richard

    2010-10-01

    The emotional selection hypothesis describes a cyclical process that uses dreams to modify and test select mental schemas. An extension is proposed that further characterizes these schemas as facilitators of human need satisfaction. A pilot study was conducted in which this hypothesis was tested by assigning 100 dream reports (10 randomly selected from 10 dream logs at an online web site) to one or more categories within Maslow's hierarchy of needs. A "match" was declared when at least two of three judges agreed both for category and for whether the identified need was satisfied or thwarted in the dream narrative. The interjudge reliability of the judged needs was good (92% of the reports contained at least one match). The number of needs judged as thwarted did not differ significantly from the number judged as satisfied (48 vs. 52%, respectively). The six "higher" needs (belongingness, esteem, cognitive, aesthetic, self-actualization, and transcendence) were scored significantly more frequently (81%) than were the two lowest or "basic" needs (physiological and safety, 19%). Basic needs were also more likely to be judged as thwarted, while higher needs were more likely to be judged as satisfied. These findings are discussed in the context of Maslow's hierarchy of needs as a framework for investigating theories of dream function, including the emotional selection hypothesis and other contemporary dream theories.

  14. INTENSITY OF USE HYPOTHESIS: ANALYSIS OF SELECTED ASIAN COUNTRIES WITH STRUCTURAL DIFFERENCES

    Directory of Open Access Journals (Sweden)

    Ismail Oladimeji Soile

    2013-01-01

    Full Text Available Several efforts have been made to estimate the relationship between intensity of metal use and per capita income at different levels, with results supporting the hypothesis that metal consumption per unit of GDP initially increases, peaks and later declines with rising income per head. This paper estimates the intensity of copper use curves for three Asian countries with different economic structures to show that the I-U hypothesis significantly underplays the influence of economic structure and other technological innovations by its exclusive emphasis on per capita income. The results are in general conformity with the notion that the intensity of material use (I-U) is higher for industrial and very low for service-based economies. Though the finding is mixed in the agrarian country considered, the paper suggests the need for further research to corroborate this outcome.

  15. A quantitative test of the size efficiency hypothesis by means of a physiologically structured model

    NARCIS (Netherlands)

    Hülsmann, S.; Rinke, K.; Mooij, W.M.

    2005-01-01

    According to the size-efficiency hypothesis (SEH) larger bodied cladocerans are better competitors for food than small bodied species. In environments with fish, however, the higher losses of the large bodied species due to size-selective predation may shift the balance in favor of the small bodied

  16. Complexity regularized hydrological model selection

    NARCIS (Netherlands)

    Pande, S.; Arkesteijn, L.; Bastidas, L.A.

    2014-01-01

    This paper uses a recently proposed measure of hydrological model complexity in a model selection exercise. It demonstrates that a robust hydrological model is selected by penalizing model complexity while maximizing a model performance measure. This especially holds when limited data is available.

  17. Individual Influence on Model Selection

    Science.gov (United States)

    Sterba, Sonya K.; Pek, Jolynn

    2012-01-01

    Researchers in psychology are increasingly using model selection strategies to decide among competing models, rather than evaluating the fit of a given model in isolation. However, such interest in model selection outpaces an awareness that one or a few cases can have disproportionate impact on the model ranking. Though case influence on the fit…

  18. Entropic Priors and Bayesian Model Selection

    CERN Document Server

    Brewer, Brendon J

    2009-01-01

    We demonstrate that the principle of maximum relative entropy (ME), used judiciously, can ease the specification of priors in model selection problems. The resulting effect is that models that make sharp predictions are disfavoured, weakening the usual Bayesian "Occam's Razor". This is illustrated with a simple example involving what Jaynes called a "sure thing" hypothesis. Jaynes' resolution of the situation involved introducing a large number of alternative "sure thing" hypotheses that were possible before we observed the data. However, in more complex situations, it may not be possible to explicitly enumerate large numbers of alternatives. The entropic priors formalism produces the desired result without modifying the hypothesis space or requiring explicit enumeration of alternatives; all that is required is a good model for the prior predictive distribution for the data. This idea is illustrated with a simple rigged-lottery example, and we outline how this idea may help to resolve a recent debate amongst ...

  19. Testing the Granger noncausality hypothesis in stationary nonlinear models of unknown functional form

    DEFF Research Database (Denmark)

    Péguin-Feissolle, Anne; Strikholm, Birgit; Teräsvirta, Timo

    In this paper we propose a general method for testing the Granger noncausality hypothesis in stationary nonlinear models of unknown functional form. These tests are based on a Taylor expansion of the nonlinear model around a given point in the sample space. We study the performance of our tests...

  1. Matching asteroid population characteristics with a model constructed from the YORP-induced rotational fission hypothesis

    CERN Document Server

    Jacobson, Seth Andrew; Rossi, Alessandro; Scheeres, Daniel J

    2016-01-01

    From the results of a comprehensive asteroid population evolution model, we conclude that the YORP-induced rotational fission hypothesis can be consistent with the observed population statistics of small asteroids in the main belt including binaries and contact binaries. The foundation of this model is the asteroid rotation model of Marzari et al. (2011), which incorporates both the YORP effect and collisional evolution. This work adds to that model the rotational fission hypothesis and the binary evolution model of Jacobson & Scheeres (2011). The asteroid population evolution model is highly constrained by these and other previous works, and therefore it has only two significant free parameters: the ratio of low to high mass ratio binaries formed after rotational fission events and the mean strength of the binary YORP (BYORP) effect. We successfully reproduce characteristic statistics of the small asteroid population: the binary fraction, the fast binary fraction, steady-state mass ratio fraction and the...

  2. Sexual selection on land snail shell ornamentation: a hypothesis that may explain shell diversity

    NARCIS (Netherlands)

    Schilthuizen, M.

    2003-01-01

    Background: Many groups of land snails show great interspecific diversity in shell ornamentation, which may include spines on the shell and flanges on the aperture. Such structures have been explained as camouflage or defence, but the possibility that they might be under sexual selection has not

  3. Sexual selection on land snail shell ornamentation: a hypothesis that may explain shell diversity

    NARCIS (Netherlands)

    Schilthuizen, M.

    2003-01-01

    Background: Many groups of land snails show great interspecific diversity in shell ornamentation, which may include spines on the shell and flanges on the aperture. Such structures have been explained as camouflage or defence, but the possibility that they might be under sexual selection has not pre

  4. Modeling Reader and Text Interactions during Narrative Comprehension: A Test of the Lexical Quality Hypothesis

    Science.gov (United States)

    Hamilton, Stephen T.; Freed, Erin M.; Long, Debra L.

    2013-01-01

    The goal of this study was to examine predictions derived from the Lexical Quality Hypothesis regarding relations among word decoding, working-memory capacity, and the ability to integrate new concepts into a developing discourse representation. Hierarchical Linear Modeling was used to quantify the effects of three text properties (length,…

  5. Modeling Reader- and Text- Interactions During Narrative Comprehension: A Test of the Lexical Quality Hypothesis.

    Science.gov (United States)

    Hamilton, Stephen T; Freed, Erin M; Long, Debra L

    2013-01-01

    The goal of this study was to examine predictions derived from the Lexical Quality Hypothesis (Perfetti & Hart, 2002; Perfetti, 2007) regarding relations among word-decoding, working-memory capacity, and the ability to integrate new concepts into a developing discourse representation. Hierarchical Linear Modeling was used to quantify the effects of two text properties (length and number of new concepts) on reading times of focal and spillover sentences, with variance in those effects estimated as a function of individual difference factors (decoding, vocabulary, print exposure, and working-memory capacity). The analysis revealed complex, cross-level interactions that complement the Lexical Quality Hypothesis.

  6. DHEA-S selectively impairs contextual-fear conditioning: support for the antiglucocorticoid hypothesis.

    Science.gov (United States)

    Fleshner, M; Pugh, C R; Tremblay, D; Rudy, J W

    1997-06-01

    The authors had reported that glucocorticoids play a selective role in fear conditioning. The adrenal steroid dehydroepiandrosterone (DHEA) has been reported to act as a functional antiglucocorticoid. If DHEA has antiglucocorticoid properties, then its effects on fear conditioning might resemble those produced by adrenalectomy. The authors now report that chronic exposure to high levels of dehydroepiandrosterone sulfate (DHEA-S; converted in vivo to DHEA) produced the same pattern of results as adrenalectomy. Specifically, treatment with DHEA-S impaired contextual fear conditioning 24 hr after conditioning but not immediately after conditioning, and like adrenalectomy, DHEA-S had no effect on auditory-cue fear conditioning. Preexposure to the context before drug treatment eliminated the amnestic effects of DHEA-S, suggesting that, like adrenalectomy, DHEA-S exerted its effect by interfering with the construction of a contextual memory representation. Thus, DHEA appears to act as a functional antiglucocorticoid in the processes that mediate learning and memory.

  7. Selected System Models

    Science.gov (United States)

    Schmidt-Eisenlohr, F.; Puñal, O.; Klagges, K.; Kirsche, M.

    Apart from the general issue of modeling the channel, the PHY and the MAC of wireless networks, there are specific modeling assumptions that are considered for different systems. In this chapter we consider three specific wireless standards and highlight modeling options for them. These are IEEE 802.11 (as example for wireless local area networks), IEEE 802.16 (as example for wireless metropolitan networks) and IEEE 802.15 (as example for body area networks). Each section on these three systems discusses also at the end a set of model implementations that are available today.

  8. BRO beta-lactamase alleles, antibiotic resistance and a test of the BRO-1 selective replacement hypothesis in Moraxella catarrhalis.

    Science.gov (United States)

    Levy, F; Walker, E S

    2004-02-01

    The hypothesis that BRO-1 selectively replaced the BRO-2 isoform of the Moraxella catarrhalis BRO beta-lactamase was tested by examining the temporal distribution, antibiotic resistance and epidemiological characteristics of isolates from a long-term collection at a single locale. A rapid, one-step PCR assay conducted on 354 isolates spanning 1984-1994 distinguished bro alleles in over 97% of the beta-lactamase-producing isolates. Probes of dot blots were used to distinguish PCR failure from non-beta-lactamase-mediated penicillin resistance. BRO-2 isolates comprised 0-10% of the population per year with no evidence of a decline over time. All beta-lactamase producers exceeded the clinical threshold for penicillin resistance. Bimodality of penicillin MICs for beta-lactamase producers was caused by variation within BRO-1 rather than differences between BRO-1 and BRO-2. Non-beta-lactamase factors also confer resistance to penicillin and may contribute to the BRO-1 bimodality. The 13 BRO-2 isolates were associated with diverse genotypes within which there was evidence of epidemiologically linked clusters. The exclusive association of BRO-2 with four unrelated genotypes suggested maintenance of BRO-2 by recurrent mutation or horizontal exchange. The relative rarity of BRO-2 throughout the study, the absence of a declining temporal trend, and genetic diversity within BRO-2 all failed to support the hypothesis that BRO-2 was more common in the past and has been selectively replaced by BRO-1.

  9. Launch vehicle selection model

    Science.gov (United States)

    Montoya, Alex J.

    1990-01-01

    Over the next 50 years, humans will be heading for the Moon and Mars to build scientific bases to gain further knowledge about the universe and to develop rewarding space activities. These large scale projects will last many years and will require large amounts of mass to be delivered to Low Earth Orbit (LEO). It will take a great deal of planning to complete these missions in an efficient manner. The planning of a future Heavy Lift Launch Vehicle (HLLV) will significantly impact the overall multi-year launching cost for the vehicle fleet depending upon when the HLLV will be ready for use. It is desirable to develop a model in which many trade studies can be performed. In one sample multi-year space program analysis, the total launch vehicle cost of implementing the program reduced from 50 percent to 25 percent. This indicates how critical it is to reduce space logistics costs. A linear programming model has been developed to answer such questions. The model is now in its second phase of development, and this paper will address the capabilities of the model and its intended uses. The main emphasis over the past year was to make the model user friendly and to incorporate additional realistic constraints that are difficult to represent mathematically. We have developed a methodology in which the user has to be knowledgeable about the mission model and the requirements of the payloads. We have found a representation that will cut down the solution space of the problem by inserting some preliminary tests to eliminate some infeasible vehicle solutions. The paper will address the handling of these additional constraints and the methodology for incorporating new costing information utilizing learning curve theory. The paper will review several test cases that will explore the preferred vehicle characteristics and the preferred period of construction, i.e., within the next decade, or in the first decade of the next century. Finally, the paper will explore the interaction

  10. In silico model-based inference: a contemporary approach for hypothesis testing in network biology.

    Science.gov (United States)

    Klinke, David J

    2014-01-01

    Inductive inference plays a central role in the study of biological systems, where researchers aim to increase their understanding of the system by reasoning backwards from uncertain observations to identify causal relationships among components of the system. These causal relationships are postulated from prior knowledge as a hypothesis or simply a model. Experiments are designed to test the model. Inferential statistics are used to establish a level of confidence in how well our postulated model explains the acquired data. This iterative process, commonly referred to as the scientific method, either improves our confidence in a model or suggests that we revisit our prior knowledge to develop a new model. Advances in technology impact how we use prior knowledge and data to formulate models of biological networks and how we observe cellular behavior. However, the approach for model-based inference has remained largely unchanged since Fisher, Neyman and Pearson developed the ideas in the early 1900s that gave rise to what is now known as classical statistical hypothesis (model) testing. Here, I will summarize conventional methods for model-based inference and suggest a contemporary approach, integrating ideas drawn from high-performance computing, Bayesian statistics, and chemical kinetics, to aid in our quest to discover how cells dynamically interpret and transmit information for therapeutic aims.

  11. Construction of Infinitely Many Models of the Universe on the Riemann Hypothesis

    Science.gov (United States)

    Shukla, Namrata

    2016-10-01

    The aim of this note is to remove an implausible assumption in Moser's theorem [1] to establish our Theorem 1, which gives a lower estimate for the sum p + c²ρ on the Riemann hypothesis. Corollary 1 gives a rather plausible construction of infinitely many models of the universe with positive density ρ and pressure p, since it makes use of the equation of state in the form of an inequality.

  12. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
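
    As a small, self-contained illustration of the kind of simulation the abstract mentions, the sketch below estimates the risk of a post-model-selection estimator of a regression slope (model chosen by AIC between an intercept-only and a full model) and compares it with the always-full-model estimator. The data-generating values are invented for illustration and are not taken from the paper.

```python
# Sketch: risk of a post-model-selection estimator (PMSE) of a regression
# slope, with the model (slope included or not) chosen by AIC.  Toy values.
import numpy as np

rng = np.random.default_rng(0)
n, beta, sigma, reps = 30, 0.3, 1.0, 5000
x = np.linspace(-1.0, 1.0, n)
X = np.column_stack([np.ones(n), x])

def aic(rss, k):
    # Gaussian AIC up to an additive constant.
    return n * np.log(rss / n) + 2 * k

pmse_err, full_err = [], []
for _ in range(reps):
    y = 1.0 + beta * x + rng.normal(0.0, sigma, n)
    b_full = np.linalg.lstsq(X, y, rcond=None)[0]          # intercept + slope
    rss_full = np.sum((y - X @ b_full) ** 2)
    rss_null = np.sum((y - y.mean()) ** 2)                 # intercept only
    slope_pmse = b_full[1] if aic(rss_full, 2) < aic(rss_null, 1) else 0.0
    pmse_err.append((slope_pmse - beta) ** 2)
    full_err.append((b_full[1] - beta) ** 2)

print("estimated risk of PMSE:      ", np.mean(pmse_err))
print("estimated risk of full model:", np.mean(full_err))
```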

  13. Model Selection Principles in Misspecified Models

    CERN Document Server

    Lv, Jinchi

    2010-01-01

    Model selection is of fundamental importance to high dimensional modeling featured in many contemporary applications. Classical principles of model selection include the Kullback-Leibler divergence principle and the Bayesian principle, which lead to the Akaike information criterion and Bayesian information criterion when models are correctly specified. Yet model misspecification is unavoidable when we have no knowledge of the true model or when we have the correct family of distributions but miss some true predictor. In this paper, we propose a family of semi-Bayesian principles for model selection in misspecified models, which combine the strengths of the two well-known principles. We derive asymptotic expansions of the semi-Bayesian principles in misspecified generalized linear models, which give the new semi-Bayesian information criteria (SIC). A specific form of SIC admits a natural decomposition into the negative maximum quasi-log-likelihood, a penalty on model dimensionality, and a penalty on model miss...

  14. Bayesian Model Selection and Statistical Modeling

    CERN Document Server

    Ando, Tomohiro

    2010-01-01

    Bayesian model selection is a fundamental part of the Bayesian statistical modeling process. The quality of these solutions usually depends on the goodness of the constructed Bayesian model. Realizing how crucial this issue is, many researchers and practitioners have been extensively investigating the Bayesian model selection problem. This book provides comprehensive explanations of the concepts and derivations of the Bayesian approach for model selection and related criteria, including the Bayes factor, the Bayesian information criterion (BIC), the generalized BIC, and the pseudo marginal lik

  15. Habitat fragmentation, vole population fluctuations, and the ROMPA hypothesis: An experimental test using model landscapes.

    Science.gov (United States)

    Batzli, George O

    2016-11-01

    Increased habitat fragmentation leads to smaller size of habitat patches and to greater distance between patches. The ROMPA hypothesis (ratio of optimal to marginal patch area) uniquely links vole population fluctuations to the composition of the landscape. It states that as ROMPA decreases (fragmentation increases), vole population fluctuations will increase (including the tendency to display multi-annual cycles in abundance) because decreased proportions of optimal habitat result in greater population declines and longer recovery time after a harsh season. To date, only comparative observations in the field have supported the hypothesis. This paper reports the results of the first experimental test. I used prairie voles, Microtus ochrogaster, and mowed grassland to create model landscapes with 3 levels of ROMPA (high with 25% mowed, medium with 50% mowed and low with 75% mowed). As ROMPA decreased, distances between patches of favorable habitat (high cover) increased owing to a greater proportion of unfavorable (mowed) habitat. Results from the first year with intensive live trapping indicated that the preconditions for operation of the hypothesis existed (inversely density dependent emigration and, as ROMPA decreased, increased per capita mortality and decreased per capita movement between optimal patches). Nevertheless, contrary to the prediction of the hypothesis that populations in landscapes with high ROMPA should have the lowest variability, 5 years of trapping indicated that variability was lowest with medium ROMPA. The design of field experiments may never be perfect, but these results indicate that the ROMPA hypothesis needs further rigorous testing. © 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  16. A Heckman Selection-t Model

    KAUST Repository

    Marchenko, Yulia V.

    2012-03-01

    Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. Then, this allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.
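
    No fitting code accompanies the record, and the SLt estimator itself is not sketched here; the snippet below only illustrates the underlying sample-selection problem the abstract describes, showing that complete-case OLS is biased when the selection and outcome errors are correlated. All parameter values are invented.

```python
# Sketch: sample-selection bias.  OLS on the observed (selected) cases is
# biased when the selection and outcome errors are correlated.  Toy values.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(size=n)
z = rng.normal(size=n)

# Correlated errors for the outcome and the selection equations.
e_out, e_sel = rng.multivariate_normal([0.0, 0.0],
                                       [[1.0, 0.6], [0.6, 1.0]], size=n).T

y = 1.0 + 2.0 * x + e_out                      # outcome equation (true slope 2)
observed = (0.5 * x + 1.0 * z + e_sel) > 0     # selection equation

def ols_slope(xv, yv):
    xc = xv - xv.mean()
    return np.dot(xc, yv - yv.mean()) / np.dot(xc, xc)

print("OLS slope, selected cases only:", ols_slope(x[observed], y[observed]))
print("OLS slope, full sample:        ", ols_slope(x, y))
```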

  17. Introduction. Modelling natural action selection.

    Science.gov (United States)

    Prescott, Tony J; Bryson, Joanna J; Seth, Anil K

    2007-09-29

    Action selection is the task of resolving conflicts between competing behavioural alternatives. This theme issue is dedicated to advancing our understanding of the behavioural patterns and neural substrates supporting action selection in animals, including humans. The scope of problems investigated includes: (i) whether biological action selection is optimal (and, if so, what is optimized), (ii) the neural substrates for action selection in the vertebrate brain, (iii) the role of perceptual selection in decision-making, and (iv) the interaction of group and individual action selection. A second aim of this issue is to advance methodological practice with respect to modelling natural action selection. A wide variety of computational modelling techniques are therefore employed ranging from formal mathematical approaches through to computational neuroscience, connectionism and agent-based modelling. The research described has broad implications for both natural and artificial sciences. One example, highlighted here, is its application to medical science where models of the neural substrates for action selection are contributing to the understanding of brain disorders such as Parkinson's disease, schizophrenia and attention deficit/hyperactivity disorder.

  18. Individual consistency in flight initiation distances in burrowing owls: a new hypothesis on disturbance-induced habitat selection.

    Science.gov (United States)

    Carrete, Martina; Tella, José L

    2010-04-23

    Individuals often consistently differ in personalities and behaviours that allow them to cope with environmental variation. Flight initiation distance (FID) has been measured in a variety of taxa as an estimate of the risk that an individual is willing to take when facing a predator. FID has been used to test life-history trade-offs related to anti-predatory behaviour and for conservation purposes such as to establish buffer zones to minimize human disturbance, given its species-specific consistency. Individual consistency in FID, however, has been largely overlooked. Here we show that, even after controlling for several confounding effects, this behaviour has a strong individual component (repeatability = 0.84-0.92) in a bird species, leaving a small margin for behavioural flexibility. We hypothesize that individuals may distribute themselves among breeding sites depending on their individual susceptibility to human disturbance. This habitat selection hypothesis merits further research, given its implications on both evolutionary and applied ecology research. For example, selection of human-tolerant phenotypes may be promoted through the humanization of habitats occurring worldwide, and when population means instead of individual variability in FID are considered for designing buffer zones to reduce human impacts on wildlife.

  19. Statistical power analysis a simple and general model for traditional and modern hypothesis tests

    CERN Document Server

    Murphy, Kevin R; Wolach, Allen

    2014-01-01

    Noted for its accessible approach, this text applies the latest approaches of power analysis to both null hypothesis and minimum-effect testing using the same basic unified model. Through the use of a few simple procedures and examples, the authors show readers with little expertise in statistical analysis how to obtain the values needed to carry out the power analysis for their research. Illustrations of how these analyses work and how they can be used to choose the appropriate criterion for defining statistically significant outcomes are sprinkled throughout. The book presents a simple and g
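
    The record describes the book rather than a particular procedure; as a small example of the kind of power analysis it covers, the sketch below solves for the per-group sample size of a two-sample t test and evaluates the power at a fixed n, using statsmodels. The effect size, alpha, and target power are arbitrary illustrative choices.

```python
# Sketch: sample size and power for a two-sample t test (values arbitrary).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Per-group n needed for a medium effect (d = 0.5) at alpha = 0.05, power = 0.8.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(f"required n per group: {n_per_group:.1f}")

# Power actually achieved with a fixed n of 50 per group.
power = analysis.power(effect_size=0.5, nobs1=50, alpha=0.05, ratio=1.0)
print(f"power at n = 50 per group: {power:.3f}")
```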

  20. Using environmental niche models to test the 'everything is everywhere' hypothesis for Badhamia.

    Science.gov (United States)

    Aguilar, María; Fiore-Donno, Anna-Maria; Lado, Carlos; Cavalier-Smith, Thomas

    2014-04-01

    It is often discussed whether the biogeography of free-living protists is better explained by the 'everything is everywhere'(EiE) hypothesis, which postulates that only ecology drives their distribution, or by the alternative hypothesis of 'moderate endemicity' in which geographic barriers can limit their dispersal. To formally test this, it would be necessary not only to find organisms restricted to a geographical area but also to check for their presence in any other place with a similar ecology. We propose the use of environmental niche models to generate and test null EiE distributions. Here we have analysed the distribution of 18S rDNA variants (ribotypes) of the myxomycete Badhamia melanospora (belonging to the protozoan phylum Amoebozoa) using 125 specimens from 91 localities. Two geographically structured groups of ribotypes congruent with slight morphological differences in the spores can be distinguished. One group comprises all populations from Argentina and Chile, and the other is formed by populations from North America together with human-introduced populations from other parts of the world. Environmental climatic niche models constructed separately for the two groups have significant differences, but show several overlapping areas. However, only specimens from one group were found in an intensively surveyed area in South America where both niche models overlap. It can be concluded that everything is not everywhere for B. melanospora. This taxon constitutes a complex formed by at least two cryptic species that probably diverged allopatrically in North and South America.

  1. Using environmental niche models to test the ‘everything is everywhere' hypothesis for Badhamia

    Science.gov (United States)

    Aguilar, María; Fiore-Donno, Anna-Maria; Lado, Carlos; Cavalier-Smith, Thomas

    2014-01-01

    It is often discussed whether the biogeography of free-living protists is better explained by the ‘everything is everywhere'(EiE) hypothesis, which postulates that only ecology drives their distribution, or by the alternative hypothesis of ‘moderate endemicity' in which geographic barriers can limit their dispersal. To formally test this, it would be necessary not only to find organisms restricted to a geographical area but also to check for their presence in any other place with a similar ecology. We propose the use of environmental niche models to generate and test null EiE distributions. Here we have analysed the distribution of 18S rDNA variants (ribotypes) of the myxomycete Badhamia melanospora (belonging to the protozoan phylum Amoebozoa) using 125 specimens from 91 localities. Two geographically structured groups of ribotypes congruent with slight morphological differences in the spores can be distinguished. One group comprises all populations from Argentina and Chile, and the other is formed by populations from North America together with human-introduced populations from other parts of the world. Environmental climatic niche models constructed separately for the two groups have significant differences, but show several overlapping areas. However, only specimens from one group were found in an intensively surveyed area in South America where both niche models overlap. It can be concluded that everything is not everywhere for B. melanospora. This taxon constitutes a complex formed by at least two cryptic species that probably diverged allopatrically in North and South America. PMID:24132078

  2. Matching asteroid population characteristics with a model constructed from the YORP-induced rotational fission hypothesis

    Science.gov (United States)

    Jacobson, Seth A.; Marzari, Francesco; Rossi, Alessandro; Scheeres, Daniel J.

    2016-10-01

    From the results of a comprehensive asteroid population evolution model, we conclude that the YORP-induced rotational fission hypothesis is consistent with the observed population statistics of small asteroids in the main belt including binaries and contact binaries. These conclusions rest on the asteroid rotation model of Marzari et al. ([2011]Icarus, 214, 622-631), which incorporates both the YORP effect and collisional evolution. This work adds to that model the rotational fission hypothesis, described in detail within, and the binary evolution model of Jacobson et al. ([2011a] Icarus, 214, 161-178) and Jacobson et al. ([2011b] The Astrophysical Journal Letters, 736, L19). Our complete asteroid population evolution model is highly constrained by these and other previous works, and therefore it has only two significant free parameters: the ratio of low to high mass ratio binaries formed after rotational fission events and the mean strength of the binary YORP (BYORP) effect. We successfully reproduce characteristic statistics of the small asteroid population: the binary fraction, the fast binary fraction, steady-state mass ratio fraction and the contact binary fraction. We find that in order for the model to best match observations, rotational fission produces high mass ratio (> 0.2) binary components with four to eight times the frequency as low mass ratio (<0.2) components, where the mass ratio is the mass of the secondary component divided by the mass of the primary component. This is consistent with post-rotational fission binary system mass ratio being drawn from either a flat or a positive and shallow distribution, since the high mass ratio bin is four times the size of the low mass ratio bin; this is in contrast to the observed steady-state binary mass ratio, which has a negative and steep distribution. This can be understood in the context of the BYORP-tidal equilibrium hypothesis, which predicts that low mass ratio binaries survive for a significantly

  3. Conditions for the Trivers-Willard Hypothesis to be Valid: a Minimal Population-Genetic Model

    Indian Academy of Sciences (India)

    N. V. Joshi

    2000-04-01

    The very insightful Trivers-Willard hypothesis, proposed in the early 1970s, states that females in good physiological condition are more likely to produce male offspring when the variance of reproductive success among males is high. The hypothesis has inspired a number of studies over the last three decades aimed at its experimental verification, and many of them have found adequate supportive evidence in its favour. Theoretical investigations, on the other hand, have been few, perhaps because formulating a population-genetic model for describing the Trivers-Willard hypothesis turns out to be surprisingly complex. The present study is aimed at using a minimal population-genetic model to explore one specific scenario, namely how the preference for a male offspring by females in good condition is altered when $g$, the proportion of such females in the population, changes from a low to a high value. As expected, when the proportion of such females in good condition is low in the population, i.e. for low values of $g$, the Trivers-Willard (TW) strategy goes to fixation against the equal investment strategy. This holds true up to $g_\mathrm{max}$, a critical value of $g$, above which the two strategies coexist, but the proportion of the TW strategy steadily decreases as $g$ increases to unity. Similarly, when the effect of well-endowed males attaining a disproportionately high number of matings is more pronounced, the TW strategy is more likely to go to fixation. Interestingly, the success of the TW strategy has a complex dependence on the variance of the physiological condition of females. If the difference in the two types of conditions is not large, the TW strategy is favoured, and its success is more likely as the difference increases. However, beyond a critical value of the difference, the TW strategy is found to be less and less likely to succeed as the difference becomes larger. Possible reasons for these effects are discussed.

  4. Bayesian Evidence and Model Selection

    CERN Document Server

    Knuth, Kevin H; Malakar, Nabin K; Mubeen, Asim M; Placek, Ben

    2014-01-01

    In this paper we review the concept of the Bayesian evidence and its application to model selection. The theory is presented along with a discussion of analytic, approximate and numerical techniques. Applications to several practical examples within the context of signal processing are discussed.
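
    As a minimal numerical illustration of the evidence-based model selection the review discusses, the sketch below compares a zero-parameter model (a fair coin) with a one-parameter model (unknown bias, uniform prior) by direct evaluation of the evidence on a grid. The data are invented.

```python
# Sketch: Bayesian evidence by direct numerical evaluation (coin-toss example).
import numpy as np
from scipy.stats import binom

k, n = 63, 100                       # observed heads out of n tosses (invented)

# Model 0: fair coin, no free parameter -> evidence is just the likelihood.
Z0 = binom.pmf(k, n, 0.5)

# Model 1: unknown bias theta, uniform prior on (0, 1); the evidence is the
# average likelihood over the prior, evaluated here on a fine grid.
theta = np.linspace(1e-6, 1.0 - 1e-6, 10_000)
Z1 = binom.pmf(k, n, theta).mean()

print("evidence, fair coin:  ", Z0)
print("evidence, free bias:  ", Z1)
print("Bayes factor (1 vs 0):", Z1 / Z0)
```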

  5. Testing the axial dipole hypothesis for the Moon by modeling the direction of crustal magnetization

    Science.gov (United States)

    Oliveira, J. S.; Wieczorek, M. A.

    2017-02-01

    Orbital magnetic field data show that portions of the Moon's crust are strongly magnetized, and paleomagnetic data of lunar samples suggest that Earth-strength magnetic fields could have existed during the first several hundred million years of lunar history. The origin of the fields that magnetized the crust is not understood and could be the result of either a long-lived core-generated dynamo or transient fields associated with large impact events. Core dynamo models usually predict that the field would be predominantly dipolar, with the dipole axis aligned with the rotation axis. We test this hypothesis by modeling the direction of crustal magnetization using a global magnetic field model of the Moon derived from Lunar Prospector and Kaguya magnetometer data. We make use of a model that assumes that the crust is unidirectionally magnetized. The intensity of magnetization can vary within the crust, and the best fitting direction of magnetization is obtained from a nonnegative least squares inversion. From the best fitting magnetization direction we obtain the corresponding north magnetic pole predicted by an internal dipolar field. Some of the obtained paleopoles are associated with the current geographic poles, while other well-constrained anomalies have paleopoles at equatorial latitudes, preferentially at 90° east and west longitudes. One plausible hypothesis for this distribution of paleopoles is that the Moon possessed a long-lived dipolar field but that the dipole was not aligned with the rotation axis as a result of large-scale heat flow heterogeneities at the core-mantle boundary.
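
    The inversion is described only verbally in the record; the toy sketch below shows the nonnegative least-squares step on a made-up linear system A m = d using scipy.optimize.nnls, the kind of solver used when magnetization intensities are constrained to be nonnegative. The operator and data are random placeholders, not lunar field data.

```python
# Sketch: nonnegative least-squares inversion on a toy linear system A m = d.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 10))            # forward operator (placeholder)
m_true = np.abs(rng.normal(size=10))     # true nonnegative model
d = A @ m_true + 0.01 * rng.normal(size=50)

m_hat, residual_norm = nnls(A, d)
print("all recovered intensities >= 0:", bool(np.all(m_hat >= 0)))
print("residual norm:", residual_norm)
```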

  6. Coarse-grained models of stripe forming systems: phase diagrams, anomalies, and scaling hypothesis.

    Science.gov (United States)

    Mendoza-Coto, Alejandro; Stariolo, Daniel A

    2012-11-01

    Two coarse-grained models which capture some universal characteristics of stripe forming systems are studied. At high temperatures, the structure factors of both models attain their maxima on a circle in reciprocal space, as a consequence of generic isotropic competing interactions. Although this is known to lead to some universal properties, we show that the phase diagrams have important differences, which are a consequence of the particular k dependence of the fluctuation spectrum in each model. The phase diagrams are computed in a mean field approximation and also after inclusion of small fluctuations, which are shown to modify drastically the mean field behavior. Observables like the modulation length and magnetization profiles are computed for the whole temperature range accessible to both models and some important differences in behavior are observed. A stripe compression modulus is computed, showing an anomalous behavior with temperature as recently reported in related models. Also, a recently proposed scaling hypothesis for modulated systems is tested and found to be valid for both models studied.

  7. The active learning hypothesis of the job-demand-control model: an experimental examination.

    Science.gov (United States)

    Häusser, Jan Alexander; Schulz-Hardt, Stefan; Mojzisch, Andreas

    2014-01-01

    The active learning hypothesis of the job-demand-control model [Karasek, R. A. 1979. "Job Demands, Job Decision Latitude, and Mental Strain: Implications for Job Redesign." Administrative Science Quarterly 24: 285-307] proposes positive effects of high job demands and high job control on performance. We conducted a 2 (demands: high vs. low) × 2 (control: high vs. low) experimental office workplace simulation to examine this hypothesis. Since performance during a work simulation is confounded by the boundaries of the demands and control manipulations (e.g. time limits), we used a post-test, in which participants continued working at their task, but without any manipulation of demands and control. This post-test allowed for examining active learning (transfer) effects in an unconfounded fashion. Our results revealed that high demands had a positive effect on quantitative performance, without affecting task accuracy. In contrast, high control resulted in a speed-accuracy trade-off; that is, participants in the high control conditions worked slower but with greater accuracy than participants in the low control conditions.

  8. The Qualitative Expectations Hypothesis

    DEFF Research Database (Denmark)

    Frydman, Roman; Johansen, Søren; Rahbek, Anders

    We introduce the Qualitative Expectations Hypothesis (QEH) as a new approach to modeling macroeconomic and financial outcomes. Building on John Muth's seminal insight underpinning the Rational Expectations Hypothesis (REH), QEH represents the market's forecasts to be consistent with the predictions...

  9. The Qualitative Expectations Hypothesis

    DEFF Research Database (Denmark)

    Frydman, Roman; Johansen, Søren; Rahbek, Anders

    2017-01-01

    We introduce the Qualitative Expectations Hypothesis (QEH) as a new approach to modeling macroeconomic and financial outcomes. Building on John Muth's seminal insight underpinning the Rational Expectations Hypothesis (REH), QEH represents the market's forecasts to be consistent with the predictions...

  10. Model Selection for Pion Photoproduction

    CERN Document Server

    Landay, J; Fernández-Ramírez, C; Hu, B; Molina, R

    2016-01-01

    Partial-wave analysis of meson and photon-induced reactions is needed to enable the comparison of many theoretical approaches to data. In both energy-dependent and independent parametrizations of partial waves, the selection of the model amplitude is crucial. Principles of the $S$-matrix are implemented to differing degrees in different approaches, but an often overlooked aspect concerns the selection of undetermined coefficients and functional forms for fitting, leading to a minimal yet sufficient parametrization. We present an analysis of low-energy neutral pion photoproduction using the Least Absolute Shrinkage and Selection Operator (LASSO) in combination with criteria from information theory and $K$-fold cross validation. These methods are not yet widely known in the analysis of excited hadrons but will become relevant in the era of precision spectroscopy. The principle is first illustrated with synthetic data, then, its feasibility for real data is demonstrated by analyzing the latest available measu...
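
    The partial-wave fits themselves are not reproduced here; the sketch below only illustrates the generic machinery named in the abstract, LASSO combined with an information criterion and with K-fold cross-validation, on synthetic regression data using scikit-learn.

```python
# Sketch: LASSO-based selection of a sparse parametrization, with the penalty
# chosen by an information criterion (BIC) and by 5-fold cross-validation.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV, LassoLarsIC

X, y = make_regression(n_samples=200, n_features=30, n_informative=5,
                       noise=5.0, random_state=0)

bic_fit = LassoLarsIC(criterion="bic").fit(X, y)   # penalty chosen by BIC
cv_fit = LassoCV(cv=5, random_state=0).fit(X, y)   # penalty chosen by 5-fold CV

print("nonzero coefficients (BIC):", int(np.sum(bic_fit.coef_ != 0)))
print("nonzero coefficients (CV): ", int(np.sum(cv_fit.coef_ != 0)))
```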

  11. Entropic criterion for model selection

    Science.gov (United States)

    Tseng, Chih-Yuan

    2006-10-01

    Model or variable selection is usually achieved by ranking models in increasing order of preference. One such method is to apply the Kullback-Leibler distance, or relative entropy, as a selection criterion. Yet this raises two questions: why use this criterion, and are there any other criteria? Besides, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion, which requires no prior information and can be applied to different fields. We examine this criterion on a physical problem, simple fluids, and the results are promising.
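
    As a small illustration of ranking candidate models by relative entropy, the sketch below computes the Kullback-Leibler distance of two candidate distributions from an empirical one with scipy.stats.entropy. The distributions are invented and unrelated to the simple-fluids application of the paper.

```python
# Sketch: ranking two candidate models by relative entropy (KL divergence)
# from an empirical distribution.  All distributions here are invented.
import numpy as np
from scipy.stats import entropy

bins = np.arange(10)
empirical = np.array([1, 3, 7, 12, 18, 20, 17, 12, 7, 3], dtype=float)
empirical /= empirical.sum()

model_a = np.exp(-0.5 * ((bins - 5.0) / 1.8) ** 2); model_a /= model_a.sum()
model_b = np.exp(-0.5 * ((bins - 3.0) / 3.0) ** 2); model_b /= model_b.sum()

# entropy(p, q) returns the KL divergence D(p || q); smaller is preferred.
print("D(empirical || model A):", entropy(empirical, model_a))
print("D(empirical || model B):", entropy(empirical, model_b))
```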

  12. A Selective Review of Group Selection in High Dimensional Models

    CERN Document Server

    Huang, Jian; Ma, Shuangge

    2012-01-01

    Grouping structures arise naturally in many statistical modeling problems. Several methods have been proposed for variable selection that respect grouping structure in variables. Examples include the group LASSO and several concave group selection methods. In this article, we give a selective review of group selection concerning methodological developments, theoretical properties, and computational algorithms. We pay particular attention to group selection methods involving concave penalties. We address both group selection and bi-level selection methods. We describe several applications of these methods in nonparametric additive models, semiparametric regression, seemingly unrelated regressions, genomic data analysis and genome wide association studies. We also highlight some issues that require further study.

  13. The Late Ordovician crisis: the Large Igneous Province hypothesis tested by global carbon cycle modeling.

    Science.gov (United States)

    Lefebvre, Vincent; Servais, Thomas; François, Louis; Averbuch, Olivier

    2010-05-01

    The causes of the well-known Late Ordovician-Hirnantian glaciation remain largely debated. This global cooling event is generally attributed to a severe decrease of atmospheric pCO2 during a time of general greenhouse climate but its duration is not fully determined. The climate perturbation is synchronous with one of the biggest biotic crises of Earth history. Some authors have shown that, considering the Ashgillian paleogeography, a drop in pCO2 below a threshold of 8x to 10x PAL (Present Atmospheric Level) may induce a decrease in temperature at high latitudes so that the installation of an ice sheet on Gondwana could be possible. Such a process requires an intensification of silicate weathering and/or organic carbon burial, the two major processes potentially driving a decrease in atmospheric pCO2 at the geologic time scale. The Late Ordovician is known to be a period of high mantle activity marked by a lack of geomagnetic field reversals and high volcanic activity. Barnes (2004) and Courtillot and Olson (2007) link this process to a superplume event that may give rise to continental flood basalts. In the present study, we tested this hypothesis with a global carbon cycle numerical box-model coupled with an Energy Balance Climate Model. The model is an upgrade of that used by Grard et al. (2005) to simulate the environmental impact of the Siberian traps at the P/T boundary. The configuration of the box-model has been set using the Late Ordovician paleogeography. In each oceanic box, the model calculates the evolution of carbon, phosphorus and oxygen concentrations and alkalinity. It also calculates atmospheric pCO2, atmospheric and oceanic δ13C. We tested different scenarios of Large Igneous Province (LIP) emplacements and organic carbon cycle interactions simulating atmospheric pCO2 drops of amplitude large enough to produce the Hirnantian glaciation. We show that the hypothesis of a low-latitude LIP accounts well for the Late Ordovician climate

  14. Selected soil thermal conductivity models

    Directory of Open Access Journals (Sweden)

    Rerak Monika

    2017-01-01

    Full Text Available The paper presents models of soil thermal conductivity collected from the literature. This is a very important parameter, which allows one to assess how much heat can be transferred from underground power cables through the soil. The models are presented in table form, so that when the properties of the soil are given, it is possible to select the most accurate method of calculating its thermal conductivity. Precise determination of this parameter allows the cable line to be designed in such a way that cable overheating does not occur.

  15. Market disruption, cascading effects, and economic recovery:a life-cycle hypothesis model.

    Energy Technology Data Exchange (ETDEWEB)

    Sprigg, James A.

    2004-11-01

    This paper builds upon previous work [Sprigg and Ehlen, 2004] by introducing a bond market into a model of production and employment. The previous paper described an economy in which households choose whether to enter the labor and product markets based on wages and prices. Firms experiment with prices and employment levels to maximize their profits. We developed agent-based simulations using Aspen, a powerful economic modeling tool developed at Sandia, to demonstrate that multiple-firm economies converge toward the competitive equilibria typified by lower prices and higher output and employment, but also suffer from market noise stemming from consumer churn. In this paper we introduce a bond market as a mechanism for household savings. We simulate an economy of continuous overlapping generations in which each household grows older in the course of the simulation and continually revises its target level of savings according to a life-cycle hypothesis. Households can seek employment, earn income, purchase goods, and contribute to savings until they reach the mandatory retirement age; upon retirement households must draw from savings in order to purchase goods. This paper demonstrates the simultaneous convergence of product, labor, and savings markets to their calculated equilibria, and simulates how a disruption to a productive sector will create cascading effects in all markets. Subsequent work will use similar models to simulate how disruptions, such as terrorist attacks, would interplay with consumer confidence to affect financial markets and the broader economy.

  16. Pharmacophore model of drugs involved in P-glycoprotein multidrug resistance: explanation of structural variety (hypothesis).

    Science.gov (United States)

    Pajeva, Ilza K; Wiese, Michael

    2002-12-19

    A general pharmacophore model of P-glycoprotein (P-gp) drugs is proposed that is based on a highly diverse data set and relates to the verapamil binding site of the protein. It is derived from structurally different drugs using the program GASP. The pharmacophore model consists of two hydrophobic points, three hydrogen bond (HB) acceptor points, and one HB donor point. Pharmacophore patterns of various drugs are obtained, and different binding modes are presumed for some of them. It is concluded that the binding affinity of the drugs depends on the number of the pharmacophore points simultaneously involved in the interaction with P-gp. On the basis of the obtained results, a hypothesis is proposed to explain the broad structural variety of the P-gp substrates and inhibitors: (i) the verapamil binding site of P-gp has several points that can participate in hydrophobic and HB interactions; (ii) different drugs can interact with different receptor points in different binding modes.

  17. Modeling evolution of the mind and cultures: emotional Sapir-Whorf hypothesis

    Science.gov (United States)

    Perlovsky, Leonid I.

    2009-05-01

    Evolution of cultures is ultimately determined by mechanisms of the human mind. The paper discusses the mechanisms of evolution of language from primordial undifferentiated animal cries to contemporary conceptual contents. In parallel with differentiation of conceptual contents, the conceptual contents were differentiated from emotional contents of languages. The paper suggests the neural brain mechanisms involved in these processes. Experimental evidence and theoretical arguments are discussed, including mathematical approaches to cognition and language: modeling fields theory, the knowledge instinct, and the dual model connecting language and cognition. Mathematical results are related to cognitive science, linguistics, and psychology. The paper gives an initial mathematical formulation and mean-field equations for the hierarchical dynamics of both the human mind and culture. In the heterarchy of the mind, the operation of the knowledge instinct manifests through mechanisms of differentiation and synthesis. The emotional contents of language are related to language grammar. The conclusion is an emotional version of the Sapir-Whorf hypothesis. Cultural advantages of "conceptual" pragmatic cultures, in which emotionality of language is diminished and differentiation overtakes synthesis, resulting in fast evolution at the price of self-doubt and internal crises, are compared to those of traditional cultures, where differentiation lags behind synthesis, resulting in cultural stability at the price of stagnation. A multi-language, multi-ethnic society might combine the benefits of stability and fast differentiation. Unsolved problems and future theoretical and experimental directions are discussed.

  18. A model selection approach to analysis of variance and covariance.

    Science.gov (United States)

    Alber, Susan A; Weiss, Robert E

    2009-06-15

    An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment-by-covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both the treatment main effect and the treatment interaction with a continuous covariate, with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partition are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures. Copyright (c) 2009 John Wiley & Sons, Ltd.

  19. Elevated Heat Pump hypothesis validation by using satellite data and CMIP5 climate model simulations

    Science.gov (United States)

    Biondi, R.; Cagnazzo, C.; Cairo, F.; Fierli, F.

    2016-12-01

    Air pollution plays an important role in the health of South Asian populations owing to the increasing emission of atmospheric pollutants connected with population growth and industrial development. At the same time, monsoon rainfall trends and patterns have changed, causing serious economic and societal impacts. In this study we have analyzed the link between aerosols and the monsoon system, focusing on a specific mechanism: the Elevated Heat Pump (EHP) hypothesis. According to the EHP, the load of dust, organic carbon and black carbon in the pre-monsoon season over the Indo-Gangetic Plain and the foothills of the Himalayas induces enhanced warming in the middle and upper troposphere and changes the convection patterns. As a consequence, rainfall over northern India increases in late spring and early summer, and rainfall over all of India decreases in late summer. However, conclusions about this proposed mechanism are still debated and carry large uncertainties, with ambiguity stemming from the lack of real observations and from the inconsistency of the measurements. By using Historical Natural runs of 3 different Coupled Model Intercomparison Project Phase 5 (CMIP5) models with interactive aerosol loading, we have analysed the variation of precipitation and atmospheric temperature in years of high and low aerosol load over a 160-year period. To deepen the study and validate the model results, we have also included in our analyses the International Satellite Cloud Climatology Project (ISCCP) Deep Convective Tracking Database and GPS Radio Occultation (RO) measurements. Our preliminary results from the models and the two satellite datasets do not show significant evidence of the EHP in terms of convection patterns, while the middle- and upper-troposphere thermal structure is consistent with previous findings.

  20. A stochastic model of gene-culture coevolution suggested by the "culture historical hypothesis" for the evolution of adult lactose absorption in humans.

    Science.gov (United States)

    Aoki, K

    1986-05-01

    A stochastic model of gene-culture coevolution, suggested by the "culture historical hypothesis" of Simoons and McCracken, is presented. According to this hypothesis, adult lactose absorption, believed to be an autosomal dominant trait, attained a high frequency in some human populations due to the positive selection pressure induced by culturally determined milk use in those populations. Two-dimensional Kolmogorov backward equations with appropriate boundary conditions are derived for the ultimate fixation probability of milk users, of the gene for adult lactose absorption, and of both jointly, and for the average time until fixation of the gene. These boundary value problems are solved numerically by the Gauss-Seidel method. I define a theoretical measure of the correlation between gene and culture in terms of the three ultimate fixation probabilities. Monte Carlo simulations are conducted to check and extend the numerical results and also to obtain the first arrival time at gene frequency 0.70, which is approximately the highest observed frequency in any population. Two results that pertain to the culture historical hypothesis are obtained. First, the incomplete correlation observed between adult lactose absorption and milk use does not necessarily constitute evidence against the hypothesis. Second, for the postulated genetic change to have occurred within the 6000-year period since the advent of dairying, either the effective population size was of the order of 100, or, if it was of larger order, the selection coefficient probably had to exceed 5%.
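
    The Kolmogorov backward equations of the paper are not reproduced; the sketch below shows only the Gauss-Seidel iteration itself, the numerical workhorse the abstract mentions, applied to a small diagonally dominant linear system with arbitrary coefficients.

```python
# Sketch: Gauss-Seidel iteration for A x = b on a small, diagonally dominant
# system (arbitrary numbers; convergence is guaranteed for such systems).
import numpy as np

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])

x = np.zeros_like(b)
for _ in range(100):
    x_old = x.copy()
    for i in range(len(b)):
        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x[i] = (b[i] - s) / A[i, i]
    if np.max(np.abs(x - x_old)) < 1e-10:   # stop when the update is tiny
        break

print("Gauss-Seidel solution:", x)
print("direct solve:         ", np.linalg.solve(A, b))
```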

  1. General hypothesis and shell model for the synthesis of semiconductor nanotubes, including carbon nanotubes

    Science.gov (United States)

    Mohammad, S. Noor

    2010-09-01

    Semiconductor nanotubes, including carbon nanotubes, have vast potential for new technology development. The fundamental physics and growth kinetics of these nanotubes are still obscured. Various models developed to elucidate the growth suffer from limited applicability. An in-depth investigation of the fundamentals of nanotube growth has, therefore, been carried out. For this investigation, various features of nanotube growth, and the role of the foreign element catalytic agent (FECA) in this growth, have been considered. Observed growth anomalies have been analyzed. Based on this analysis, a new shell model and a general hypothesis have been proposed for the growth. The essential element of the shell model is the seed generated from segregation during growth. The seed structure has been defined, and the formation of droplet from this seed has been described. A modified definition of the droplet exhibiting adhesive properties has also been presented. Various characteristics of the droplet, required for alignment and organization of atoms into tubular forms, have been discussed. Employing the shell model, plausible scenarios for the formation of carbon nanotubes, and the variation in the characteristics of these carbon nanotubes have been articulated. The experimental evidences, for example, for the formation of shell around a core, dipole characteristics of the seed, and the existence of nanopores in the seed, have been presented. They appear to justify the validity of the proposed model. The diversities of nanotube characteristics, fundamentals underlying the creation of bamboo-shaped carbon nanotubes, and the impurity generation on the surface of carbon nanotubes have been elucidated. The catalytic action of FECA on growth has been quantified. The applicability of the proposed model to the nanotube growth by a variety of mechanisms has been elaborated. These mechanisms include the vapor-liquid-solid mechanism, the oxide-assisted growth mechanism, the self

  2. Model selection for pion photoproduction

    Science.gov (United States)

    Landay, J.; Döring, M.; Fernández-Ramírez, C.; Hu, B.; Molina, R.

    2017-01-01

    Partial-wave analysis of meson and photon-induced reactions is needed to enable the comparison of many theoretical approaches to data. In both energy-dependent and independent parametrizations of partial waves, the selection of the model amplitude is crucial. Principles of the S matrix are implemented to differing degrees in different approaches, but an often overlooked aspect concerns the selection of undetermined coefficients and functional forms for fitting, leading to a minimal yet sufficient parametrization. We present an analysis of low-energy neutral pion photoproduction using the least absolute shrinkage and selection operator (LASSO) in combination with criteria from information theory and K-fold cross validation. These methods are not yet widely known in the analysis of excited hadrons but will become relevant in the era of precision spectroscopy. The principle is first illustrated with synthetic data; then, its feasibility for real data is demonstrated by analyzing the latest available measurements of differential cross sections (dσ/dΩ), photon-beam asymmetries (Σ), and target asymmetry differential cross sections (dσT/dΩ ≡ T dσ/dΩ) in the low-energy regime.

  3. Application of Multilevel Models to Morphometric Data. Part 1. Linear Models and Hypothesis Testing

    Directory of Open Access Journals (Sweden)

    O. Tsybrovskyy

    2003-01-01

    Full Text Available Morphometric data usually have a hierarchical structure (i.e., cells are nested within patients), which should be taken into consideration in the analysis. In recent years, special methods of handling hierarchical data, called multilevel models (MM), as well as corresponding software have received considerable development. However, there has been no application of these methods to morphometric data yet. In this paper we report our first experience of analyzing karyometric data by means of MLwiN – a dedicated program for multilevel modeling. Our data were obtained from 34 follicular adenomas and 44 follicular carcinomas of the thyroid. We show examples of fitting and interpreting MM of different complexity, and draw a number of interesting conclusions about the differences in nuclear morphology between follicular thyroid adenomas and carcinomas. We also demonstrate substantial advantages of multilevel models over conventional, single-level statistics, which have been adopted previously to analyze karyometric data. In addition, some theoretical issues related to MM as well as major statistical software for MM are briefly reviewed.
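
    MLwiN scripts are not part of the record; a roughly analogous two-level random-intercept model (cells nested within patients) can be fitted in Python with statsmodels, as sketched below on simulated karyometric-style data. The variable names, group sizes, and effect sizes are invented.

```python
# Sketch: two-level random-intercept model (cells nested within patients),
# fitted with statsmodels MixedLM on simulated karyometric-style data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_patients, cells_per_patient = 40, 25

patient = np.repeat(np.arange(n_patients), cells_per_patient)
diagnosis = np.repeat(rng.integers(0, 2, n_patients), cells_per_patient)  # 0/1
patient_effect = np.repeat(rng.normal(0.0, 1.0, n_patients), cells_per_patient)
nuclear_area = 50 + 5 * diagnosis + patient_effect + rng.normal(0.0, 2.0, patient.size)

df = pd.DataFrame({"nuclear_area": nuclear_area,
                   "diagnosis": diagnosis,
                   "patient": patient})

model = smf.mixedlm("nuclear_area ~ diagnosis", df, groups=df["patient"])
print(model.fit().summary())
```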

  4. Association study of 167 candidate genes for schizophrenia selected by a multi-domain evidence-based prioritization algorithm and neurodevelopmental hypothesis.

    Science.gov (United States)

    Zhao, Zhongming; Webb, Bradley T; Jia, Peilin; Bigdeli, T Bernard; Maher, Brion S; van den Oord, Edwin; Bergen, Sarah E; Amdur, Richard L; O'Neill, Francis A; Walsh, Dermot; Thiselton, Dawn L; Chen, Xiangning; Pato, Carlos N; Riley, Brien P; Kendler, Kenneth S; Fanous, Ayman H

    2013-01-01

    Integrating evidence from multiple domains is useful in prioritizing disease candidate genes for subsequent testing. We ranked all known human genes (n=3819) under linkage peaks in the Irish Study of High-Density Schizophrenia Families using three different evidence domains: 1) a meta-analysis of microarray gene expression results using the Stanley Brain collection, 2) a schizophrenia protein-protein interaction network, and 3) a systematic literature search. Each gene was assigned a domain-specific p-value and ranked after evaluating the evidence within each domain. For comparison to this ranking process, a large-scale candidate gene hypothesis was also tested by including genes with Gene Ontology terms related to neurodevelopment. Subsequently, genotypes of 3725 SNPs in 167 genes from a custom Illumina iSelect array were used to evaluate the top ranked vs. hypothesis selected genes. Seventy-three genes were both highly ranked and involved in neurodevelopment (category 1) while 42 and 52 genes were exclusive to neurodevelopment (category 2) or highly ranked (category 3), respectively. The most significant associations were observed in genes PRKG1, PRKCE, and CNTN4 but no individual SNPs were significant after correction for multiple testing. Comparison of the approaches showed an excess of significant tests using the hypothesis-driven neurodevelopment category. Random selection of similar sized genes from two independent genome-wide association studies (GWAS) of schizophrenia showed the excess was unlikely by chance. In a further meta-analysis of three GWAS datasets, four candidate SNPs reached nominal significance. Although gene ranking using integrated sources of prior information did not enrich for significant results in the current experiment, gene selection using an a priori hypothesis (neurodevelopment) was superior to random selection. As such, further development of gene ranking strategies using more carefully selected sources of information is warranted.

  5. Association Study of 167 Candidate Genes for Schizophrenia Selected by a Multi-Domain Evidence-Based Prioritization Algorithm and Neurodevelopmental Hypothesis

    Science.gov (United States)

    Jia, Peilin; Bigdeli, T. Bernard; Maher, Brion S.; van den Oord, Edwin; Bergen, Sarah E.; Amdur, Richard L.; O'Neill, Francis A.; Walsh, Dermot; Thiselton, Dawn L.; Chen, Xiangning; Pato, Carlos N.; Riley, Brien P.; Kendler, Kenneth S.; Fanous, Ayman H.

    2013-01-01

    Integrating evidence from multiple domains is useful in prioritizing disease candidate genes for subsequent testing. We ranked all known human genes (n = 3819) under linkage peaks in the Irish Study of High-Density Schizophrenia Families using three different evidence domains: 1) a meta-analysis of microarray gene expression results using the Stanley Brain collection, 2) a schizophrenia protein-protein interaction network, and 3) a systematic literature search. Each gene was assigned a domain-specific p-value and ranked after evaluating the evidence within each domain. For comparison to this ranking process, a large-scale candidate gene hypothesis was also tested by including genes with Gene Ontology terms related to neurodevelopment. Subsequently, genotypes of 3725 SNPs in 167 genes from a custom Illumina iSelect array were used to evaluate the top ranked vs. hypothesis selected genes. Seventy-three genes were both highly ranked and involved in neurodevelopment (category 1) while 42 and 52 genes were exclusive to neurodevelopment (category 2) or highly ranked (category 3), respectively. The most significant associations were observed in genes PRKG1, PRKCE, and CNTN4 but no individual SNPs were significant after correction for multiple testing. Comparison of the approaches showed an excess of significant tests using the hypothesis-driven neurodevelopment category. Random selection of similar sized genes from two independent genome-wide association studies (GWAS) of schizophrenia showed the excess was unlikely by chance. In a further meta-analysis of three GWAS datasets, four candidate SNPs reached nominal significance. Although gene ranking using integrated sources of prior information did not enrich for significant results in the current experiment, gene selection using an a priori hypothesis (neurodevelopment) was superior to random selection. As such, further development of gene ranking strategies using more carefully selected sources of information is

  6. Comparing a Real-Life WSN Platform Small Network and its OPNET Modeler Model using Hypothesis Testing

    Directory of Open Access Journals (Sweden)

    Gilbert E. Pérez

    2014-12-01

    Full Text Available To avoid the high cost and arduous effort usually associated with field analysis of Wireless Sensor Networks (WSNs), Modeling and Simulation (M&S) is used to predict the behavior and performance of the network. However, the simulation models utilized to imitate real-life networks are often general purpose. Therefore, they are less likely to provide accurate predictions for different real-life networks. In this paper, a comparison methodology based on hypothesis testing is proposed to evaluate and compare simulation output versus real-life network measurements. Performance-related parameters such as traffic generation rates and goodput rates for a small WSN are considered. To execute the comparison methodology, a "Comparison Tool", composed of MATLAB scripts, is developed and used. The comparison tool demonstrates the need for model verification and the analysis of agreement between the simulation and empirical measurements.
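
    The MATLAB comparison tool itself is not shown in the record; the sketch below illustrates the same idea of a hypothesis test between simulated and measured performance samples, here a Welch two-sample t test on goodput rates using scipy. The numbers are placeholders, not OPNET Modeler or testbed output.

```python
# Sketch: Welch two-sample t test comparing simulated vs. measured goodput
# samples (placeholder numbers, kbit/s).
import numpy as np
from scipy import stats

simulated_goodput = np.array([48.2, 47.9, 49.1, 48.5, 47.6, 48.8, 49.0, 48.3])
measured_goodput = np.array([46.9, 47.4, 46.5, 47.8, 47.1, 46.7, 47.3, 47.0])

t_stat, p_value = stats.ttest_ind(simulated_goodput, measured_goodput,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the model and the real network differ on this metric.")
else:
    print("No evidence of disagreement at the 5% level.")
```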

  7. Bayesian model evidence for order selection and correlation testing.

    Science.gov (United States)

    Johnston, Leigh A; Mareels, Iven M Y; Egan, Gary F

    2011-01-01

    Model selection is a critical component of data analysis procedures, and is particularly difficult for small numbers of observations such as is typical of functional MRI datasets. In this paper we derive two Bayesian evidence-based model selection procedures that exploit the existence of an analytic form for the linear Gaussian model class. Firstly, an evidence information criterion is proposed as a model order selection procedure for auto-regressive models, outperforming the commonly employed Akaike and Bayesian information criteria in simulated data. Secondly, an evidence-based method for testing change in linear correlation between datasets is proposed, which is demonstrated to outperform both the traditional statistical test of the null hypothesis of no correlation change and the likelihood ratio test.
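
    The evidence information criterion proposed in the paper is not available in standard libraries, so the sketch below reproduces only the baselines it is compared against: autoregressive model order selection by AIC and BIC, using statsmodels on a simulated AR(2) series.

```python
# Sketch: autoregressive model order selection by AIC and BIC on a simulated
# AR(2) series: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t.
import numpy as np
from statsmodels.tsa.ar_model import ar_select_order
from statsmodels.tsa.arima_process import arma_generate_sample

np.random.seed(0)
y = arma_generate_sample(ar=[1, -0.6, 0.3], ma=[1], nsample=500)

for ic in ("aic", "bic"):
    sel = ar_select_order(y, maxlag=10, ic=ic)
    print(f"{ic.upper()} selects lags: {sel.ar_lags}")
```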

  8. Mutation-selection models of codon substitution and their use to estimate selective strengths on codon usage

    DEFF Research Database (Denmark)

    Yang, Ziheng; Nielsen, Rasmus

    2008-01-01

    Current models of codon substitution are formulated at the levels of nucleotide substitution and do not explicitly consider the separate effects of mutation and selection. They are thus incapable of inferring whether mutation or selection is responsible for evolution at silent sites. Here we...... to examine the null hypothesis that codon usage is due to mutation bias alone, not influenced by natural selection. Application of the test to the mammalian data led to rejection of the null hypothesis in most genes, suggesting that natural selection may be a driving force in the evolution of synonymous...... codon usage in mammals. Estimates of selection coefficients nevertheless suggest that selection on codon usage is weak and most mutations are nearly neutral. The sensitivity of the analysis on the assumed mutation model is discussed....
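
    The codon substitution models themselves are typically fitted with specialized phylogenetics software and are beyond a short sketch; shown below is only the generic likelihood-ratio test the abstract applies, comparing a mutation-bias-only null model against a mutation-plus-selection alternative given their maximized log-likelihoods. The log-likelihood values and the parameter-count difference are placeholders.

```python
# Sketch: likelihood-ratio test of "codon usage is mutation bias alone" (null)
# against "mutation bias plus selection" (alternative), using placeholder
# maximized log-likelihoods.
from scipy.stats import chi2

lnL_null = -12345.6      # maximized log-likelihood, mutation-bias-only model
lnL_alt = -12338.9       # maximized log-likelihood, mutation + selection model
extra_params = 1         # additional free parameters in the alternative model

lr_stat = 2.0 * (lnL_alt - lnL_null)
p_value = chi2.sf(lr_stat, df=extra_params)
print(f"2*delta(lnL) = {lr_stat:.2f}, p = {p_value:.4g}")
```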

  9. Confirmation of the "protein-traffic-hypothesis" and the "protein-localization-hypothesis" using the diabetes-mellitus-type-1-knock-in and transgenic-murine-models and the trepitope sequences.

    Science.gov (United States)

    Arneth, Borros

    2012-10-01

    As possible mechanisms to explain the emergence of autoimmune diseases, the current author has suggested in earlier papers two new pathways: the "protein localization hypothesis" and the "protein traffic hypothesis". The "protein localization hypothesis" states that an autoimmune disease develops if a protein accumulates in a previously unoccupied compartment, one that did not previously contain that protein. Similarly, the "protein traffic hypothesis" states that a sudden error within the transport of a certain protein leads to the emergence of an autoimmune disease. The current article discusses the usefulness of the different commercially available transgenic murine models of diabetes mellitus type 1 for confirming the aforementioned hypotheses. This discussion shows that several transgenic murine models of diabetes mellitus type 1 are in line with and confirm the aforementioned hypotheses. Furthermore, these hypotheses are additionally in line with the occurrence of several newly discovered protein sequences, the so-called trepitope sequences. These sequences modulate the immune response to certain proteins. The current study analyzed to what extent the hypotheses are supported by the occurrence of these new sequences. The occurrence of the trepitope sequences thereby provides additional evidence supporting the aforementioned hypotheses. Both the "protein localization hypothesis" and the "protein traffic hypothesis" have the potential to lead to new causal therapy concepts. The "protein localization hypothesis" and the "protein traffic hypothesis" provide conceptual explanations for the diabetes mouse models as well as for the newly discovered trepitope sequences. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Selective Maintenance Model Considering Time Uncertainty

    OpenAIRE

    Le Chen; Zhengping Shu; Yuan Li; Xuezhi Lv

    2012-01-01

    This study proposes a selective maintenance model for a weapon system during a mission interval. First, it gives relevant definitions and the operational process of the material support system. Then, it introduces current research on selective maintenance modeling. Finally, it establishes a numerical model for selecting corrective and preventive maintenance tasks, considering the time uncertainty brought by the unpredictability of the maintenance procedure, the indetermination of downtime for spares and the difference of skil...

  11. A Range-Null Hypothesis Approach for Testing DIF under the Rasch Model

    Science.gov (United States)

    Wells, Craig S.; Cohen, Allan S.; Patton, Jeffrey

    2009-01-01

    A primary concern with testing differential item functioning (DIF) using a traditional point-null hypothesis is that a statistically significant result does not imply that the magnitude of DIF is of practical interest. Similarly, for a given sample size, a non-significant result does not allow the researcher to conclude the item is free of DIF. To…

  12. A Hypothesis and Review of the Relationship between Selection for Improved Production Efficiency, Coping Behavior, and Domestication

    Directory of Open Access Journals (Sweden)

    Wendy M. Rauw

    2017-09-01

    Full Text Available Coping styles in response to stressors have been described both in humans and in other animal species. Because coping styles are directly related to individual fitness they are part of the life history strategy. Behavioral styles trade off with other life-history traits through the acquisition and allocation of resources. Domestication and subsequent artificial selection for production traits specifically focused on selection of individuals with energy sparing mechanisms for non-production traits. Domestication resulted in animals with low levels of aggression and activity, and a low hypothalamic–pituitary–adrenal (HPA axis reactivity. In the present work, we propose that, vice versa, selection for improved production efficiency may to some extent continue to favor docile domesticated phenotypes. It is hypothesized that both domestication and selection for improved production efficiency may result in the selection of reactive style animals. Both domesticated and reactive style animals are characterized by low levels of aggression and activity, and increased serotonin neurotransmitter levels. However, whereas domestication quite consistently results in a decrease in the functional state of the HPA axis, the reactive coping style is often found to be dominated by a high HPA response. This may suggest that fearfulness and coping behavior are two independent underlying dimensions to the coping response. Although it is generally proposed that animal welfare improves with selection for calmer animals that are less fearful and reactive to novelty, animals bred to be less sensitive with fewer desires may be undesirable from an ethical point of view.

  13. The Role of Selection Effects in the Contact Hypothesis: Results from a U.S. National Survey on Sexual Prejudice.

    Science.gov (United States)

    Loehr, Annalise; Doan, Long; Miller, Lisa R

    2015-11-01

    Empirical research has documented that contact with lesbians and gays is associated with more positive feelings toward and greater support for legal rights for them, but we know less about whether these effects extend to informal aspects of same-sex relationships, such as reactions to public displays of affection. Furthermore, many studies have assumed that contact influences levels of sexual prejudice; however, the possibility of selection effects, in which less sexually prejudiced people have contact, and more sexually prejudiced people do not, raises some doubts about this assumption. We used original data from a nationally representative sample of heterosexuals to determine whether those reporting contact with a lesbian, gay, bisexual, or transgender friend or relative exhibited less sexual prejudice toward lesbian and gay couples than those without contact. This study examined the effect of contact on attitudes toward formal rights and a relatively unexplored dimension, informal privileges. We estimated the effect of having contact using traditional (ordinary least squares regression) methods before accounting for selection effects using propensity score matching. After accounting for selection effects, we found no significant differences between the attitudes of those who had contact and those who did not, for either formal or informal measures. Thus, selection effects appeared to play a pivotal role in confounding the link between contact and sexual prejudice, and future studies should exercise caution in interpreting results that do not account for such selection effects.

  14. Bayesian Constrained-Model Selection for Factor Analytic Modeling

    OpenAIRE

    Peeters, Carel F.W.

    2016-01-01

    My dissertation revolves around Bayesian approaches towards constrained statistical inference in the factor analysis (FA) model. Two interconnected types of restricted-model selection are considered. These types have a natural connection to selection problems in the exploratory FA (EFA) and confirmatory FA (CFA) model and are termed Type I and Type II model selection. Type I constrained-model selection is taken to mean the determination of the appropriate dimensionality of a model. This type ...

  15. Model selection bias and Freedman's paradox

    Science.gov (United States)

    Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.

    2010-01-01

    In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced while using the data to select a single seemingly "best" model from a (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level while traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
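
    Freedman's demonstration is easy to reproduce; the sketch below (a Python illustration with assumed sample sizes and pure-noise data) regresses an unrelated response on many unrelated predictors and counts how many appear "significant" in ordinary t-tests.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n, p = 100, 50                         # observations vs. candidate predictors
        X = rng.normal(size=(n, p))
        y = rng.normal(size=n)                 # response independent of every predictor

        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - p)
        se = np.sqrt(sigma2 * np.diag(XtX_inv))
        p_values = 2 * stats.t.sf(np.abs(beta / se), df=n - p)

        print(f"{np.sum(p_values < 0.05)} of {p} unrelated predictors look 'significant' at the 0.05 level")

    On average about five percent of the spurious predictors clear the threshold, and stepwise selection applied to such data would keep them, which is the bias the model averaging estimator above is designed to mitigate.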

  16. Selected Logistics Models and Techniques.

    Science.gov (United States)

    1984-09-01

    ACCESS PROCEDURE: On-Line System (OLS), UNINET. RCA maintains proprietary control of this model, and the model is available only through a lease arrangement. SPONSOR: ASD/ACCC.

  17. MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS

    Energy Technology Data Exchange (ETDEWEB)

    Asensio Ramos, A.; Manso Sainz, R.; Martinez Gonzalez, M. J.; Socas-Navarro, H. [Instituto de Astrofisica de Canarias, E-38205, La Laguna, Tenerife (Spain); Viticchie, B. [ESA/ESTEC RSSD, Keplerlaan 1, 2200 AG Noordwijk (Netherlands); Orozco Suarez, D., E-mail: aasensio@iac.es [National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588 (Japan)

    2012-04-01

    Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.
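
    The central quantity in the record above, the Bayesian evidence ratio, can be illustrated on a toy regression problem (all priors, ranges and noise levels below are assumptions of this Python sketch, not the spectropolarimetric models): each model's evidence is its likelihood averaged over a flat parameter prior, and the ratio compares a constant model with a straight-line model.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        x = np.linspace(0.0, 1.0, 20)
        y = 0.4 * x + rng.normal(scale=0.2, size=x.size)   # data with a weak linear trend
        sigma = 0.2                                        # noise level taken as known

        def log_likelihood(mu):
            # mu broadcasts against the data vector y on the last axis
            return stats.norm.logpdf(y, loc=mu, scale=sigma).sum(axis=-1)

        # Model 0: constant offset c, flat prior on [-1, 1].
        cs = np.linspace(-1.0, 1.0, 401)
        evidence_const = np.exp(log_likelihood(cs[:, None])).mean()

        # Model 1: straight line c + b * x, flat priors on [-1, 1] for both parameters.
        bs = np.linspace(-1.0, 1.0, 401)
        mu_grid = cs[:, None, None] + bs[None, :, None] * x
        evidence_linear = np.exp(log_likelihood(mu_grid)).mean()

        print("evidence ratio (linear / constant):", evidence_linear / evidence_const)

    Because the prior is flat over a fixed box, the evidence is simply the prior-averaged likelihood, so the grid mean suffices; the Occam penalty against the extra slope parameter is implicit in that averaging.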

  18. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built...

  19. Modelling autophagy selectivity by receptor clustering on peroxisomes

    CERN Document Server

    Brown, Aidan I

    2016-01-01

    When subcellular organelles are degraded by autophagy, typically some, but not all, of each targeted organelle type are degraded. Autophagy selectivity must not only select the correct type of organelle, but must discriminate between individual organelles of the same kind. In the context of peroxisomes, we use computational models to explore the hypothesis that physical clustering of autophagy receptor proteins on the surface of each organelle provides an appropriate all-or-none signal for degradation. The pexophagy receptor proteins NBR1 and p62 are well characterized, though only NBR1 is essential for pexophagy (Deosaran et al., 2013). Extending earlier work by addressing the initial nucleation of NBR1 clusters on individual peroxisomes, we find that larger peroxisomes nucleate NBR1 clusters first and lose them due to competitive coarsening last, resulting in significant size-selectivity favouring large peroxisomes. This effect can explain the increased catalase signal that results from experimental s...

  20. Time-varying disaster risk models: An empirical assessment of the Rietz-Barro hypothesis

    DEFF Research Database (Denmark)

    Irarrazabal, Alfonso; Parra-Alvarez, Juan Carlos

    This paper revisits the fit of disaster risk models where a representative agent has recursive preferences and the probability of a macroeconomic disaster changes over time. We calibrate the model as in Wachter (2013) and perform two sets of tests to assess the empirical performance of the model ...

  1. Model selection for amplitude analysis

    CERN Document Server

    Guegan, Baptiste; Stevens, Justin; Williams, Mike

    2015-01-01

    Model complexity in amplitude analyses is often a priori under-constrained since the underlying theory permits a large number of amplitudes to contribute to most physical processes. The use of an overly complex model results in reduced predictive power and worse resolution on unknown parameters of interest. Therefore, it is common to reduce the complexity by removing from consideration some subset of the allowed amplitudes. This paper studies a data-driven method for limiting model complexity through regularization during regression in the context of a multivariate (Dalitz-plot) analysis. The regularization technique applied greatly improves the performance. A method is also proposed for obtaining the significance of a resonance in a multivariate amplitude analysis.
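
    The general mechanism, a sparsity penalty during regression switching off components the data do not require, can be sketched with a generic LASSO example (assumed synthetic data; this is not the Dalitz-plot analysis or its specific regularizer).

        import numpy as np

        rng = np.random.default_rng(3)
        n, p = 200, 12
        X = rng.normal(size=(n, p))
        beta_true = np.zeros(p)
        beta_true[[0, 3]] = [1.0, -0.7]                    # only two "amplitudes" truly contribute
        y = X @ beta_true + 0.1 * rng.normal(size=n)

        def lasso_coordinate_descent(X, y, lam, n_sweeps=200):
            """Minimize 0.5*||y - X b||^2 + lam*||b||_1 by cyclic coordinate descent."""
            beta = np.zeros(X.shape[1])
            col_norm2 = (X ** 2).sum(axis=0)
            for _ in range(n_sweeps):
                for j in range(X.shape[1]):
                    partial_residual = y - X @ beta + X[:, j] * beta[j]
                    rho = X[:, j] @ partial_residual
                    beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norm2[j]
            return beta

        beta_hat = lasso_coordinate_descent(X, y, lam=20.0)
        print("regressors kept by the penalty:", np.flatnonzero(np.abs(beta_hat) > 1e-8))

    The penalty typically drives the spurious coefficients exactly to zero while only shrinking the genuine ones, which is the behaviour that reduces model complexity without removing amplitudes by hand.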

  2. Disruption of the LTD dialogue between the cerebellum and the cortex in Angelman syndrome model: a timing hypothesis

    Directory of Open Access Journals (Sweden)

    Guy Cheron

    2014-11-01

    Full Text Available Angelman syndrome is a genetic neurodevelopmental disorder in which cerebellar functioning impairment has been documented despite the absence of gross structural abnormalities. Characteristically, a spontaneous 160 Hz oscillation emerges in the Purkinje cell network of the Ube3am-/p+ Angelman mouse model. This abnormal oscillation is induced by enhanced Purkinje cell rhythmicity and hypersynchrony along the parallel fiber beam. We present a pathophysiological hypothesis for the neurophysiology underlying major aspects of the clinical phenotype of Angelman syndrome, including cognitive, language and motor deficits, involving long-range connections between the cerebellar and the cortical networks. This hypothesis states that the alteration of cerebellar rhythmic activity impinges on cerebellar long-term depression (LTD) plasticity, which in turn alters the LTD plasticity in the cerebral cortex. This hypothesis was based on preliminary experiments using electrical stimulation of the whisker pad performed in alert mice, showing that after an 8 Hz LTD-inducing protocol, the cerebellar LTD accompanied by a delayed response in the wild-type mice is missing in Ube3am-/p+ mice, and that the LTD induced in the barrel cortex following the same peripheral stimulation in wild-type mice is reversed into an LTP in the Ube3am-/p+ mice. The control exerted by the cerebellum on the excitation vs. inhibition balance in the cerebral cortex and the possible role played by the timing plasticity of the Purkinje cell LTD on the spike-timing dependent plasticity (STDP) of the pyramidal neurons are discussed in the context of the present hypothesis.

  3. Disruption of the LTD dialogue between the cerebellum and the cortex in Angelman syndrome model: a timing hypothesis.

    Science.gov (United States)

    Cheron, Guy; Márquez-Ruiz, Javier; Kishino, Tatsuya; Dan, Bernard

    2014-01-01

    Angelman syndrome (AS) is a genetic neurodevelopmental disorder in which cerebellar functioning impairment has been documented despite the absence of gross structural abnormalities. Characteristically, a spontaneous 160 Hz oscillation emerges in the Purkinje cell network of the Ube3a (m-/p+) Angelman mouse model. This abnormal oscillation is induced by enhanced Purkinje cell rhythmicity and hypersynchrony along the parallel fiber beam. We present a pathophysiological hypothesis for the neurophysiology underlying major aspects of the clinical phenotype of AS, including cognitive, language and motor deficits, involving long-range connections between the cerebellar and the cortical networks. This hypothesis states that the alteration of cerebellar rhythmic activity impinges on cerebellar long-term depression (LTD) plasticity, which in turn alters the LTD plasticity in the cerebral cortex. This hypothesis was based on preliminary experiments using electrical stimulation of the whisker pad performed in alert mice, showing that after an 8 Hz LTD-inducing protocol, the cerebellar LTD accompanied by a delayed response in the wild-type (WT) mice is missing in Ube3a (m-/p+) mice and that the LTD induced in the barrel cortex following the same peripheral stimulation in WT mice is reversed into an LTP in the Ube3a (m-/p+) mice. The control exerted by the cerebellum on the excitation vs. inhibition balance in the cerebral cortex and the possible role played by the timing plasticity of the Purkinje cell LTD on the spike-timing dependent plasticity (STDP) of the pyramidal neurons are discussed in the context of the present hypothesis.

  4. Coalescent Simulation and Paleodistribution Modeling for Tabebuia rosealba Do Not Support South American Dry Forest Refugia Hypothesis

    Science.gov (United States)

    de Melo, Warita Alves; Lima-Ribeiro, Matheus S.; Terribile, Levi Carina

    2016-01-01

    Studies based on contemporary plant occurrences and pollen fossil records have proposed that the current disjunct distribution of seasonally dry tropical forests (SDTFs) across South America is the result of fragmentation of a formerly widespread and continuously distributed dry forest during the arid climatic conditions associated with the Last Glacial Maximum (LGM), which is known as the modern-day dry forest refugia hypothesis. We studied the demographic history of Tabebuia rosealba (Bignoniaceae) to understand the disjunct geographic distribution of South American SDTFs based on statistical phylogeography and ecological niche modeling (ENM). We specifically tested the dry forest refugia hypothesis; i.e., if the multiple and isolated patches of SDTFs are current climatic relicts of a widespread and continuously distributed dry forest during the LGM. We sampled 235 individuals across 18 populations in Central Brazil and analyzed the polymorphisms at chloroplast (trnS-trnG, psbA-trnH and ycf6-trnC intergenic spacers) and nuclear (ITS nrDNA) genomes. We performed coalescence simulations of alternative hypotheses under demographic expectations from two a priori biogeographic hypotheses (1. the Pleistocene Arc hypothesis and, 2. a range shift to Amazon Basin) and other two demographic expectances predicted by ENMs (3. expansion throughout the Neotropical South America, including Amazon Basin, and 4. retraction during the LGM). Phylogenetic analyses based on median-joining network showed haplotype sharing among populations with evidence of incomplete lineage sorting. Coalescent analyses showed smaller effective population sizes for T. roseoalba during the LGM compared to the present-day. Simulations and ENM also showed that its current spatial pattern of genetic diversity is most likely due to a scenario of range retraction during the LGM instead of the fragmentation from a once extensive and largely contiguous SDTF across South America, not supporting the South

  5. Coalescent Simulation and Paleodistribution Modeling for Tabebuia rosealba Do Not Support South American Dry Forest Refugia Hypothesis.

    Directory of Open Access Journals (Sweden)

    Warita Alves de Melo

    Full Text Available Studies based on contemporary plant occurrences and pollen fossil records have proposed that the current disjunct distribution of seasonally dry tropical forests (SDTFs across South America is the result of fragmentation of a formerly widespread and continuously distributed dry forest during the arid climatic conditions associated with the Last Glacial Maximum (LGM, which is known as the modern-day dry forest refugia hypothesis. We studied the demographic history of Tabebuia rosealba (Bignoniaceae to understand the disjunct geographic distribution of South American SDTFs based on statistical phylogeography and ecological niche modeling (ENM. We specifically tested the dry forest refugia hypothesis; i.e., if the multiple and isolated patches of SDTFs are current climatic relicts of a widespread and continuously distributed dry forest during the LGM. We sampled 235 individuals across 18 populations in Central Brazil and analyzed the polymorphisms at chloroplast (trnS-trnG, psbA-trnH and ycf6-trnC intergenic spacers and nuclear (ITS nrDNA genomes. We performed coalescence simulations of alternative hypotheses under demographic expectations from two a priori biogeographic hypotheses (1. the Pleistocene Arc hypothesis and, 2. a range shift to Amazon Basin and other two demographic expectances predicted by ENMs (3. expansion throughout the Neotropical South America, including Amazon Basin, and 4. retraction during the LGM. Phylogenetic analyses based on median-joining network showed haplotype sharing among populations with evidence of incomplete lineage sorting. Coalescent analyses showed smaller effective population sizes for T. roseoalba during the LGM compared to the present-day. Simulations and ENM also showed that its current spatial pattern of genetic diversity is most likely due to a scenario of range retraction during the LGM instead of the fragmentation from a once extensive and largely contiguous SDTF across South America, not supporting the

  6. The Ouroboros Model, selected facets.

    Science.gov (United States)

    Thomsen, Knud

    2011-01-01

    The Ouroboros Model features a biologically inspired cognitive architecture. At its core lies a self-referential recursive process with alternating phases of data acquisition and evaluation. Memory entries are organized in schemata. The activation at any one time of part of a schema biases the whole structure and, in particular, its missing features, thus triggering expectations. An iterative recursive monitor process termed 'consumption analysis' then checks how well such expectations fit with successive activations. Mismatches between anticipations based on previous experience and actual current data are highlighted and used for controlling the allocation of attention. A measure of the goodness of fit provides feedback as a (self-)monitoring signal. The basic algorithm works for goal-directed movements and memory search as well as during abstract reasoning. It is sketched how the Ouroboros Model can shed light on characteristics of human behavior including attention, emotions, priming, masking, learning, sleep and consciousness.

  7. Constant Latent Odds-Ratios Models and the Mantel-Haenszel Null Hypothesis

    Science.gov (United States)

    Hessen, David J.

    2005-01-01

    In the present paper, a new family of item response theory (IRT) models for dichotomous item scores is proposed. Two basic assumptions define the most general model of this family. The first assumption is local independence of the item scores given a unidimensional latent trait. The second assumption is that the odds-ratios for all item-pairs are…

  8. A neutral model as a null hypothesis test for river network sinuosity

    Science.gov (United States)

    Gaucherel, C.; Salomon, L.

    2014-06-01

    Neutral models (NMs) are built to test null hypotheses and to detect properties at work in an object or a system. While several studies in geomorphology have used NMs without explicitly mentioning them or describing how they were built, it must be recognized that neutral models more often concerned theoretical explorations that drove such use. In this paper, we propose a panel of NMs of river (channel) networks based on a well-established relationship between observed and simulated sinuosity properties. We first simulated new instances of river networks with a (one-parameter) neutral model based on optimal channel networks (OCN) and leading to homogeneous sinuosity watersheds. We then proposed a "less neutral" model able to generate a variety of river networks accounting for the spatial heterogeneity of observed properties such as elevation. These models, providing confidence levels, allowed us to certify that some properties played a role in the generation of the observed network. Finally, we demonstrated and illustrated both models on the Bidasoa watershed (Spain-France frontier), with new dedicated software (called SSM). NMs in geomorphology progressively help to identify the processes operating in an observed object and ultimately to improve our understanding of it (an intrinsic need). They also provide simulated samples statistically "similar" to an observed one, thus offering new alternatives to every process carried by the observed object (an extrinsic need). Artificial river networks studied here would be of great value to environmental sciences studying geomorphology and freshwater-related processes.

  9. Random Effect and Latent Variable Model Selection

    CERN Document Server

    Dunson, David B

    2008-01-01

    Presents various methods for accommodating model uncertainty in random effects and latent variable models. This book focuses on frequentist likelihood ratio and score tests for zero variance components. It also focuses on Bayesian methods for random effects selection in linear mixed effects and generalized linear mixed models

  10. Statistical resolution limit for the multidimensional harmonic retrieval model: hypothesis test and Cramér-Rao Bound approaches

    Directory of Open Access Journals (Sweden)

    El Korso Mohammed

    2011-01-01

    Full Text Available The statistical resolution limit (SRL), which is defined as the minimal separation between parameters that allows correct resolvability, is an important statistical tool to quantify the ultimate performance of parametric estimation problems. In this article, we derive the SRL for the so-called multidimensional harmonic retrieval model using a generalization of the previously introduced SRL concepts that we call the multidimensional SRL (MSRL). We first derive the MSRL using a hypothesis test approach. This statistical test is shown to be asymptotically a uniformly most powerful test, which is the strongest optimality statement that one could expect to obtain. Second, we link the proposed asymptotic MSRL based on the hypothesis test approach to a new extension of the SRL based on the Cramér-Rao Bound approach. Thus, a closed-form expression of the asymptotic MSRL is given and analyzed in the framework of the multidimensional harmonic retrieval model. In particular, it is proved that the optimal MSRL is obtained for equi-powered sources and/or an equi-distributed number of sensors on each multi-way array.

  11. Probabilistic biomechanical finite element simulations: whole-model classical hypothesis testing based on upcrossing geometry

    Directory of Open Access Journals (Sweden)

    Todd C. Pataky

    2016-11-01

    Full Text Available Statistical analyses of biomechanical finite element (FE simulations are frequently conducted on scalar metrics extracted from anatomically homologous regions, like maximum von Mises stresses from demarcated bone areas. The advantages of this approach are numerical tabulability and statistical simplicity, but disadvantages include region demarcation subjectivity, spatial resolution reduction, and results interpretation complexity when attempting to mentally map tabulated results to original anatomy. This study proposes a method which abandons the two aforementioned advantages to overcome these three limitations. The method is inspired by parametric random field theory (RFT, but instead uses a non-parametric analogue to RFT which permits flexible model-wide statistical analyses through non-parametrically constructed probability densities regarding volumetric upcrossing geometry. We illustrate method fundamentals using basic 1D and 2D models, then use a public model of hip cartilage compression to highlight how the concepts can extend to practical biomechanical modeling. The ultimate whole-volume results are easy to interpret, and for constant model geometry the method is simple to implement. Moreover, our analyses demonstrate that the method can yield biomechanical insights which are difficult to infer from single simulations or tabulated multi-simulation results. Generalizability to non-constant geometry including subject-specific anatomy is discussed.

  12. Exact Hypothesis Tests for Log-linear Models with exactLoglinTest

    Directory of Open Access Journals (Sweden)

    Brian Caffo

    2006-11-01

    Full Text Available This manuscript overviews exact testing of goodness of fit for log-linear models using the R package exactLoglinTest. This package evaluates model fit for Poisson log-linear models by conditioning on minimal sufficient statistics to remove nuisance parameters. A Monte Carlo algorithm is proposed to estimate P values from the resulting conditional distribution. In particular, this package implements a sequentially rounded normal approximation and importance sampling to approximate probabilities from the conditional distribution. Usually, this results in a high percentage of valid samples. However, in instances where this is not the case, a Metropolis Hastings algorithm can be implemented that makes more localized jumps within the reference set. The manuscript details how some conditional tests for binomial logit models can also be viewed as conditional Poisson log-linear models and hence can be performed via exactLoglinTest. A diverse battery of examples is considered to highlight use, features and extensions of the software. Notably, potential extensions to evaluating disclosure risk are also considered.

  13. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, M.; Baker, N.A.; Duguid, J.O. [INTERA, Inc., Las Vegas, NV (United States)

    1994-04-04

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  14. An Emerging Role for Numerical Modelling in Wildfire Behavior Research: Explorations, Explanations, and Hypothesis Development

    Science.gov (United States)

    Linn, R.; Winterkamp, J.; Canfield, J.; Sauer, J.; Dupuy, J. L.; Finney, M.; Hoffman, C.; Parsons, R.; Pimont, F.; Sieg, C.; Forthofer, J.

    2014-12-01

    The human capacity for altering the water cycle has been well documented and given the expected change due to population, income growth, biofuels, climate, and associated land use change, there remains great uncertainty in both the degree of increased pressure on land and water resources and in our ability to adapt to these changes. Alleviating regional shortages in water supply can be carried out in a spatial hierarchy through i) direct trade of water between all regions, ii) development of infrastructure to improve water availability within regions (e.g. impounding rivers), iii) via inter-basin hydrological transfer between neighboring regions and, iv) via virtual water trade. These adaptation strategies can be managed via market trade in water and commodities to identify those strategies most likely to be adopted. This work combines the physically-based University of New Hampshire Water Balance Model (WBM) with the macro-scale Purdue University Simplified International Model of agricultural Prices Land use and the Environment (SIMPLE) to explore the interaction of supply and demand for fresh water globally. In this work we use a newly developed grid cell-based version of SIMPLE to achieve a more direct connection between the two modeling paradigms of physically-based models with optimization-driven approaches characteristic of economic models. We explore questions related to the global and regional impact of water scarcity and water surplus on the ability of regions to adapt to future change. Allowing for a variety of adaptation strategies such as direct trade of water and expanding the built water infrastructure, as well as indirect trade in commodities, will reduce overall global water stress and, in some regions, significantly reduce their vulnerability to these future changes.

  15. A default Bayesian hypothesis test for ANOVA designs

    NARCIS (Netherlands)

    Wetzels, R.; Grasman, R.P.P.P.; Wagenmakers, E.J.

    2012-01-01

    This article presents a Bayesian hypothesis test for analysis of variance (ANOVA) designs. The test is an application of standard Bayesian methods for variable selection in regression models. We illustrate the effect of various g-priors on the ANOVA hypothesis test. The Bayesian test for ANOVA desig
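
    A rough, self-contained illustration of Bayes-factor model comparison for a one-way design is sketched below in Python (it relies on the coarse BIC approximation and simulated data, not the g-prior tests the abstract develops): a single-mean null model is compared with a separate-means alternative.

        import numpy as np

        rng = np.random.default_rng(4)
        groups = [rng.normal(loc=m, scale=1.0, size=20) for m in (0.0, 0.2, 0.8)]   # assumed group means
        y = np.concatenate(groups)
        n, k = y.size, len(groups)

        def gaussian_loglik(y, mu):
            resid = y - mu
            sigma2 = resid @ resid / y.size
            return -0.5 * y.size * (np.log(2 * np.pi * sigma2) + 1)

        loglik_null = gaussian_loglik(y, y.mean())                           # one common mean
        mu_alt = np.concatenate([np.full(g.size, g.mean()) for g in groups])
        loglik_alt = gaussian_loglik(y, mu_alt)                              # one mean per group

        bic_null = 2 * np.log(n) - 2 * loglik_null         # parameters: mean and variance
        bic_alt = (k + 1) * np.log(n) - 2 * loglik_alt     # parameters: k means and variance
        bayes_factor_10 = np.exp(0.5 * (bic_null - bic_alt))
        print("approximate Bayes factor (group-means model over the null):", bayes_factor_10)

    A g-prior treatment such as the one in the article replaces this crude approximation with an explicit prior on the standardized effects, but the interpretation of the resulting Bayes factor is the same.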

  16. An introduction to Bayesian hypothesis testing for management research

    NARCIS (Netherlands)

    Andraszewicz, S.; Scheibehenne, B.; Rieskamp, J.; Grasman, R.; Verhagen, J.; Wagenmakers, E.-J.

    2015-01-01

    In management research, empirical data are often analyzed using p-value null hypothesis significance testing (pNHST). Here we outline the conceptual and practical advantages of an alternative analysis method: Bayesian hypothesis testing and model selection using the Bayes factor. In contrast to

  17. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built....... The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method...
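
    A stripped-down version of the genetic search can be sketched as follows (a Python toy with synthetic data and an ad hoc nearest-centroid fitness standing in for the bimodal features and affect models of the study): binary masks over the features are evolved by selection, one-point crossover and bit-flip mutation.

        import numpy as np

        rng = np.random.default_rng(5)
        n, d = 200, 20
        X = rng.normal(size=(n, d))
        y = (X[:, 0] + 0.8 * X[:, 3] + 0.1 * rng.normal(size=n) > 0).astype(int)   # two informative features

        def fitness(mask):
            if not mask.any():
                return 0.0
            Xm = X[:, mask.astype(bool)]
            c0, c1 = Xm[y == 0].mean(axis=0), Xm[y == 1].mean(axis=0)
            predictions = (np.linalg.norm(Xm - c1, axis=1) < np.linalg.norm(Xm - c0, axis=1)).astype(int)
            return (predictions == y).mean()          # accuracy of a nearest-centroid rule

        population = rng.integers(0, 2, size=(30, d))
        for generation in range(40):
            scores = np.array([fitness(individual) for individual in population])
            parents = population[np.argsort(scores)[-10:]]           # keep the ten fittest masks
            children = []
            for _ in range(len(population)):
                a, b = parents[rng.integers(0, len(parents), size=2)]
                cut = rng.integers(1, d)                             # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                flips = rng.random(d) < 0.05                         # bit-flip mutation
                children.append(np.where(flips, 1 - child, child))
            population = np.array(children)

        best = max(population, key=fitness)
        print("selected feature indices:", np.flatnonzero(best))

    In the study itself, fitness corresponds to the accuracy of the affect models built on the selected features, and the comparison baselines are sequential forward selection and random search.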

  18. The sepsis model: an emerging hypothesis for the lethality of inhalation anthrax.

    Science.gov (United States)

    Coggeshall, Kenneth Mark; Lupu, Florea; Ballard, Jimmy; Metcalf, Jordan P; James, Judith A; Farris, Darise; Kurosawa, Shinichiro

    2013-07-01

    Inhalation anthrax is often described as a toxin-mediated disease. However, the toxaemia model does not account for the high mortality of inhalation anthrax relative to other forms of the disease or for the pathology present in inhalation anthrax. Patients with inhalation anthrax consistently show extreme bacteraemia and, in contrast to animals challenged with toxin, signs of sepsis. Rather than toxaemia, we propose that death in inhalation anthrax results from an overwhelming bacteraemia that leads to severe sepsis. According to our model, the central role of anthrax toxin is to permit the vegetative bacteria to escape immune detection. Other forms of B. anthracis infection have lower mortality because their overt symptoms early in the course of disease cause patients to seek medical care at a time when the infection and its sequelae can still be reversed by antibiotics. Thus, the sepsis model explains key features of inhalation anthrax and may offer a more complete understanding of disease pathology for researchers as well as those involved in the care of patients.

  19. The sepsis model: an emerging hypothesis for the lethality of inhalation anthrax

    Science.gov (United States)

    Coggeshall, Kenneth Mark; Lupu, Florea; Ballard, Jimmy; Metcalf, Jordan P; James, Judith A; Farris, Darise; Kurosawa, Shinichiro

    2013-01-01

    Inhalation anthrax is often described as a toxin-mediated disease. However, the toxaemia model does not account for the high mortality of inhalation anthrax relative to other forms of the disease or for the pathology present in inhalation anthrax. Patients with inhalation anthrax consistently show extreme bacteraemia and, in contrast to animals challenged with toxin, signs of sepsis. Rather than toxaemia, we propose that death in inhalation anthrax results from an overwhelming bacteraemia that leads to severe sepsis. According to our model, the central role of anthrax toxin is to permit the vegetative bacteria to escape immune detection. Other forms of B. anthracis infection have lower mortality because their overt symptoms early in the course of disease cause patients to seek medical care at a time when the infection and its sequelae can still be reversed by antibiotics. Thus, the sepsis model explains key features of inhalation anthrax and may offer a more complete understanding of disease pathology for researchers as well as those involved in the care of patients. PMID:23742651

  20. An experimental and modelling exploration of the host-sanction hypothesis in legume-rhizobia mutualism.

    Science.gov (United States)

    Marco, Diana E; Carbajal, Juan P; Cannas, Sergio; Pérez-Arnedo, Rebeca; Hidalgo-Perea, Angeles; Olivares, José; Ruiz-Sainz, José E; Sanjuán, Juan

    2009-08-07

    Despite the importance of mutualism as a key ecological process, its persistence in nature is difficult to explain since the existence of exploitative, "cheating" partners that could erode the interaction is common. By analogy with the proposed policing strategy stabilizing intraspecific cooperation, host sanctions against non-N(2) fixing, cheating symbionts have been proposed as a force stabilizing mutualism in legume-Rhizobium symbiosis. Following this proposal, penalizations would include decreased nodular rhizobial viability and/or early nodule senescence in nodules occupied by cheating rhizobia. In this work, we analyse the stability of Rhizobium-legume symbiosis when non-fixing, cheating strains are present, using an experimental and modelling approach. We used split-root experiments with soybean plants inoculated with two rhizobial strains, a cooperative, normal N(2) fixing strain and an isogenic non-fixing, "perfect" cheating mutant derivative that lacks nitrogenase activity but has the same nodulation abilities inoculated to split-root plants. We found no experimental evidence of functioning plant host sanctions to cheater rhizobia based on nodular rhizobia viability and nodule senescence and maturity molecular markers. Based on these experiments, we developed a population dynamic model with and without the inclusion of plant host sanctions. We show that plant populations persist in spite of the presence of cheating rhizobia without the need of incorporating any sanction against the cheater populations in the model, under the realistic assumption that plants can at least get some amount of fixed N(2) from the effectively mutualistic rhizobia occupying some nodules. Inclusion of plant sanctions leads to the unrealistic effect of ultimate extinction of cheater strains in soil. Our simulation results are in agreement with increasing experimental evidence and theoretical work showing that mutualisms can persist in presence of cheating partners.

  1. Early animal models of rickets and proof of a nutritional deficiency hypothesis.

    Science.gov (United States)

    Chesney, Russell W

    2012-03-01

    In the period between 1880 and 1930, the role of nutrition and nutritional deficiency as a cause of rickets was established based upon the results from 6 animal models of rickets. This greatly prevalent condition (60%-90% in some locales) in children of the industrialized world was an important clinical research topic. What had to be reconciled was that rickets was associated with infections, crowding, and living in northern latitudes, and cod liver oil was observed to prevent or cure the disease. Several brilliant insights opened up a new pathway to discovery using animal models of rickets. Studies in lion cubs, dogs, and rats showed the importance of cod liver oil and an antirachitic substance later termed vitamin D. They showed that fats in the diet were required, that vitamin D had a secosteroid structure and was different from vitamin A, and that ultraviolet irradiation could prevent or cure rickets. Several of these experiments had elements of serendipity in that certain dietary components and the presence or absence of sunshine or ultraviolet irradiation could critically change the course of rickets. Nonetheless, at the end of these studies, a nutritional deficiency of vitamin D resulting from a poor diet or lack of adequate sunshine was firmly established as a cause of rickets.

  2. The Metadistrict as the Territorial Strategy: From Set Theory and a Matrix Organization Model Hypothesis

    Directory of Open Access Journals (Sweden)

    Francesco Contò

    2012-06-01

    Full Text Available The purpose of this proposal is to explore a new concept of 'Metadistrict' to be applied in a region of Southern Italy – Apulia ‐ in order to analyze the impact that the activation of a special network between different sector chains and several integrated projects may have for revitalizing the local economy; an important role is assigned to the network of relationships and so to the social capital. The Metadistrict model stems from the Local Action Groups and the Integrated Projects of Food Chain frameworks. It may represent a crucial driver of the rural economy through the realization of sector circuits connected to the concept of multi‐functionality in agriculture, that is Network of the Territorial Multi‐functionality. It was formalized by making use of a set of theories and of a Matrix Organization Model. The adoption of the Metadistrict perspective as the territorial strategy may play a key role to revitalize the primary sector, through the increase of economic and productive opportunities due to the implementation of a common and shared strategy and organization.

  3. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also...... tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...
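
    The permutation logic behind Parallel Analysis carries over to the kernel setting roughly as in the Python sketch below (a simplification with an assumed, fixed kernel scale; the kPA procedure additionally tunes that scale): leading centred kernel eigenvalues are retained while they exceed those of column-permuted surrogate data.

        import numpy as np

        rng = np.random.default_rng(6)
        n, d = 150, 10
        latent = rng.normal(size=(n, 2)) @ rng.normal(size=(2, d))     # two genuine components
        X = latent + 0.3 * rng.normal(size=(n, d))

        def centred_kernel_eigenvalues(data, scale=3.0):
            m = len(data)
            sq_dists = ((data[:, None, :] - data[None, :, :]) ** 2).sum(axis=-1)
            K = np.exp(-sq_dists / (2.0 * scale ** 2))
            H = np.eye(m) - np.ones((m, m)) / m                        # centring matrix
            return np.sort(np.linalg.eigvalsh(H @ K @ H))[::-1]

        observed = centred_kernel_eigenvalues(X)
        surrogates = np.array([
            centred_kernel_eigenvalues(np.column_stack([rng.permutation(col) for col in X.T]))
            for _ in range(20)
        ])
        threshold = np.percentile(surrogates, 95, axis=0)              # rank-wise permutation threshold
        model_order = int(np.argmin(observed > threshold))             # leading eigenvalues above it
        print("retained model order:", model_order)

    Permuting each input dimension independently destroys the dependence structure while preserving the marginals, so eigenvalues that survive the comparison are attributed to signal rather than noise.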

  4. Melody Track Selection Using Discriminative Language Model

    Science.gov (United States)

    Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong

    In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using background model and posterior probability criteria to make modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.
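
    The scoring idea can be sketched with a toy bigram model (the Python example below uses invented pitch sequences and add-one smoothing; the letter's models are trained on real MIDI corpora and made discriminative with a background model).

        import math
        from collections import Counter

        # Tiny training set of melody-like pitch sequences (MIDI note numbers, assumed example).
        melodies = [
            [60, 62, 64, 65, 67, 65, 64, 62, 60],
            [67, 65, 64, 62, 60, 62, 64, 65, 67],
        ]
        bigrams = Counter((a, b) for seq in melodies for a, b in zip(seq, seq[1:]))
        unigrams = Counter(p for seq in melodies for p in seq)
        vocabulary_size = len(unigrams)

        def melodic_degree(track):
            """Average add-one-smoothed bigram log-probability per transition."""
            log_prob = 0.0
            for a, b in zip(track, track[1:]):
                log_prob += math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocabulary_size))
            return log_prob / (len(track) - 1)

        candidates = {
            "melody-like track": [60, 62, 64, 65, 67, 69, 67, 65],
            "accompaniment-like track": [36, 43, 36, 43, 36, 43, 36, 43],
        }
        for name, track in candidates.items():
            print(f"{name}: {melodic_degree(track):.3f}")

    The track with the highest melodic degree would be selected; replacing the raw score by its ratio against a background model trained on non-melody tracks roughly corresponds to the discriminative step described in the letter.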

  5. Modeling the minimal newborn's intersubjective mind: the visuotopic-somatotopic alignment hypothesis in the superior colliculus.

    Directory of Open Access Journals (Sweden)

    Alexandre Pitti

    Full Text Available The question of whether newborns possess inborn social skills is a long-standing debate in developmental psychology. Fetal behavioral and anatomical observations show evidence for the control of eye movements and facial behaviors during the third trimester of pregnancy, whereas specific sub-cortical areas, like the superior colliculus (SC) and the striatum, appear to be functionally mature enough to support these behaviors. These observations suggest that the newborn is potentially mature for developing minimal social skills. In this manuscript, we propose that the mechanism of sensory alignment observed in the SC is particularly important for enabling the social skills observed at birth, such as facial preference and facial mimicry. In a computational simulation of the maturing superior colliculus connected to the simulated facial tissue of a fetus, we model how incoming tactile information is used to direct visual attention toward faces. We suggest that the unisensory superficial visual layer (eye-centered) and the deep somatotopic layer (face-centered) in the SC are combined into an intermediate layer for visuo-tactile integration, and that multimodal alignment in this third layer allows newborns to be sensitive to the configuration of eyes and mouth. We show that the visual and tactile maps align through a Hebbian learning stage and strengthen their synaptic links to the intermediate layer. As a result, the global network produces emergent properties such as sensitivity to the spatial configuration of face-like patterns and the detection of eye and mouth movement.

  6. Quorum-Sensing in CD4(+) T Cell Homeostasis: A Hypothesis and a Model.

    Science.gov (United States)

    Almeida, Afonso R M; Amado, Inês F; Reynolds, Joseph; Berges, Julien; Lythe, Grant; Molina-París, Carmen; Freitas, Antonio A

    2012-01-01

    Homeostasis of lymphocyte numbers is believed to be due to competition between cellular populations for a common niche of restricted size, defined by the combination of interactions and trophic factors required for cell survival. Here we propose a new mechanism: homeostasis of lymphocyte numbers could also be achieved by the ability of lymphocytes to perceive the density of their own populations. Such a mechanism would be reminiscent of the primordial quorum-sensing systems used by bacteria, in which some bacteria sense the accumulation of bacterial metabolites secreted by other elements of the population, allowing them to "count" the number of cells present and adapt their growth accordingly. We propose that homeostasis of CD4(+) T cell numbers may occur via a quorum-sensing-like mechanism, where IL-2 is produced by activated CD4(+) T cells and sensed by a population of CD4(+) Treg cells that expresses the high-affinity IL-2Rα-chain and can regulate the number of activated IL-2-producing CD4(+) T cells and the total CD4(+) T cell population. In other words, CD4(+) T cell populations can restrain their growth by monitoring the number of activated cells, thus preventing uncontrolled lymphocyte proliferation during immune responses. We hypothesize that malfunction of this quorum-sensing mechanism may lead to uncontrolled T cell activation and autoimmunity. Finally, we present a mathematical model that describes the key role of IL-2 and quorum-sensing mechanisms in CD4(+) T cell homeostasis during an immune response.
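
    The verbal mechanism lends itself to a toy dynamical sketch (every equation and parameter value in the Python snippet below is an assumption made for illustration, not the authors' mathematical model): activated cells secrete IL-2, regulatory cells expand when IL-2 accumulates, and their suppression caps the activated pool.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, state):
            T_act, T_reg, il2 = state
            dT_act = 0.6 * T_act - 0.1 * T_act - 0.5 * T_reg * T_act   # expansion, death, suppression
            dT_reg = 0.4 * il2 * T_reg - 0.2 * T_reg                   # IL-2-driven Treg expansion, death
            dil2 = 1.0 * T_act - 1.0 * il2 - 0.5 * T_reg * il2         # secretion, decay, consumption
            return [dT_act, dT_reg, dil2]

        solution = solve_ivp(rhs, t_span=(0.0, 80.0), y0=[0.01, 0.01, 0.0], max_step=0.1)
        activated = solution.y[0]
        print(f"activated pool: initial {activated[0]:.3f}, peak {activated.max():.2f}, final {activated[-1]:.2f}")

    In this caricature the activated pool overshoots and is then pulled back as the IL-2-sensing regulatory population catches up, which is the qualitative quorum-sensing behaviour the hypothesis describes.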

  7. Testing the Early Mars H2-CO2 Greenhouse Hypothesis with a 1-D Photochemical Model

    CERN Document Server

    Batalha, Natasha; Ramirez, Ramses; Kasting, James

    2015-01-01

    A recent study by Ramirez et al. (2014) demonstrated that an atmosphere with 1.3-4 bar of CO2 and H2O, in addition to 5-20% H2, could have raised the mean annual and global surface temperature of early Mars above the freezing point of water. Such warm temperatures appear necessary to generate the rainfall (or snowfall) amounts required to carve the ancient martian valleys. Here, we use our best estimates for early martian outgassing rates, along with a 1-D photochemical model, to assess the conversion efficiency of CO, CH4, and H2S to CO2, SO2, and H2. Our outgassing estimates assume that Mars was actively recycling volatiles between its crust and interior, as Earth does today. H2 production from serpentinization and deposition of banded iron-formations is also considered. Under these assumptions, maintaining an H2 concentration of ~1-2% by volume is achievable, but reaching 5% H2 requires additional H2 sources or a slowing of the hydrogen escape rate below the diffusion limit. If the early martian atmosphere...

  8. Quorum sensing in CD4+ T cell homeostasis: a hypothesis and a model.

    Directory of Open Access Journals (Sweden)

    Afonso R.M. Almeida

    2012-05-01

    Full Text Available Homeostasis of lymphocyte numbers is believed to be due to competition between cellular populations for a common niche of restricted size, defined by the combination of interactions and trophic factors required for cell survival. Here we propose a new mechanism: homeostasis of lymphocyte numbers could also be achieved by the ability of lymphocytes to perceive the density of their own populations. Such a mechanism would be reminiscent of the primordial quorum sensing systems used by bacteria, in which some bacteria sense the accumulation of bacterial metabolites secreted by other elements of the population, allowing them to count the number of cells present and adapt their growth accordingly. We propose that homeostasis of CD4+ T cell numbers may occur via a quorum-sensing-like mechanism, where IL-2 is produced by activated CD4+ T cells and sensed by a population of CD4+ Treg cells that expresses the high-affinity IL-2Rα-chain and can regulate the number of activated IL-2-producing CD4+ T cells and the total CD4+T cell population. In other words, CD4+ T cell populations can restrain their growth by monitoring the number of activated cells, thus preventing uncontrolled lymphocyte proliferation during immune responses. We hypothesize that malfunction of this quorum-sensing mechanism may lead to uncontrolled T cell activation and autoimmunity. Finally, we present a mathematical model that describes the role of IL-2 and quorum-sensing mechanisms in CD4+ T cell homeostasis during an immune response.

  9. Variable number of tandem repeat polymorphisms of DRD4: re-evaluation of selection hypothesis and analysis of association with schizophrenia

    Science.gov (United States)

    Hattori, Eiji; Nakajima, Mizuho; Yamada, Kazuo; Iwayama, Yoshimi; Toyota, Tomoko; Saitou, Naruya; Yoshikawa, Takeo

    2009-01-01

    Associations have been reported between the variable number of tandem repeat (VNTR) polymorphisms in the exon 3 of dopamine D4 receptor gene gene and multiple psychiatric illnesses/traits. We examined the distribution of VNTR alleles of different length in a Japanese cohort and found that, as reported earlier, the size of allele ‘7R' was much rarer (0.5%) in Japanese than in Caucasian populations (∼20%). This presents a challenge to an earlier proposed hypothesis that positive selection favoring the allele 7R has contributed to its high frequency. To further address the issue of selection, we carried out sequencing of the VNTR region not only from human but also from chimpanzee samples, and made inference on the ancestral repeat motif and haplotype by use of a phylogenetic analysis program. The most common 4R variant was considered to be the ancestral haplotype as earlier proposed. However, in a gene tree of VNTR constructed on the basis of this inferred ancestral haplotype, the allele 7R had five descendent haplotypes in relatively long lineage, where genetic drift can have major influence. We also tested this length polymorphism for association with schizophrenia, studying two Japanese sample sets (one with 570 cases and 570 controls, and the other with 124 pedigrees). No evidence of association between the allele 7R and schizophrenia was found in any of the two data sets. Collectively, this study suggests that the VNTR variation does not have an effect large enough to cause either selection or a detectable association with schizophrenia in a study of samples of moderate size. PMID:19092778

  10. Expert System Model for Educational Personnel Selection

    Directory of Open Access Journals (Sweden)

    Héctor A. Tabares-Ospina

    2013-06-01

    Full Text Available Staff selection is a difficult task because of the subjectivity involved in the evaluation. The process can be complemented with a decision-support system. This paper presents the implementation of an expert system to systematize the selection of professors. The management of the software development is divided into 4 parts: requirements, design, implementation and commissioning. The proposed system models specific knowledge through relationships between evidence variables and objectives.

  11. Sex ratios in the most-selective elite US undergraduate colleges and universities are consistent with the hypothesis that modern educational systems increasingly select for conscientious personality compared with intelligence.

    Science.gov (United States)

    Charlton, Bruce G

    2009-08-01

    The main predictors of examination results and educational achievement in modern societies are intelligence (IQ - or general factor 'g' intelligence) and the personality trait termed 'Conscientiousness' (C). I have previously argued that increased use of continuous assessment (e.g. course work rather than timed and supervised examinations) and increased duration of the educational process implies that modern educational systems have become increasingly selective for the personality trait of Conscientiousness and consequently less selective for IQ. I have tested this prediction (in a preliminary fashion) by looking at the sex ratios in the most selective elite US universities. My two main assumptions are: (1) that a greater proportion of individuals with very high intelligence are men than women, and (2) that women are more conscientious than men. To estimate the proportion of men and women expected at highly-selective schools, I performed demonstration calculations based on three plausible estimates of male and female IQ averages and standard deviations. The expected percentage of men at elite undergraduate colleges (selecting students with IQ above 130 - i.e. in the top 2% of the population) were 66%, 61% and 74%. When these estimates were compared with the sex ratios at 33 elite colleges and universities, only two technical institutes had more than 60% men. Elite US colleges and universities therefore seem to be selecting primarily on the basis of something other than IQ - probably conscientiousness. There is a 'missing population' of very high IQ men who are not being admitted to the most selective and prestigious undergraduate schools, probably because their high school educational qualifications and evaluations are too low. This analysis is therefore consistent with the hypothesis that modern educational systems tend to select more strongly for Conscientiousness than for IQ. The implication is that modern undergraduates at the most-selective US schools are not
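
    The demonstration calculations described are tail-area arithmetic and can be reproduced as below; the means and standard deviations in this Python sketch are hypothetical placeholders, not the paper's actual inputs, and equal numbers of men and women are assumed.

        from scipy import stats

        threshold = 130.0                        # selection cut-off discussed in the abstract
        male_mean, male_sd = 100.0, 15.5         # hypothetical values for illustration
        female_mean, female_sd = 100.0, 14.5     # hypothetical values for illustration

        # Share of men among all people above the cut-off, assuming equal-sized male and female populations.
        p_male = stats.norm.sf(threshold, loc=male_mean, scale=male_sd)
        p_female = stats.norm.sf(threshold, loc=female_mean, scale=female_sd)
        expected_male_share = p_male / (p_male + p_female)
        print(f"expected share of men above the cut-off: {expected_male_share:.1%}")

    Because the comparison happens in the far tail, even a modest difference in spread translates into a clearly male-skewed expected ratio, which is why the much lower percentages observed at most elite schools are taken as evidence of selection on something other than IQ.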

  12. Bayesian variable selection for latent class models.

    Science.gov (United States)

    Ghosh, Joyee; Herring, Amy H; Siega-Riz, Anna Maria

    2011-09-01

    In this article, we develop a latent class model with class probabilities that depend on subject-specific covariates. One of our major goals is to identify important predictors of latent classes. We consider methodology that allows estimation of latent classes while allowing for variable selection uncertainty. We propose a Bayesian variable selection approach and implement a stochastic search Gibbs sampler for posterior computation to obtain model-averaged estimates of quantities of interest such as marginal inclusion probabilities of predictors. Our methods are illustrated through simulation studies and application to data on weight gain during pregnancy, where it is of interest to identify important predictors of latent weight gain classes.

  13. Alzheimer's disease: the amyloid hypothesis and the Inverse Warburg effect

    KAUST Repository

    Demetrius, Lloyd A.

    2015-01-14

    Epidemiological and biochemical studies show that the sporadic forms of Alzheimer's disease (AD) are characterized by the following hallmarks: (a) An exponential increase with age; (b) Selective neuronal vulnerability; (c) Inverse cancer comorbidity. The present article appeals to these hallmarks to evaluate and contrast two competing models of AD: the amyloid hypothesis (a neuron-centric mechanism) and the Inverse Warburg hypothesis (a neuron-astrocytic mechanism). We show that these three hallmarks of AD conflict with the amyloid hypothesis, but are consistent with the Inverse Warburg hypothesis, a bioenergetic model which postulates that AD is the result of a cascade of three events—mitochondrial dysregulation, metabolic reprogramming (the Inverse Warburg effect), and natural selection. We also provide an explanation for the failures of the clinical trials based on amyloid immunization, and we propose a new class of therapeutic strategies consistent with the neuroenergetic selection model.

  14. MODEL SELECTION FOR LOG-LINEAR MODELS OF CONTINGENCY TABLES

    Institute of Scientific and Technical Information of China (English)

    ZHAO Lincheng; ZHANG Hong

    2003-01-01

    In this paper, we propose an information-theoretic-criterion-based model selection procedure for log-linear model of contingency tables under multinomial sampling, and establish the strong consistency of the method under some mild conditions. An exponential bound of miss detection probability is also obtained. The selection procedure is modified so that it can be used in practice. Simulation shows that the modified method is valid. To avoid selecting the penalty coefficient in the information criteria, an alternative selection procedure is given.
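
    To make the idea of information-criterion-based selection for a contingency table concrete, the toy sketch below compares the independence log-linear model with the saturated model via BIC. The table, the use of plain BIC rather than the paper's modified criterion, and the penalty choice are all assumptions for illustration.

```python
# Toy illustration (not the paper's procedure): compare the independence
# log-linear model with the saturated model for a 2x3 table via BIC.
import numpy as np

counts = np.array([[30, 20, 10],
                   [15, 25, 40]], dtype=float)   # hypothetical cell counts
n = counts.sum()

# Multinomial log-likelihoods up to the same additive constant (the
# multinomial coefficient), which cancels in the comparison.
p_sat = counts / n                                # saturated: fitted = observed proportions
ll_sat = np.sum(counts * np.log(p_sat))

p_row = counts.sum(axis=1) / n                    # independence: p_ij = p_i. * p_.j
p_col = counts.sum(axis=0) / n
p_ind = np.outer(p_row, p_col)
ll_ind = np.sum(counts * np.log(p_ind))

I, J = counts.shape
k_sat = I * J - 1                                 # free multinomial parameters
k_ind = (I - 1) + (J - 1)

bic = lambda ll, k: -2.0 * ll + k * np.log(n)
print("BIC saturated:   ", bic(ll_sat, k_sat))
print("BIC independence:", bic(ll_ind, k_ind))    # smaller BIC is preferred
```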

  15. Adverse selection model regarding tobacco consumption

    Directory of Open Access Journals (Sweden)

    Dumitru MARIN

    2006-01-01

    Full Text Available The impact of introducing a tax on tobacco consumption can be studied through an adverse selection model. The objective of the model presented in the following is to characterize the optimal contractual relationship between the governmental authorities and the two types of employees, smokers and non-smokers, taking into account that the consumers’ decision to smoke or not represents an element of risk and uncertainty. Two scenarios are run using the General Algebraic Modeling Systems software: one without taxes set on tobacco consumption and another one with taxes set on tobacco consumption, based on an adverse selection model described previously. The results of the two scenarios are compared at the end of the paper: the wage earnings levels and the social welfare in case of a smoking agent and in case of a non-smoking agent.

  16. Adaptive Covariance Estimation with model selection

    CERN Document Server

    Biscay, Rolando; Loubes, Jean-Michel

    2012-01-01

    We provide in this paper a fully adaptive penalized procedure to select a covariance among a collection of models, observing i.i.d. replications of the process at fixed observation points. For this we generalize previous results of Bigot et al. and propose to use a data-driven penalty to obtain an oracle inequality for the estimator. We prove that this method is an extension to the matricial regression model of the work by Baraud.

  17. A Theoretical Model for Selective Exposure Research.

    Science.gov (United States)

    Roloff, Michael E.; Noland, Mark

    This study tests the basic assumptions underlying Fishbein's Model of Attitudes by correlating an individual's selective exposure to types of television programs (situation comedies, family drama, and action/adventure) with the attitudinal similarity between individual attitudes and attitudes characterized on the programs. Twenty-three college…

  18. Efficiency of model selection criteria in flood frequency analysis

    Science.gov (United States)

    Calenda, G.; Volpi, E.

    2009-04-01

    The estimation of high flood quantiles requires the extrapolation of the probability distributions far beyond the usual sample length, involving high estimation uncertainties. The choice of the probability law, traditionally based on the hypothesis testing, is critical to this point. In this study the efficiency of different model selection criteria, seldom applied in flood frequency analysis, is investigated. The efficiency of each criterion in identifying the probability distribution of the hydrological extremes is evaluated by numerical simulations for different parent distributions, coefficients of variation and skewness, and sample sizes. The compared model selection procedures are the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the Anderson Darling Criterion (ADC) recently discussed by Di Baldassarre et al. (2008) and Sample Quantile Criterion (SQC), recently proposed by the authors (Calenda et al., 2009). The SQC is based on the principle of maximising the probability density of the elements of the sample that are considered relevant to the problem, and takes into account both the accuracy and the uncertainty of the estimate. Since the stress is mainly on extreme events, the SQC involves upper-tail probabilities, where the effect of the model assumption is more critical. The proposed index is equal to the sum of logarithms of the inverse of the sample probability density of the observed quantiles. The definition of this index is based on the principle that the more centred is the sample value in respect to its density distribution (accuracy of the estimate) and the less spread is this distribution (uncertainty of the estimate), the greater is the probability density of the sample quantile. Thus, lower values of the index indicate a better performance of the distribution law. This criterion can operate the selection of the optimum distribution among competing probability models that are estimated using different samples. The
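
    A minimal sketch of an SQC-style index is given below. The candidate distributions and the definition of the "relevant" upper-tail observations (here, the top 10% of the sample) are simplifying assumptions for illustration, not the authors' exact procedure.

```python
# Minimal sketch of an SQC-style index: the sum of log(1/f_hat(x)) over the
# upper-tail observations, computed for competing fitted distributions.
# Candidate models and the "upper 10%" cut-off are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = stats.gumbel_r.rvs(loc=100, scale=25, size=60, random_state=rng)  # synthetic annual maxima
upper = np.sort(sample)[int(0.9 * len(sample)):]            # "relevant" upper-tail values

candidates = {
    "Gumbel": stats.gumbel_r,
    "GEV": stats.genextreme,
    "Lognormal": stats.lognorm,
}

for name, dist in candidates.items():
    params = dist.fit(sample)                                # ML parameter estimates
    index = np.sum(np.log(1.0 / dist.pdf(upper, *params)))   # lower value = better upper-tail fit
    print(f"{name:10s} SQC-style index: {index:.2f}")
```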

  19. Model selection for radiochromic film dosimetry

    CERN Document Server

    Méndez, Ignasi

    2015-01-01

    The purpose of this study was to find the most accurate model for radiochromic film dosimetry by comparing different channel independent perturbation models. A model selection approach based on (algorithmic) information theory was followed, and the results were validated using gamma-index analysis on a set of benchmark test cases. Several questions were addressed: (a) whether incorporating the information of the non-irradiated film, by scanning prior to irradiation, improves the results; (b) whether lateral corrections are necessary when using multichannel models; (c) whether multichannel dosimetry produces better results than single-channel dosimetry; (d) which multichannel perturbation model provides more accurate film doses. It was found that scanning prior to irradiation and applying lateral corrections improved the accuracy of the results. For some perturbation models, increasing the number of color channels did not result in more accurate film doses. Employing Truncated Normal perturbations was found to...

  20. Portfolio Selection Model with Derivative Securities

    Institute of Scientific and Technical Information of China (English)

    王春峰; 杨建林; 蒋祥林

    2003-01-01

    Traditional portfolio theory assumes that the return rate of a portfolio is normally distributed. However, this assumption is not true when derivative assets are incorporated. In this paper a portfolio selection model is developed based on a utility function which can capture asymmetries in random variable distributions. Other realistic conditions are also considered, such as liabilities and integer decision variables. Since the resulting model is a complex mixed-integer nonlinear programming problem, a simulated annealing algorithm is applied for its solution. A numerical example is given and a sensitivity analysis is conducted for the model.

  1. Test the Ocean Acidification Hypothesis during the End-Permian Mass Extinction Using an Earth System Model

    Science.gov (United States)

    Cui, Y.; Kump, L.; Ridgwell, A. J.; Meyer, K. M.

    2012-12-01

    The end-Permian is associated with a 3-5‰ carbon isotope excursion in the ocean-atmosphere system within 20 kyr, which could be explained by a rapid emission of a large amount of greenhouse gases. This leads to the hypothesis of ocean acidification as a primary driver for the end-Permian mass extinction event. In order to test this hypothesis, we conducted a series of experiments varying initial and boundary conditions using an Earth system model of intermediate complexity (GENIE: http://www.genie.ac.uk/). The late Permian ocean has been proposed as a "Neritan" ocean due to the lack of pelagic carbonate production. We test the ocean's buffering capacity against rapid CO2 emission by turning on the pelagic carbonate factory, resulting in a "Cretan" ocean similar to today's. Due to the uncertainties in reconstructed paleo-pCO2 records, we test the model sensitivity by varying the initial pCO2, ranging from 1× PAL (preindustrial atmospheric level), 5× PAL, 10× PAL to 20× PAL. Ocean saturation state with respect to calcite (aragonite) in the Late Permian is also a key uncertainty; estimates vary from Ωcalcite = 2.5 to a supersaturated state (Ωcalcite = 10) (Ridgwell 2005; Montenegro et al. 2011). We test this key uncertainty in both the "Neritan" and "Cretan" ocean cases. GENIE was spun up for >200 kyr to reach sedimentary equilibrium and ensure that the weathering input balances the sediment output. Temperature-dependent silicate weathering feedback is also turned on in the model as a driver of the long-term drawdown of atmospheric pCO2. We then invert the model by forcing the atmosphere δ13C to track our prescribed carbon isotopes derived from the Meishan section in South China and the Gartnerkofel-1 core in the Alps, Austria, at each time step. The two carbon isotope records are statistically treated to remove the noise that could result in unrealistic fluctuations in the derivatives of δ13C. Due to the uncertainties in the age model applied on these two records and different

  2. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension, Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.

  3. On Model Selection Criteria in Multimodel Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ye, Ming; Meyer, Philip D.; Neuman, Shlomo P.

    2008-03-21

    Hydrologic systems are open and complex, rendering them prone to multiple conceptualizations and mathematical descriptions. There has been a growing tendency to postulate several alternative hydrologic models for a site and use model selection criteria to (a) rank these models, (b) eliminate some of them and/or (c) weigh and average predictions and statistics generated by multiple models. This has led to some debate among hydrogeologists about the merits and demerits of common model selection (also known as model discrimination or information) criteria such as AIC [Akaike, 1974], AICc [Hurvich and Tsai, 1989], BIC [Schwartz, 1978] and KIC [Kashyap, 1982] and some lack of clarity about the proper interpretation and mathematical representation of each criterion. In particular, whereas we [Neuman, 2003; Ye et al., 2004, 2005; Meyer et al., 2007] have based our approach to multimodel hydrologic ranking and inference on the Bayesian criterion KIC (which reduces asymptotically to BIC), Poeter and Anderson [2005] and Poeter and Hill [2007] have voiced a preference for the information-theoretic criterion AICc (which reduces asymptotically to AIC). Their preference stems in part from a perception that KIC and BIC require a "true" or "quasi-true" model to be in the set of alternatives while AIC and AICc are free of such an unreasonable requirement. We examine the model selection literature to find that (a) all published rigorous derivations of AIC and AICc require that the (true) model having generated the observational data be in the set of candidate models; (b) though BIC and KIC were originally derived by assuming that such a model is in the set, BIC has been rederived by Cavanaugh and Neath [1999] without the need for such an assumption; (c) KIC reduces to BIC as the number of observations becomes large relative to the number of adjustable model parameters, implying that it likewise does not require the existence of a true model in the set of alternatives; (d) if a true
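
    As background, the criteria under debate differ mainly in their penalty terms; the helper below makes that concrete. The KIC formula shown is a simplified stand-in (conventions for the Fisher-information term vary between authors), and the numbers are hypothetical.

```python
# Penalty structure of the criteria discussed above, given a maximised
# log-likelihood ll, k adjustable parameters and n observations.
import numpy as np

def aic(ll, k):
    return -2 * ll + 2 * k

def aicc(ll, k, n):
    return aic(ll, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(ll, k, n):
    return -2 * ll + k * np.log(n)

def kic(ll, k, n, fisher_logdet):
    # Kashyap-style criterion; 'fisher_logdet' is a simplified stand-in for the
    # log-determinant of the Fisher information matrix (conventions vary).
    return -2 * ll + k * np.log(n / (2 * np.pi)) + fisher_logdet

ll, k, n = -123.4, 5, 40                       # hypothetical fit summary
print(aic(ll, k), aicc(ll, k, n), bic(ll, k, n), kic(ll, k, n, fisher_logdet=2.3))
```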

  4. A Neurodynamical Model for Selective Visual Attention

    Institute of Scientific and Technical Information of China (English)

    QU Jing-Yi; WANG Ru-Bin; ZHANG Yuan; DU Ying

    2011-01-01

    A neurodynamical model for selective visual attention considering orientation preference is proposed. Since orientation preference is one of the most important properties of neurons in the primary visual cortex, it should be fully considered besides external stimuli intensity. By tuning the parameter of orientation preference, the regimes of synchronous dynamics associated with the development of the attention focus are studied. The attention focus is represented by those peripheral neurons that generate spikes synchronously with the central neuron while the activity of other peripheral neurons is suppressed. Such dynamics correspond to the partial synchronization mode. Simulation results show that the model can sequentially select objects with different orientation preferences and has a reliable shift of attention from one object to another, which are consistent with the experimental results that neurons with different orientation preferences are laid out in pinwheel patterns.

  5. Using a Simple Binomial Model to Assess Improvement in Predictive Capability: Sequential Bayesian Inference, Hypothesis Testing, and Power Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sigeti, David E. [Los Alamos National Laboratory; Pelak, Robert A. [Los Alamos National Laboratory

    2012-09-11

    We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, {theta}, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for {theta} may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of {theta} and, in particular, how there will always be a region around {theta} = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric' in the case that the analysis shows that {theta} is close to 1/2 and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a
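
    The binomial analysis described above reduces to a Beta posterior for θ under a conjugate prior. The sketch below (uniform prior, hypothetical counts) shows how the posterior probability of improvement and the posterior standard deviation, the "plan B metric", can be computed.

```python
# Sketch of the binomial analysis: with a Beta(1,1) prior on theta, the
# posterior after s "new code better" outcomes in m trials is Beta(1+s, 1+m-s).
# The prior choice and the numbers below are assumptions for illustration.
from scipy.stats import beta

m, s = 20, 14                         # hypothetical: new code better in 14 of 20 experiments
posterior = beta(1 + s, 1 + m - s)

p_improved = posterior.sf(0.5)        # posterior probability that theta > 1/2
print(f"P(theta > 0.5 | data) = {p_improved:.3f}")
print(f"posterior sd          = {posterior.std():.3f}")   # the 'plan B' metric
```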

  6. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  7. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: ’Are we actually dealing with a convolutive mixture?’. We try to answer this question for EEG data.

  8. Skewed factor models using selection mechanisms

    KAUST Repository

    Kim, Hyoung-Moon

    2015-12-21

    Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.

  9. The Null Hypothesis as the Research Hypothesis.

    Science.gov (United States)

    Myers, Barbara E.; Pohlmann, John T.

    A procedure was developed within hypothesis-testing logic that allows researchers to support a hypothesis that has traditionally been the statistical or null hypothesis. Four activities involved in attainment of this goal were discussed: (1) development of statistical logic needed to define the sampling distribution associated with the hypothesis…

  10. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    Full Text Available In this paper, behavioral construct of suitability is used to develop a multicriteria decision making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. Analytical hierarchy process technique is used to model the suitability considerations with a view to obtaining the suitability performance score in respect of each asset. A fuzzy multiple criteria decision making method is used to obtain the financial quality score of each asset based upon investor's rating on the financial criteria. Two optimization models are developed for optimal asset allocation considering simultaneously financial and suitability criteria. An empirical study is conducted on randomly selected assets from National Stock Exchange, Mumbai, India to demonstrate the effectiveness of the proposed methodology.

  11. Multi-dimensional model order selection

    Directory of Open Access Journals (Sweden)

    Roemer Florian

    2011-01-01

    Full Text Available Multi-dimensional model order selection (MOS) techniques achieve an improved accuracy, reliability, and robustness, since they consider all dimensions jointly during the estimation of parameters. Additionally, from fundamental identifiability results of multi-dimensional decompositions, it is known that the number of main components can be larger when compared to matrix-based decompositions. In this article, we show how to use tensor calculus to extend matrix-based MOS schemes and we also present our proposed multi-dimensional model order selection scheme based on the closed-form PARAFAC algorithm, which is only applicable to multi-dimensional data. In general, as shown by means of simulations, the Probability of correct Detection (PoD) of our proposed multi-dimensional MOS schemes is much better than the PoD of matrix-based schemes.

  12. Model selection and comparison for independents sinusoids

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2014-01-01

    In the signal processing literature, many methods have been proposed for estimating the number of sinusoidal basis functions from a noisy data set. The most popular method is the asymptotic MAP criterion, which is sometimes also referred to as the BIC. In this paper, we extend and improve this method by considering the problem in a full Bayesian framework instead of the approximate formulation, on which the asymptotic MAP criterion is based. This leads to a new model selection and comparison method, the lp-BIC, whose computational complexity is of the same order as the asymptotic MAP criterion. Through simulations, we demonstrate that the lp-BIC outperforms the asymptotic MAP criterion and other state of the art methods in terms of model selection, de-noising and prediction performance. The simulation code is available online.

  13. Tracking Models for Optioned Portfolio Selection

    Science.gov (United States)

    Liang, Jianfeng

    In this paper we study a target tracking problem for the portfolio selection involving options. In particular, the portfolio in question contains a stock index and some European style options on the index. A refined tracking-error-variance methodology is adopted to formulate this problem as a multi-stage optimization model. We derive the optimal solutions based on stochastic programming and optimality conditions. Attention is paid to the structure of the optimal payoff function, which is shown to possess rich properties.

  14. New insights in portfolio selection modeling

    OpenAIRE

    Zareei, Abalfazl

    2016-01-01

    Recent advances in the field of network theory have opened a new line of development in portfolio selection techniques, one that views the financial market as a network with assets as nodes and links accounting for various types of relationships among financial assets. In the first chapter, we model the shock propagation mechanism among assets via network theory and provide an approach to constructing well-diversified portfolios that are resilient to shock propagation and c...

  15. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
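
    For background, the classical (non-robust) two-step estimator whose robustness is analysed above can be sketched on synthetic data as follows; statsmodels is assumed to be available, and this is the textbook procedure, not the authors' robustified version.

```python
# Classical (non-robust) Heckman two-step estimator on synthetic data,
# shown only as background for the estimator analysed in the paper.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=n)                          # exclusion-restriction variable
x = rng.normal(size=n)
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T

select = (0.5 + 1.0 * z + u) > 0                # selection equation
y = np.where(select, 1.0 + 2.0 * x + e, np.nan) # outcome observed only if selected

# Step 1: probit for selection, then the inverse Mills ratio.
Z = sm.add_constant(np.column_stack([z]))
probit = sm.Probit(select.astype(int), Z).fit(disp=False)
xb = Z @ probit.params                          # linear index
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS on the selected sample, augmented with the inverse Mills ratio.
X = sm.add_constant(np.column_stack([x[select], imr[select]]))
ols = sm.OLS(y[select], X).fit()
print(ols.params)   # intercept, slope on x, coefficient on the Mills ratio
```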

  16. Bayesian model selection in Gaussian regression

    CERN Document Server

    Abramovich, Felix

    2009-01-01

    We consider a Bayesian approach to model selection in Gaussian linear regression, where the number of predictors might be much larger than the number of observations. From a frequentist view, the proposed procedure results in the penalized least squares estimation with a complexity penalty associated with a prior on the model size. We investigate the optimality properties of the resulting estimator. We establish the oracle inequality and specify conditions on the prior that imply its asymptotic minimaxity within a wide range of sparse and dense settings for "nearly-orthogonal" and "multicollinear" designs.

  17. Model Selection in Data Analysis Competitions

    DEFF Research Database (Denmark)

    Wind, David Kofoed; Winther, Ole

    2014-01-01

    The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and, more recently, the competitions hosted on the online platform Kaggle. In this paper, we will state and try to verify a set of qualitative hypotheses about predictive modelling, both in general and in the scope of data analysis competitions. To verify our hypotheses we will look at previous competitions and their outcomes, use qualitative interviews with top...

  18. A novel mouse model of cerebral cavernous malformations based on the two-hit mutation hypothesis recapitulates the human disease.

    Science.gov (United States)

    McDonald, David A; Shenkar, Robert; Shi, Changbin; Stockton, Rebecca A; Akers, Amy L; Kucherlapati, Melanie H; Kucherlapati, Raju; Brainer, James; Ginsberg, Mark H; Awad, Issam A; Marchuk, Douglas A

    2011-01-15

    Cerebral cavernous malformations (CCMs) are vascular lesions of the central nervous system appearing as multicavernous, blood-filled capillaries, leading to headache, seizure and hemorrhagic stroke. CCM occurs either sporadically or as an autosomal dominant disorder caused by germline mutation of one of the three genes: CCM1/KRIT1, CCM2/MGC4607 and CCM3/PDCD10. Surgically resected human CCM lesions have provided molecular and immunohistochemical evidence for a two-hit (germline plus somatic) mutation mechanism. In contrast to the equivalent human genotype, mice heterozygous for a Ccm1- or Ccm2-null allele do not develop CCM lesions. Based on the two-hit hypothesis, we attempted to improve the penetrance of the model by crossing Ccm1 and Ccm2 heterozygotes into a mismatch repair-deficient Msh2(-/-) background. Ccm1(+/-)Msh2(-/-) mice exhibit CCM lesions with high penetrance as shown by magnetic resonance imaging and histology. Significantly, the CCM lesions range in size from early-stage, isolated caverns to large, multicavernous lesions. A subset of endothelial cells within the CCM lesions revealed somatic loss of CCM protein staining, supporting the two-hit mutation mechanism. The late-stage CCM lesions displayed many of the characteristics of human CCM lesions, including hemosiderin deposits, immune cell infiltration, increased endothelial cell proliferation and increased Rho-kinase activity. Some of these characteristics were also seen, but to a lesser extent, in early-stage lesions. Tight junctions were maintained between CCM lesion endothelial cells, but gaps were evident between endothelial cells and basement membrane was defective. In contrast, the Ccm2(+/-)Msh2(-/-) mice lacked cerebrovascular lesions. The CCM1 mouse model provides an in vivo tool to investigate CCM pathogenesis and new therapies.

  19. Inflation model selection meets dark radiation

    Science.gov (United States)

    Tram, Thomas; Vallance, Robert; Vennin, Vincent

    2017-01-01

    We investigate how inflation model selection is affected by the presence of additional free-streaming relativistic degrees of freedom, i.e. dark radiation. We perform a full Bayesian analysis of both inflation parameters and cosmological parameters taking reheating into account self-consistently. We compute the Bayesian evidence for a few representative inflation scenarios in both the standard ΛCDM model and an extension including dark radiation parametrised by its effective number of relativistic species Neff. Using a minimal dataset (Planck low-l polarisation, temperature power spectrum and lensing reconstruction), we find that the observational status of most inflationary models is unchanged. The exceptions are potentials such as power-law inflation that predict large values for the scalar spectral index that can only be realised when Neff is allowed to vary. Adding baryon acoustic oscillations data and the B-mode data from BICEP2/Keck makes power-law inflation disfavoured, while adding local measurements of the Hubble constant H0 makes power-law inflation slightly favoured compared to the best single-field plateau potentials. This illustrates how the dark radiation solution to the H0 tension would have deep consequences for inflation model selection.

  20. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    Query optimizers rely on statistical models that succinctly describe the underlying data. Models are used to derive cardinality estimates for intermediate relations, which in turn guide the optimizer to choose the best query execution plan. The quality of the resulting plan is highly dependent on the accuracy of these estimates, which are commonly computed as the product of the selectivities of the constituent predicates. However, this independence assumption is more often than not wrong, and is considered to be the most common cause of sub-optimal query execution plans chosen by modern query optimizers. We take a step towards a principled and practical approach to performing cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss
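
    The benefit of dropping the independence assumption can be seen with a two-attribute toy table: a small two-dimensional distribution captures correlation that the product of one-dimensional selectivities misses. The data, column names and predicate below are hypothetical.

```python
# Toy contrast between the independence assumption and a small 2-D distribution
# for estimating the selectivity of "age < 30 AND city == 'Oslo'".
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 10_000
city = rng.choice(["Oslo", "Aarhus"], size=n, p=[0.3, 0.7])
# Correlated attribute: Oslo rows skew younger in this synthetic table.
age = np.where(city == "Oslo", rng.normal(27, 5, n), rng.normal(45, 8, n))
df = pd.DataFrame({"age": age, "city": city})

young = df["age"] < 30
true_sel = (young & (df["city"] == "Oslo")).mean()

# Independence assumption: product of one-dimensional selectivities.
indep_sel = young.mean() * (df["city"] == "Oslo").mean()

# Tiny two-dimensional distribution over (young, city), the kind of factor a
# graphical-model-based estimator would keep instead of independent marginals.
joint = pd.crosstab(young, df["city"], normalize=True)
joint_sel = joint.loc[True, "Oslo"]

print(f"true={true_sel:.3f}  independence={indep_sel:.3f}  joint-2D={joint_sel:.3f}")
```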

  1. The Markowitz model for portfolio selection

    Directory of Open Access Journals (Sweden)

    MARIAN ZUBIA ZUBIAURRE

    2002-06-01

    Full Text Available Since its first appearance, the Markowitz model for portfolio selection has been a basic theoretical reference, opening several new development options. In practice, however, it has hardly been used among portfolio managers and investment analysts in spite of its success in the theoretical field. With our paper we would like to show how the Markowitz model may be of great help in real stock markets. Through an empirical study we want to verify the capability of Markowitz’s model to present portfolios with higher profitability and lower risk than the portfolio represented by the IBEX-35 and IGBM indexes. Furthermore, we want to test the suggested efficiency of these indexes as representatives of the theoretical market portfolio.
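
    The mean-variance optimisation at the core of the Markowitz model can be sketched in a few lines; the returns below are simulated rather than IBEX-35 or IGBM data, and the long-only constraint is an extra assumption.

```python
# Minimal Markowitz mean-variance sketch on synthetic returns: minimise
# portfolio variance subject to a target expected return and full investment.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
returns = rng.normal(0.0005, 0.01, size=(500, 5))     # 500 days, 5 assets (hypothetical)
mu = returns.mean(axis=0)
cov = np.cov(returns, rowvar=False)

target = mu.mean()                                    # aim for the average expected return
n = len(mu)

res = minimize(
    fun=lambda w: w @ cov @ w,                        # portfolio variance
    x0=np.full(n, 1.0 / n),
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0},
                 {"type": "eq", "fun": lambda w: w @ mu - target}],
    bounds=[(0.0, 1.0)] * n,                          # long-only (an extra assumption)
)
print("optimal weights:", np.round(res.x, 3))
```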

  2. Model selection for Poisson processes with covariates

    CERN Document Server

    Sart, Mathieu

    2011-01-01

    We observe $n$ inhomogeneous Poisson processes with covariates and aim at estimating their intensities. To handle this problem, we assume that the intensity of each Poisson process is of the form $s (\\cdot, x)$ where $x$ is the covariate and where $s$ is an unknown function. We propose a model selection approach where the models are used to approximate the multivariate function $s$. We show that our estimator satisfies an oracle-type inequality under very weak assumptions both on the intensities and the models. By using a Hellinger-type loss, we establish non-asymptotic risk bounds and specify them under various kinds of assumptions on the target function $s$ such as being smooth or composite. Besides, we show that our estimation procedure is robust with respect to these assumptions.

  3. Information criteria for astrophysical model selection

    CERN Document Server

    Liddle, A R

    2007-01-01

    Model selection is the problem of distinguishing competing models, perhaps featuring different numbers of parameters. The statistics literature contains two distinct sets of tools, those based on information theory such as the Akaike Information Criterion (AIC), and those on Bayesian inference such as the Bayesian evidence and Bayesian Information Criterion (BIC). The Deviance Information Criterion combines ideas from both heritages; it is readily computed from Monte Carlo posterior samples and, unlike the AIC and BIC, allows for parameter degeneracy. I describe the properties of the information criteria, and as an example compute them from WMAP3 data for several cosmological models. I find that at present the information theory and Bayesian approaches give significantly different conclusions from that data.
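
    Unlike the AIC and BIC, the DIC is computed directly from Monte Carlo posterior samples. The sketch below shows the standard DIC = Dbar + pD computation on a toy normal-mean model; the data and the "posterior" samples are synthetic stand-ins for an actual MCMC run.

```python
# Computing the Deviance Information Criterion from posterior samples,
# DIC = Dbar + pD with pD = Dbar - D(theta_bar), on a toy normal-mean model
# with known variance. Data and posterior samples are synthetic.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
y = rng.normal(1.0, 1.0, size=50)                    # observed data (synthetic)

# Pretend these came from an MCMC run: posterior samples of the mean.
post_mu = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=4000)

def deviance(mu):
    return -2.0 * norm.logpdf(y, loc=mu, scale=1.0).sum()

dev_samples = np.array([deviance(m) for m in post_mu])
dbar = dev_samples.mean()                            # posterior mean deviance
d_at_mean = deviance(post_mu.mean())                 # deviance at posterior mean
p_d = dbar - d_at_mean                               # effective number of parameters
dic = dbar + p_d
print(f"pD = {p_d:.2f}, DIC = {dic:.2f}")
```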

  4. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), which is the normalizing constant in the denominator of Bayes' theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models against the selection of over-fitted ones by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with

  5. Appropriate model selection methods for nonstationary generalized extreme value models

    Science.gov (United States)

    Kim, Hanbeen; Kim, Sooyoung; Shin, Hongjoon; Heo, Jun-Haeng

    2017-04-01

    Considerable evidence that hydrologic data series are nonstationary in nature has been found to date. This has resulted in the conduct of many studies in the area of nonstationary frequency analysis. Nonstationary probability distribution models involve parameters that vary over time. Therefore, it is not a straightforward process to apply conventional goodness-of-fit tests to the selection of an appropriate nonstationary probability distribution model. Tests that are generally recommended for such a selection include the Akaike information criterion (AIC), the corrected Akaike information criterion (AICc), the Bayesian information criterion (BIC), and the likelihood ratio test (LRT). In this study, Monte Carlo simulation was performed to compare the performances of these four tests, with regard to nonstationary as well as stationary generalized extreme value (GEV) distributions. Proper model selection ratios and sample sizes were taken into account to evaluate the performances of all four tests. The BIC demonstrated the best performance with regard to stationary GEV models. In the case of nonstationary GEV models, the AIC proved to be better than the other three methods when relatively small sample sizes were considered. With larger sample sizes, the AIC, BIC, and LRT presented the best performances for GEV models which have nonstationary location and/or scale parameters, respectively. Simulation results were then evaluated by applying all four tests to annual maximum rainfall data of selected sites, as observed by the Korea Meteorological Administration.

  6. Ancestral process and diffusion model with selection

    CERN Document Server

    Mano, Shuhei

    2008-01-01

    The ancestral selection graph in population genetics introduced by Krone and Neuhauser (1997) is an analogue to the coalescent genealogy. The number of ancestral particles, backward in time, of a sample of genes is an ancestral process, which is a birth and death process with quadratic death and linear birth rate. In this paper an explicit form of the number of ancestral particles is obtained, by using the density of the allele frequency in the corresponding diffusion model obtained by Kimura (1955). It is shown that fixation corresponds to convergence of the ancestral process to its stationary measure. The time to fixation of an allele is studied in terms of the ancestral process.

  7. Improving randomness characterization through Bayesian model selection

    CERN Document Server

    R., Rafael Díaz-H; Martínez, Alí M Angulo; U'Ren, Alfred B; Hirsch, Jorge G; Marsili, Matteo; Castillo, Isaac Pérez

    2016-01-01

    Nowadays random number generation plays an essential role in technology with important applications in areas ranging from cryptography, which lies at the core of current communication protocols, to Monte Carlo methods, and other probabilistic algorithms. In this context, a crucial scientific endeavour is to develop effective methods that allow the characterization of random number generators. However, commonly employed methods either lack formality (e.g. the NIST test suite), or are inapplicable in principle (e.g. the characterization derived from the Algorithmic Theory of Information (ATI)). In this letter we present a novel method based on Bayesian model selection, which is both rigorous and effective, for characterizing randomness in a bit sequence. We derive analytic expressions for a model's likelihood which is then used to compute its posterior probability distribution. Our method proves to be more rigorous than NIST's suite and the Borel-Normality criterion and its implementation is straightforward. We...
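
    At the heart of such a method is the comparison of marginal likelihoods for competing models of the bit sequence. A minimal version, contrasting an i.i.d. Bernoulli model with a first-order Markov model under uniform Beta priors, is sketched below; these model and prior choices are illustrative assumptions, not the authors' model class.

```python
# Minimal Bayesian model comparison for a bit sequence: i.i.d. Bernoulli
# versus first-order Markov chain, each with uniform Beta(1,1) priors.
import numpy as np
from scipy.special import betaln

bits = np.random.default_rng(5).integers(0, 2, size=200)   # sequence under test

def log_evidence_bernoulli(b):
    n1 = int(b.sum()); n0 = len(b) - n1
    # Beta-binomial marginal likelihood under a Beta(1,1) prior.
    return betaln(1 + n1, 1 + n0) - betaln(1, 1)

def log_evidence_markov(b):
    # Independent Beta(1,1) priors on P(1|0) and P(1|1); first bit uniform.
    logp = np.log(0.5)
    for prev in (0, 1):
        nxt = b[1:][b[:-1] == prev]
        n1 = int(nxt.sum()); n0 = len(nxt) - n1
        logp += betaln(1 + n1, 1 + n0) - betaln(1, 1)
    return logp

lb, lm = log_evidence_bernoulli(bits), log_evidence_markov(bits)
print(f"log Bayes factor (Bernoulli vs Markov): {lb - lm:.2f}")
```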

  8. Comparative Study on the Selection Criteria for Fitting Flood Frequency Distribution Models with Emphasis on Upper-Tail Behavior

    Directory of Open Access Journals (Sweden)

    Xiaohong Chen

    2017-05-01

    Full Text Available The upper tail of a flood frequency distribution is always specifically concerned with flood control. However, different model selection criteria often give different optimal distributions when the focus is on the upper tail of distribution. With emphasis on the upper-tail behavior, five distribution selection criteria including two hypothesis tests and three information-based criteria are evaluated in selecting the best fitted distribution from eight widely used distributions by using datasets from Thames River, Wabash River, Beijiang River and Huai River. The performance of the five selection criteria is verified by using a composite criterion with focus on upper tail events. This paper demonstrated an approach for optimally selecting suitable flood frequency distributions. Results illustrate that (1 there are different selections of frequency distributions in the four rivers by using hypothesis tests and information-based criteria approaches. Hypothesis tests are more likely to choose complex, parametric models, and information-based criteria prefer to choose simple, effective models. Different selection criteria have no particular tendency toward the tail of the distribution; (2 The information-based criteria perform better than hypothesis tests in most cases when the focus is on the goodness of predictions of the extreme upper tail events. The distributions selected by information-based criteria are more likely to be close to true values than the distributions selected by hypothesis test methods in the upper tail of the frequency curve; (3 The proposed composite criterion not only can select the optimal distribution, but also can evaluate the error of estimated value, which often plays an important role in the risk assessment and engineering design. In order to decide on a particular distribution to fit the high flow, it would be better to use the composite criterion.

  9. Inflation Model Selection meets Dark Radiation

    CERN Document Server

    Tram, Thomas; Vennin, Vincent

    2016-01-01

    We investigate how inflation model selection is affected by the presence of additional free-streaming relativistic degrees of freedom, i.e. dark radiation. We perform a full Bayesian analysis of both inflation parameters and cosmological parameters taking reheating into account self-consistently. We compute the Bayesian evidence for a few representative inflation scenarios in both the standard $\\Lambda\\mathrm{CDM}$ model and an extension including dark radiation parametrised by its effective number of relativistic species $N_\\mathrm{eff}$. We find that the observational status of most inflationary models is unchanged, with the exception of potentials such as power-law inflation that predict a value for the scalar spectral index that is too large in $\\Lambda\\mathrm{CDM}$ but which can be accommodated when $N_\\mathrm{eff}$ is allowed to vary. In this case, cosmic microwave background data indicate that power-law inflation is one of the best models together with plateau potentials. However, contrary to plateau p...

  10. High-dimensional model estimation and model selection

    CERN Document Server

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
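
    A minimal example of the p >> n setting with a sparse regularised estimator, using scikit-learn's cross-validated LASSO on synthetic data:

```python
# LASSO in a p >> n setting on synthetic data (scikit-learn assumed available).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(6)
n, p = 80, 1000                                      # far more variables than samples
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = [3, -2, 1.5, 1, -1]   # only 5 truly active predictors
y = X @ beta + rng.normal(scale=0.5, size=n)

model = LassoCV(cv=5).fit(X, y)                      # penalty chosen by cross-validation
selected = np.flatnonzero(model.coef_ != 0)
print("selected predictors:", selected[:10], "... total:", selected.size)
```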

  11. Fuzzy modelling for selecting headgear types.

    Science.gov (United States)

    Akçam, M Okan; Takada, Kenji

    2002-02-01

    The purpose of this study was to develop a computer-assisted inference model for selecting appropriate types of headgear appliance for orthodontic patients and to investigate its clinical versatility as a decision-making aid for inexperienced clinicians. Fuzzy rule bases were created for degrees of overjet, overbite, and mandibular plane angle variables, respectively, according to subjective criteria based on the clinical experience and knowledge of the authors. The rules were then transformed into membership functions and the geometric mean aggregation was performed to develop the inference model. The resultant fuzzy logic was then tested on 85 cases in which the patients had been diagnosed as requiring headgear appliances. Eight experienced orthodontists judged each of the cases, and decided if they 'agreed', 'accepted', or 'disagreed' with the recommendations of the computer system. Intra-examiner agreements were investigated using repeated judgements of a set of 30 orthodontic cases and the kappa statistic. All of the examiners exceeded a kappa score of 0.7, allowing them to participate in the test run of the validity of the proposed inference model. The examiners' agreement with the system's recommendations was evaluated statistically. The average satisfaction rate of the examiners was 95.6 per cent and, for 83 out of the 85 cases, 97.6 per cent. The majority of the examiners (i.e. six or more out of the eight) were satisfied with the recommendations of the system. Thus, the usefulness of the proposed inference logic was confirmed.
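
    The general shape of the inference scheme (membership functions for the three cephalometric variables combined by geometric-mean aggregation) can be sketched generically as below; the membership shapes and breakpoints are invented for illustration and are not the authors' calibrated rules.

```python
# Generic sketch of the inference scheme described above: fuzzy memberships
# for three cephalometric variables combined by geometric-mean aggregation.
# Membership shapes and breakpoints are invented for illustration only.
import numpy as np

def ramp(x, lo, hi):
    """Piecewise-linear membership rising from 0 at `lo` to 1 at `hi`."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def headgear_indication(overjet_mm, overbite_mm, mp_angle_deg):
    m_overjet = ramp(overjet_mm, 3.0, 7.0)            # larger overjet -> stronger indication
    m_overbite = ramp(overbite_mm, 2.0, 5.0)
    m_angle = 1.0 - ramp(mp_angle_deg, 28.0, 38.0)    # high-angle cases argue against
    memberships = np.array([m_overjet, m_overbite, m_angle])
    return memberships.prod() ** (1.0 / len(memberships))   # geometric-mean aggregation

print(f"indication score: {headgear_indication(6.5, 4.0, 30.0):.2f}")
```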

  12. SLAM: A Connectionist Model for Attention in Visual Selection Tasks.

    Science.gov (United States)

    Phaf, R. Hans; And Others

    1990-01-01

    The SeLective Attention Model (SLAM) performs visual selective attention tasks and demonstrates that object selection and attribute selection are both necessary and sufficient for visual selection. The SLAM is described, particularly with regard to its ability to represent an individual subject performing filtering tasks. (TJH)

  13. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper, model selection uncertainty is considered and model averaging is proposed. The proposal is related to the theory of James and Stein of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
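
    The James-Stein result that motivates the proposal is easy to state in code: with three or more means, shrinking the raw estimates dominates them in total squared-error risk. A toy sketch with synthetic data:

```python
# James-Stein shrinkage of a vector of independent normal observations,
# the classical result the model-averaging proposal draws on (toy data).
import numpy as np

rng = np.random.default_rng(7)
theta = rng.normal(size=10)              # true means (p = 10 >= 3)
x = theta + rng.normal(size=10)          # one noisy observation per mean, variance 1

# Positive-part James-Stein shrinkage factor toward zero.
shrink = max(0.0, 1.0 - (len(x) - 2) / np.sum(x**2))
js = shrink * x

print("risk MLE        :", np.sum((x - theta) ** 2))
print("risk James-Stein:", np.sum((js - theta) ** 2))
```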

  14. [Is there a correlation between back pain and stability of the lumbar spine in pregnancy? A model-based hypothesis].

    Science.gov (United States)

    Liebetrau, A; Puta, C; Schinowski, D; Wulf, T; Wagner, H

    2012-02-01

    During pregnancy approximately 50% of women suffer from low back pain (LBP), which significantly affects their everyday life. The pain could result in chronic insomnia, limit the pregnant women in their ability to work and produce a reduction of their physical activity. The etiology of the pain is still critically discussed and not entirely understood. In the literature different explanations for LBP are given and one of the most common reasons is the anatomical changes of the female body during pregnancy; for instance, there is an increase in the sagittal moments because of the enlarged uterus and fetus and the occurrence of hyperlordosis. The aim of this study was to describe how the anatomical changes in pregnant women affect the stability and the moments acting on the lumbar spine with the help of a simplified musculoskeletal model. A two-dimensional musculoskeletal model of the lumbar spine in the sagittal plane consisting of five lumbar vertebrae was developed. The model included five centres of rotation and three antagonistic pairs of paraspinal muscles. The concept of altered acting torques during pregnancy was explored by varying the geometrical arrangements. The situations non-pregnant, pregnant and pregnant with hyperlordosis were considered for the model-based approach. These simulations were done dependent on the stability of the erect posture and local countertorques of every lumbar segment. In spite of the simplicity of the model and the musculoskeletal arrangement it was possible to maintain equilibrium of the erect posture at every lumbar spinal segment with one minimum physiological cross-sectional area of all paraspinal muscles. The stability of the musculoskeletal system depends on the muscular activity of the paraspinal muscles and diminishing the muscular activity causes unstable lumbar segments. The relationship between the non-pregnant and the pregnant simulations demonstrated a considerable increase of acting segmental countertorques

  15. Hidden Markov Model for Stock Selection

    Directory of Open Access Journals (Sweden)

    Nguyet Nguyen

    2015-10-01

    Full Text Available The hidden Markov model (HMM) is typically used to predict the hidden regimes of observation data. Therefore, this model finds applications in many different areas, such as speech recognition systems, computational molecular biology and financial market predictions. In this paper, we use HMM for stock selection. We first use HMM to make monthly regime predictions for the four macroeconomic variables: inflation (consumer price index, CPI), industrial production index (INDPRO), stock market index (S&P 500) and market volatility (VIX). At the end of each month, we calibrate HMM’s parameters for each of these economic variables and predict its regimes for the next month. We then look back into historical data to find the time periods for which the four variables had similar regimes with the forecasted regimes. Within those similar periods, we analyze all of the S&P 500 stocks to identify which stock characteristics have been well rewarded during the time periods and assign scores and corresponding weights for each of the stock characteristics. A composite score of each stock is calculated based on the scores and weights of its features. Based on this algorithm, we choose the 50 top ranking stocks to buy. We compare the performances of the portfolio with the benchmark index, S&P 500. With an initial investment of $100 in December 1999, over 15 years, in December 2014, our portfolio had an average gain per annum of 14.9% versus 2.3% for the S&P 500.
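
    The regime-detection step can be sketched with the hmmlearn package (an assumption; the paper does not name its implementation): fit a two-state Gaussian HMM to a single macro series and read off the regime of the latest month. The data below are synthetic, not the CPI, INDPRO, S&P 500 or VIX series.

```python
# Sketch of the regime-detection step: fit a two-state Gaussian HMM to one
# macro series and read off the inferred regime of the latest observation.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(8)
calm = rng.normal(0.002, 0.01, size=120)
stressed = rng.normal(-0.004, 0.03, size=60)
series = np.concatenate([calm, stressed]).reshape(-1, 1)   # monthly observations

model = GaussianHMM(n_components=2, covariance_type="full", n_iter=200)
model.fit(series)
regimes = model.predict(series)          # Viterbi path of hidden regimes
print("current regime:", regimes[-1], "state means:", model.means_.ravel())
```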

  16. Occurrence of tributyltin (TBT)-resistant bacteria is not related to TBT pollution in Mekong River and coastal sediment: with a hypothesis of selective pressure from suspended solid.

    Science.gov (United States)

    Suehiro, Fujiyo; Mochizuki, Hiroko; Nakamura, Shinji; Iwata, Hisato; Kobayashi, Takeshi; Tanabe, Shinsuke; Fujimori, Yoshifumi; Nishimura, Fumitake; Tuyen, Bui Cach; Tana, Touch Seang; Suzuki, Satoru

    2007-07-01

    Tributyltin (TBT) is an organotin compound that is toxic to aquatic life ranging from bacteria to mammals. This study examined the concentration of TBT in sediment from and near the Mekong River and the distribution of TBT-resistant bacteria. The concentration of TBT in the sediment and the occurrence of TBT-resistant bacteria were unrelated, and chemicals other than TBT might induce TBT resistance. TBT-resistant bacteria were more abundant in the dry season than in the rainy season. Differences in the selection process of TBT-resistant bacteria between dry and rainy seasons were examined using an advection-diffusion model of a suspended solid (SS) that conveys chemicals. The estimated dilution-diffusion time over a distance of 120 km downstream from a release site was 20 days during the dry season and 5 days during the rainy season, suggesting that bacteria at the sediment surface could be exposed to SS for longer periods during the dry season.

  17. Physiopathological hypothesis of cellulite.

    Science.gov (United States)

    de Godoy, José Maria Pereira; de Godoy, Maria de Fátima Guerreiro

    2009-08-31

    A series of questions is asked concerning this condition, including its name, the consensus about the histopathological findings, the physiological hypothesis and the treatment of the disease. We established a hypothesis for cellulite and confirmed that the clinical response is compatible with this hypothesis. Hence this novel approach brings a modern physiological concept with a physiopathologic basis and clinical proof of the hypothesis. We emphasize that the choice of patient, correct diagnosis of cellulite and the technique employed are fundamental to success.

  18. Physiopathological Hypothesis of Cellulite

    OpenAIRE

    de Godoy, José Maria Pereira; Godoy, Maria de Fátima Guerreiro

    2009-01-01

    A series of questions is asked concerning this condition, including its name, the consensus about the histopathological findings, the physiological hypothesis and the treatment of the disease. We established a hypothesis for cellulite and confirmed that the clinical response is compatible with this hypothesis. Hence this novel approach brings a modern physiological concept with a physiopathologic basis and clinical proof of the hypothesis. We emphasize that the choice of patient, correct ...

  19. Neuromuscular strain as a contributor to cognitive and other symptoms in Chronic Fatigue Syndrome: Hypothesis and conceptual model.

    Directory of Open Access Journals (Sweden)

    Peter C. Rowe

    2013-05-01

    Full Text Available Individuals with chronic fatigue syndrome (CFS have heightened sensitivity and increased symptoms following various physiologic challenges, such as orthostatic stress, physical exercise, and cognitive challenges. Similar heightened sensitivity to the same stressors in fibromyalgia (FM has led investigators to propose that these findings reflect a state of central sensitivity. A large body of evidence supports the concept of central sensitivity in FM. A more modest literature provides partial support for this model in CFS, particularly with regard to pain. Nonetheless, fatigue and cognitive dysfunction have not been explained by the central sensitivity data thus far. Peripheral factors have attracted attention recently as contributors to central sensitivity. Work by Brieg, Sunderland, and others has emphasized the ability of the nervous system to undergo accommodative changes in length in response to the range of limb and trunk movements carried out during daily activity. If that ability to elongate is impaired—due to movement restrictions in tissues adjacent to nerves, or due to swelling or adhesions within the nerve itself—the result is an increase in mechanical tension within the nerve. This adverse neural tension, also termed neurodynamic dysfunction, is thought to contribute to pain and other symptoms through a variety of mechanisms. These include mechanical sensitization and altered nociceptive signaling, altered proprioception, adverse patterns of muscle recruitment and force of muscle contraction, reduced intra-neural blood flow, and release of inflammatory neuropeptides. Because it is not possible to differentiate completely between adverse neural tension and strain in muscles, fascia, and other soft tissues, we use the more general term neuromuscular strain. In our clinical work, we have found that neuromuscular restrictions are common in CFS, and that many symptoms of CFS can be reproduced by selectively adding neuromuscular strain

  20. Neuromuscular strain as a contributor to cognitive and other symptoms in chronic fatigue syndrome: hypothesis and conceptual model.

    Science.gov (United States)

    Rowe, Peter C; Fontaine, Kevin R; Violand, Richard L

    2013-01-01

    Individuals with chronic fatigue syndrome (CFS) have heightened sensitivity and increased symptoms following various physiologic challenges, such as orthostatic stress, physical exercise, and cognitive challenges. Similar heightened sensitivity to the same stressors in fibromyalgia (FM) has led investigators to propose that these findings reflect a state of central sensitivity. A large body of evidence supports the concept of central sensitivity in FM. A more modest literature provides partial support for this model in CFS, particularly with regard to pain. Nonetheless, fatigue and cognitive dysfunction have not been explained by the central sensitivity data thus far. Peripheral factors have attracted attention recently as contributors to central sensitivity. Work by Brieg, Sunderland, and others has emphasized the ability of the nervous system to undergo accommodative changes in length in response to the range of limb and trunk movements carried out during daily activity. If that ability to elongate is impaired-due to movement restrictions in tissues adjacent to nerves, or due to swelling or adhesions within the nerve itself-the result is an increase in mechanical tension within the nerve. This adverse neural tension, also termed neurodynamic dysfunction, is thought to contribute to pain and other symptoms through a variety of mechanisms. These include mechanical sensitization and altered nociceptive signaling, altered proprioception, adverse patterns of muscle recruitment and force of muscle contraction, reduced intra-neural blood flow, and release of inflammatory neuropeptides. Because it is not possible to differentiate completely between adverse neural tension and strain in muscles, fascia, and other soft tissues, we use the more general term "neuromuscular strain." In our clinical work, we have found that neuromuscular restrictions are common in CFS, and that many symptoms of CFS can be reproduced by selectively adding neuromuscular strain during the

  1. THE FRACTAL MARKET HYPOTHESIS

    Directory of Open Access Journals (Sweden)

    FELICIA RAMONA BIRAU

    2012-05-01

    Full Text Available In this article, the concept of capital market is analysed using the Fractal Market Hypothesis, which is a modern, complex and unconventional alternative to classical finance methods. The Fractal Market Hypothesis is in sharp opposition to the Efficient Market Hypothesis and explores the application of chaos theory and fractal geometry to finance. The Fractal Market Hypothesis rests on certain assumptions. Thus, it is emphasized that investors do not react immediately to the information they receive and, of course, the manner in which they interpret that information may differ. Also, the Fractal Market Hypothesis refers to the way that liquidity and investment horizons influence the behaviour of financial investors.

  2. Comparison between Input Hypothesis and Interaction Hypothesis

    Institute of Scientific and Technical Information of China (English)

    李佳

    2012-01-01

    Krashen’s Input Hypothesis and Long’s Interaction Hypothesis are both valuable research results in the field of language acquisition and play a significant role in language teaching and learning instruction. Comparing them, their similarities lie in a shared goal and basis, the same focus on comprehension and the same challenge to traditional teaching concepts, while their differences lie in the ways exposure is made comprehensible and in the roles that learners play. The comparison is meaningful because its results can provide valuable guidance and highlights for language teachers and learners to teach or acquire a new language more efficiently.

  3. The detection of observations possibly influential for model selection

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1991-01-01

    Model selection can involve several variables and selection criteria. A simple method to detect observations possibly influential for model selection is proposed. The potential of this method is illustrated with three examples, each of which is taken from related studies.

  4. FROM EFFICIENT MARKET HYPOTHESIS TO BEHAVIOURAL FINANCE: CAN BEHAVIOURAL FINANCE BE THE NEW DOMINANT MODEL FOR INVESTING?

    Directory of Open Access Journals (Sweden)

    George BOROVAS

    2012-12-01

    Full Text Available The present paper reviews two fundamental investing paradigms which have had a substantial impact on the manner in which investors tend to develop their own strategies. Specifically, the study elaborates on the efficient market hypothesis (EMH), which, despite remaining most prominent and popular until the 1990s, is considered rather controversial and often disputed, and the theory of behavioural finance, which has increasingly been implemented in financial institutions. Based on an extensive survey of behavioural finance and EMH literature, the study demonstrates, despite any assertions, the inherent irrationality of the theory of efficient markets, and discusses the potential reasons for its recent decline, arguing in favor of its replacement or co-existence with behavioural finance. In addition, the study highlights that the theory of behavioural finance, which endorses human behavioral and psychological attitudes, should become the theoretical framework for successful and profitable investing.

  5. Selective experimental review of the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Bloom, E.D.

    1985-02-01

    Before discussing experimental comparisons with the Standard Model (S-M), it is probably wise to define more completely what is commonly meant by this popular term. This model is a gauge theory of $SU(3)_c \times SU(2)_L \times U(1)$ with 18 parameters. The parameters are $\alpha_s$, $\alpha_{qed}$, $\theta_W$, $M_W$ ($M_Z = M_W/\cos\theta_W$, and thus is not an independent parameter), $M_{Higgs}$; the lepton masses $M_e$, $M_\mu$, $M_\tau$; the quark masses $M_d$, $M_s$, $M_b$, and $M_u$, $M_c$, $M_t$; and finally, the quark mixing angles $\theta_1$, $\theta_2$, $\theta_3$, and the CP-violating phase $\delta$. The latter four parameters appear in the quark mixing matrix for the Kobayashi-Maskawa and Maiani forms. Clearly, the present S-M covers an enormous range of physics topics, and the author can only lightly cover a few such topics in this report. The measurement of $R_{hadron}$ is fundamental as a test of the running coupling constant $\alpha_s$ in QCD. The author will discuss a selection of recent precision measurements of $R_{hadron}$, as well as some other techniques for measuring $\alpha_s$. QCD also requires the self-interaction of gluons. The search for the three-gluon vertex may be practically realized in the clear identification of gluonic mesons. The author will present a limited review of recent progress in the attempt to untangle such mesons from the plethora of $q\bar{q}$ states of the same quantum numbers which exist in the same mass range. The electroweak interactions provide some of the strongest evidence supporting the S-M that exists. Given the recent progress in this subfield, and particularly with the discovery of the W and Z bosons at CERN, many recent reviews obviate the need for further discussion in this report. In attempting to validate a theory, one frequently searches for new phenomena which would clearly invalidate it. 49 references, 28 figures.

  6. Unscaled Bayes factors for multiple hypothesis testing in microarray experiments.

    Science.gov (United States)

    Bertolino, Francesco; Cabras, Stefano; Castellanos, Maria Eugenia; Racugno, Walter

    2015-12-01

    Multiple hypothesis testing collects a series of techniques usually based on p-values as a summary of the available evidence from many statistical tests. In hypothesis testing, under a Bayesian perspective, the evidence for a specified hypothesis against an alternative, conditionally on data, is given by the Bayes factor. In this study, we approach multiple hypothesis testing based on both Bayes factors and p-values, regarding multiple hypothesis testing as a multiple model selection problem. To obtain the Bayes factors we assume default priors that are typically improper. In this case, the Bayes factor is usually undetermined due to the ratio of prior pseudo-constants. We show that ignoring prior pseudo-constants leads to unscaled Bayes factors, which do not invalidate the inferential procedure in multiple hypothesis testing because they are used within a comparative scheme. In fact, using partial information from the p-values, we are able to approximate the sampling null distribution of the unscaled Bayes factor and use it within Efron's multiple testing procedure. The simulation study suggests that under a normal sampling model, and even with small sample sizes, our approach provides false positive and false negative proportions that are lower than those of other common multiple hypothesis testing approaches based only on p-values. The proposed procedure is illustrated in two simulation studies, and the advantages of its use are shown in the analysis of two microarray experiments.

  7. An integrated model for supplier selection process

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In today's highly competitive manufacturing environment, the supplier selection process becomes one of the crucial activities in supply chain management. In order to select the best supplier(s) it is necessary not only to continuously track and benchmark the performance of suppliers but also to make a tradeoff between tangible and intangible factors, some of which may conflict. In this paper an integration of case-based reasoning (CBR), analytical network process (ANP) and linear programming (LP) is proposed to solve the supplier selection problem.
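
    As a rough illustration of the linear-programming stage mentioned in the record above, the following Python sketch allocates orders among three suppliers at minimum cost. The costs, capacities and demand figure are invented for the example and are not taken from the cited paper, which combines LP with CBR and ANP scores.

        # Illustrative LP stage of a supplier selection problem (toy numbers).
        from scipy.optimize import linprog

        unit_cost = [4.0, 5.5, 5.0]        # cost per unit from suppliers A, B, C
        capacity  = [400, 300, 500]        # maximum units each supplier can deliver
        demand    = 700                    # total units required

        # Minimize total cost subject to meeting demand and respecting capacities.
        res = linprog(
            c=unit_cost,
            A_ub=[[-1, -1, -1]], b_ub=[-demand],   # x1 + x2 + x3 >= demand
            bounds=[(0, cap) for cap in capacity],
            method="highs",
        )
        print("order quantities:", res.x, "total cost:", res.fun)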

  8. Dealing with selection bias in educational transition models

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier

    2011-01-01

    This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing for unobserved variables which affect the probability of making educational...... transitions to be correlated across transitions. We use simulated and real data to illustrate how the BPSM improves on the traditional Mare model in terms of correcting for selection bias and providing credible estimates of the effect of family background on educational success. We conclude that models which...... account for selection on unobserved variables and high-quality data are both required in order to estimate credible educational transition models....

  9. A Model for Selection of Eyespots on Butterfly Wings.

    Directory of Open Access Journals (Sweden)

    Toshio Sekimura

    Full Text Available The development of eyespots on the wing surface of butterflies of the family Nymphalidae is one of the most studied examples of biological pattern formation. However, little is known about the mechanism that determines the number and precise locations of eyespots on the wing. Eyespots develop around signaling centers, called foci, that are located equidistant from wing veins along the midline of a wing cell (an area bounded by veins). A fundamental question that remains unsolved is why a certain wing cell develops an eyespot, while other wing cells do not. We illustrate that the key to understanding focus point selection may be in the venation system of the wing disc. Our main hypothesis is that changes in morphogen concentration along the proximal boundary veins of wing cells govern focus point selection. Based on previous studies, we focus on a spatially two-dimensional reaction-diffusion system model posed in the interior of each wing cell that describes the formation of focus points. Using finite element based numerical simulations, we demonstrate that variation in the proximal boundary condition is sufficient to robustly select whether an eyespot focus point forms in otherwise identical wing cells. We also illustrate that this behavior is robust to small perturbations in the parameters and geometry and moderate levels of noise. Hence, we suggest that an anterior-posterior pattern of morphogen concentration along the proximal vein may be the main determinant of the distribution of focus points on the wing surface. In order to complete our model, we propose a two-stage reaction-diffusion system model, in which a one-dimensional surface reaction-diffusion system, posed on the proximal vein, generates the morphogen concentrations that act as non-homogeneous Dirichlet (i.e., fixed) boundary conditions for the two-dimensional reaction-diffusion model posed in the wing cells. The two-stage model appears capable of generating focus point distributions

  10. The Over-Pruning Hypothesis of Autism

    Science.gov (United States)

    Thomas, Michael S. C.; Davis, Rachael; Karmiloff-Smith, Annette; Knowland, Victoria C. P.; Charman, Tony

    2016-01-01

    This article outlines the "over-pruning hypothesis" of autism. The hypothesis originates in a neurocomputational model of the regressive sub-type (Thomas, Knowland & Karmiloff-Smith, 2011a, 2011b). Here we develop a more general version of the over-pruning hypothesis to address heterogeneity in the timing of manifestation of ASD,…

  11. Model for personal computer system selection.

    Science.gov (United States)

    Blide, L

    1987-12-01

    Successful computer software and hardware selection is best accomplished by following an organized approach such as the one described in this article. The first step is to decide what you want to be able to do with the computer. Secondly, select software that is user friendly, well documented, bug free, and that does what you want done. Next, you select the computer, printer and other needed equipment from the group of machines on which the software will run. Key factors here are reliability and compatibility with other microcomputers in your facility. Lastly, you select a reliable vendor who will provide good, dependable service in a reasonable time. The ability to correctly select computer software and hardware is a key skill needed by medical record professionals today and in the future. Professionals can make quality computer decisions by selecting software and systems that are compatible with other computers in their facility and that allow for future networking, ease of use, and adaptability for expansion as new applications are identified. The key to success is to not only provide for your present needs, but to be prepared for future rapid expansion and change in your computer usage as technology and your skills grow.

  12. RANDOM WALK HYPOTHESIS IN FINANCIAL MARKETS

    Directory of Open Access Journals (Sweden)

    Nicolae-Marius JULA

    2017-05-01

    Full Text Available The random walk hypothesis states that stock market prices do not follow a predictable trajectory, but are simply random. If one is trying to predict a random set of data, one should first test for randomness, because, despite the power and complexity of the models used, the results cannot otherwise be trusted. There are several methods for testing this hypothesis, and the computational power provided by the R environment makes the work of the researcher easier and more cost-effective. The increasing power of computing and the continuous development of econometric tests should give potential investors new tools for selecting commodities and investing in efficient markets.
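
    The record refers to randomness tests run in R; as a language-neutral illustration, the following Python sketch applies one standard unit-root check (the augmented Dickey-Fuller test from statsmodels) to a simulated price series. The data and the choice of test are assumptions made for this example, not part of the record.

        # Minimal sketch: testing a price series for a unit root (random walk)
        # with the augmented Dickey-Fuller test; synthetic data, illustrative only.
        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(0)
        prices = 100 + np.cumsum(rng.normal(0, 1, 1000))   # simulated random walk

        adf_stat, p_value, *_ = adfuller(prices)
        print(f"ADF statistic = {adf_stat:.3f}, p-value = {p_value:.3f}")
        # A large p-value means the unit-root (random walk) hypothesis is not rejected.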

  13. The Riemann Hypothesis

    OpenAIRE

    2007-01-01

    The Riemann Hypothesis is a conjecture made in 1859 by the great mathematician Riemann that all the complex zeros of the zeta function $\zeta(s)$ lie on the `critical line' $\Re(s) = 1/2$. Our analysis shows that the assumption of the truth of the Riemann Hypothesis leads to a contradiction. We are therefore led to the conclusion that the Riemann Hypothesis is not true.

  14. Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update

    NARCIS (Netherlands)

    Lubke, Gitta H.; Campbell, Ian; McArtor, Dan; Miller, Patrick; Luningham, Justin; van den Berg, Stéphanie Martine

    2017-01-01

    Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the
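
    The abstract above is truncated, but the general bootstrap idea it describes can be sketched as follows: refit competing models on resampled data and tabulate how often each one is selected, which gives a rough measure of model selection uncertainty. The sketch below uses synthetic data, AIC and two polynomial models purely for illustration; it is not the cited authors' procedure.

        # Illustrative bootstrap assessment of model selection uncertainty:
        # how often does AIC pick a linear vs. quadratic fit across resamples?
        import numpy as np

        rng = np.random.default_rng(1)
        n = 60
        x = rng.uniform(-2, 2, n)
        y = 1.0 + 0.5 * x + 0.2 * x**2 + rng.normal(0, 1, n)

        def aic(yobs, yhat, k):
            rss = np.sum((yobs - yhat) ** 2)
            return n * np.log(rss / n) + 2 * k

        def select(xb, yb):
            fits = {deg: np.polyval(np.polyfit(xb, yb, deg), xb) for deg in (1, 2)}
            return min(fits, key=lambda d: aic(yb, fits[d], d + 1))

        picks = []
        for _ in range(500):
            idx = rng.integers(0, n, n)          # resample with replacement
            picks.append(select(x[idx], y[idx]))
        print("selection proportions:", {d: picks.count(d) / len(picks) for d in (1, 2)})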

  15. Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update

    NARCIS (Netherlands)

    Lubke, Gitta H.; Campbell, Ian; McArtor, Dan; Miller, Patrick; Luningham, Justin; Berg, van den Stephanie M.

    2016-01-01

    Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the

  16. Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update

    NARCIS (Netherlands)

    Lubke, Gitta H.; Campbell, Ian; McArtor, Dan; Miller, Patrick; Luningham, Justin; Berg, van den Stephanie M.

    2017-01-01

    Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the

  17. Neurodevelopmental hypothesis of schizophrenia

    National Research Council Canada - National Science Library

    Owen, Michael J; O'Donovan, Michael C; Thapar, Anita; Craddock, Nicholas

    2011-01-01

    The neurodevelopmental hypothesis of schizophrenia provided a valuable framework that allowed a condition that usually presents with frank disorder in adolescence or early adulthood to be understood...

  18. Life Origination Hydrate Hypothesis (LOH-Hypothesis)

    Directory of Open Access Journals (Sweden)

    Victor Ostrovskii

    2012-01-01

    Full Text Available The paper develops the Life Origination Hydrate Hypothesis (LOH-hypothesis), according to which living-matter simplest elements (LMSEs), which are N-bases, riboses, nucleosides, nucleotides, DNA- and RNA-like molecules, amino-acids, and proto-cells, repeatedly originated on the basis of thermodynamically controlled, natural, and inevitable processes governed by universal physical and chemical laws from CH4, niters, and phosphates under the Earth's surface or seabed within the crystal cavities of the honeycomb methane-hydrate structure at low temperatures; the chemical processes passed slowly through all successive chemical steps in the direction that is determined by a gradual decrease in the Gibbs free energy of reacting systems. The hypothesis formulation method is based on the thermodynamic directedness of natural movement and consists of an attempt to mentally backtrack on the progression of nature and thus reveal principal milestones along its route. The changes in Gibbs free energy are estimated for different steps of the living-matter origination process; special attention is paid to the processes of proto-cell formation. Just the occurrence of the gas-hydrate periodic honeycomb matrix filled with LMSEs almost completely in its final state accounts for size limitation in the DNA functional groups and the nonrandom location of N-bases in the DNA chains. The slowness of the low-temperature chemical transformations and their “thermodynamic front” guide the gross process of living matter origination and its successive steps. It is shown that the hypothesis is thermodynamically justified and testable and that many observed natural phenomena count in its favor.

  19. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important...... consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy....
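
    A minimal illustration of the parsimony argument, assuming synthetic data and the statsmodels ARIMA implementation (not the authors' Internet-server example): compare a mixed ARMA(1,1) fit against progressively larger pure AR fits by AIC.

        # Sketch: comparing candidate ARMA orders by AIC and favouring the most
        # parsimonious adequate model (synthetic series; illustrative only).
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(2)
        e = rng.normal(size=300)
        y = np.zeros(300)
        for t in range(1, 300):                      # ARMA(1,1) data
            y[t] = 0.6 * y[t - 1] + e[t] + 0.3 * e[t - 1]

        for order in [(1, 0, 0), (2, 0, 0), (3, 0, 0), (1, 0, 1)]:
            fit = ARIMA(y, order=order).fit()
            print(order, "AIC =", round(fit.aic, 1))
        # A mixed ARMA(1,1) typically matches the fit of higher-order pure AR
        # models with fewer parameters, illustrating the parsimony argument.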

  20. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important...... consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy....

  1. A lifespan observation of a novel mouse model: in vivo evidence supports Aβ oligomer hypothesis.

    Directory of Open Access Journals (Sweden)

    Yichi Zhang

    Full Text Available Transgenic mouse models are powerful tools in exploring the mechanisms of AD. Most current transgenic models of AD mimic the memory impairment and the main pathologic features, among which the formation of beta-amyloid (Aβ) plaques is considered a dominant pathologic event. Recently, Aβ oligomers have been identified as more neurotoxic than Aβ plaques. However, no ideal transgenic mouse model directly supports Aβ oligomers as a neurotoxic species, due to the confounding effects of amyloid plaques in the more widely used models. Here, we constructed a single-mutant transgenic (Tg) model harboring the PS1V97L mutation and used non-Tg littermates as a control group. Employing the Morris water maze, electrophysiology, immunohistochemistry, biochemistry, and electron microscopy, we investigated behavioral changes and pathology progression in our single-mutant transgenic model. We discovered the pathological alteration of intraneuronal accumulation of Aβ oligomers without Aβ plaques in the PS1V97L-Tg mouse model, which might be the result of the PS1 gene mutation. Following Aβ oligomers, we detected synaptic alteration, tau hyperphosphorylation and glial activation. This model supports an initial role for Aβ oligomers in the onset of AD and suggests that Aβ plaques may not be the only prerequisite. This model provides a useful tool for studying the role of Aβ oligomers in AD pathogenesis.

  2. Cardinality constrained portfolio selection via factor models

    OpenAIRE

    Monge, Juan Francisco

    2017-01-01

    In this paper we propose and discuss different 0-1 linear models in order to solve the cardinality constrained portfolio problem by using factor models. Factor models are used to build portfolios that track indexes, among other objectives, and they also require a smaller number of parameters to estimate than the classical Markowitz model. The addition of the cardinality constraints limits the number of securities in the portfolio. Restricting the number of securities in the portfolio allows us to o...

  3. Evidence accumulation as a model for lexical selection

    NARCIS (Netherlands)

    Anders, R.; Riès, S.; van Maanen, L.; Alario, F.-X.

    2015-01-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process related to selecting a lexical target from a number of

  4. The Optimal Selection for Restricted Linear Models with Average Estimator

    Directory of Open Access Journals (Sweden)

    Qichang Xie

    2014-01-01

    Full Text Available The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has comparable efficiency to some alternative model selection techniques.
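
    As a loose, simplified analogue of weighting restricted models by a squared-error criterion (not the paper's weighted average least squares or k-GIC), the following Python sketch averages an unrestricted and a restricted OLS fit with the weight chosen on a grid by a Mallows-type criterion; the data are synthetic.

        # Loose sketch of frequentist model averaging over restricted OLS fits.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 200
        X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
        beta = np.array([1.0, 2.0, 0.1])             # third coefficient is weak
        y = X @ beta + rng.normal(size=n)

        def fit(Xm):
            b = np.linalg.lstsq(Xm, y, rcond=None)[0]
            return Xm @ b, Xm.shape[1]

        pred_full, k_full = fit(X)                   # unrestricted model
        pred_res, k_res = fit(X[:, :2])              # restricted model (drops weak term)
        sigma2 = np.sum((y - pred_full) ** 2) / (n - k_full)

        best_w, best_crit = None, np.inf
        for w in np.linspace(0, 1, 101):             # weight on the restricted model
            pred = w * pred_res + (1 - w) * pred_full
            k_eff = w * k_res + (1 - w) * k_full
            crit = np.sum((y - pred) ** 2) + 2 * sigma2 * k_eff   # Mallows-type criterion
            if crit < best_crit:
                best_w, best_crit = w, crit
        print("selected weight on restricted model:", best_w)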

  5. Homer 1a and mGluR5 phosphorylation in reward-sensitive metaplasticity: A hypothesis of neuronal selection and bidirectional synaptic plasticity.

    Science.gov (United States)

    Marton, Tanya M; Hussain Shuler, Marshall G; Worley, Paul F

    2015-12-02

    Drug addiction and reward learning both involve mechanisms in which reinforcing neuromodulators participate in changing synaptic strength. For example, dopamine receptor activation modulates corticostriatal plasticity through a mechanism involving the induction of the immediate early gene Homer 1a, the phosphorylation of metabotropic glutamate receptor 5 (mGluR5)'s Homer ligand, and the enhancement of an NMDA receptor-dependent current. Inspired by hypotheses that Homer 1a functions selectively in recently-active synapses, we propose that Homer 1a is recruited by a synaptic tag to functionally discriminate between synapses that predict reward and those that do not. The involvement of Homer 1a in this mechanism further suggests that decaminutes-old firing patterns can define which synapses encode new information. Copyright © 2015. Published by Elsevier B.V.

  6. Selection of Temporal Lags When Modeling Economic and Financial Processes.

    Science.gov (United States)

    Matilla-Garcia, Mariano; Ojeda, Rina B; Marin, Manuel Ruiz

    2016-10-01

    This paper suggests new nonparametric statistical tools and procedures for modeling linear and nonlinear univariate economic and financial processes. In particular, the tools presented help in selecting relevant lags in the model description of a general linear or nonlinear time series; that is, nonlinear models are not a restriction. The tests seem to be robust to the selection of free parameters. We also show that the test can be used as a diagnostic tool for well-defined models.

  7. Can the Gateway Hypothesis, the Common Liability Model and/or, the Route of Administration Model Predict Initiation of Cannabis Use During Adolescence? A Survival Analysis-The TRAILS Study

    NARCIS (Netherlands)

    van Leeuwen, Andrea Prince; Verhulst, Frank C.; Reijneveld, Sijmen A.; Vollebergh, Wilma A. M.; Ormel, Johan; Huizink, Anja C.

    2011-01-01

    Purpose: There is substantial research linking tobacco and alcohol use to subsequent cannabis use, yet the specificity of this relationship is still under debate. The aim of this study was to examine which substance use model-the gateway hypothesis, the common liability (CL) model and/or the route o

  8. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, $x_t$, within the larger set of m+k candidate variables, $(x_t, w_t)$, then selection over the second...

  9. Frequency-dependent selection by wild birds promotes polymorphism in model salamanders

    Directory of Open Access Journals (Sweden)

    Shook Kim

    2009-05-01

    Full Text Available Background: Co-occurrence of distinct colour forms is a classic paradox in evolutionary ecology because both selection and drift tend to remove variation from populations. Apostatic selection, the primary hypothesis for maintenance of colour polymorphism in cryptic animals, proposes that visual predators focus on common forms of prey, resulting in higher survival of rare forms. Empirical tests of this frequency-dependent foraging hypothesis are rare, and the link between predator behaviour and maintenance of variation in prey has been difficult to confirm. Here, we show that predatory birds can act as agents of frequency-dependent selection on terrestrial salamanders. Polymorphism for presence/absence of a dorsal stripe is widespread in many salamander species and its maintenance is a long-standing mystery. Results: We used realistic food-bearing model salamanders to test whether selection by wild birds maintains a stripe/no-stripe polymorphism. In experimental manipulations, whichever form was most common was most likely to be attacked by ground-foraging birds, resulting in a survival advantage for the rare form. Conclusion: This experiment demonstrates that frequency-dependent foraging by wild birds can maintain colour polymorphism in cryptic prey.
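
    The frequency-dependent mechanism described in the record can be caricatured in a few lines of Python: if the commoner morph is attacked disproportionately often, its frequency is pushed back toward intermediate values. The survival function and parameter values below are invented for illustration and are not estimates from the field experiment.

        # Minimal simulation of apostatic (negative frequency-dependent) selection:
        # predators preferentially attack the more common morph, so the rare morph
        # gains a survival advantage and the polymorphism persists.
        p = 0.9                      # initial frequency of the striped morph
        strength = 0.3               # how strongly predation tracks morph frequency

        for gen in range(60):
            surv_striped = 1.0 - strength * p            # common form suffers more attacks
            surv_plain = 1.0 - strength * (1.0 - p)
            p = p * surv_striped / (p * surv_striped + (1 - p) * surv_plain)
        print("striped-morph frequency after 60 generations:", round(p, 3))
        # Starting from either extreme, the frequency is pushed toward 0.5.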

  10. Comparison between Input Hypothesis and Interaction Hypothesis

    Institute of Scientific and Technical Information of China (English)

    宗琦

    2016-01-01

    Second Language Acquisition has received more and more attention since the 1950s, when it became an autonomous field of research. Linguists have carried out many theoretical and empirical studies with the clear purpose of promoting second language acquisition. Krashen’s Input Hypothesis and Long’s Interaction Hypothesis are among the most influential of these studies. They both play important roles in language teaching and learning. The paper presents an account of the two theories, including their main claims and theoretical foundations as well as some related empirical works, and investigates the commonalities and differences between them, based on the literature and empirical studies. The purpose of writing this paper is to provide a clear outline of the two theories and point out how they make interrelated yet separate predictions about how a second language is learned. It is meaningful because the results can provide valuable guidance and highlights for language teachers and learners to teach or acquire a language better.

  11. Astrophysical Model Selection in Gravitational Wave Astronomy

    Science.gov (United States)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.

  12. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
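
    A minimal numerical companion to the FIR/impulse-input conclusion, under the simplifying assumptions of a known channel length and no noise (so no worst-case analysis is involved): with an impulse applied at the start of the observation interval, the output of an FIR channel directly reveals its taps.

        # Sketch: identifying a communication channel modelled as an FIR filter
        # from an impulse input. Synthetic channel taps; illustrative only.
        import numpy as np

        h_true = np.array([0.8, 0.5, -0.3, 0.1])     # unknown FIR channel taps
        N = 16                                        # observation interval length

        u = np.zeros(N); u[0] = 1.0                  # impulse at the start of the interval
        y = np.convolve(u, h_true)[:N]               # noiseless channel output

        # With an impulse input, the output directly reveals the impulse response.
        h_est = y[:len(h_true)]
        print("estimated taps:", h_est)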

  13. Alzheimer’s disease: the Amyloid hypothesis and the Inverse Warburg effect

    Directory of Open Access Journals (Sweden)

    Lloyd eDemetrius

    2015-01-01

    Full Text Available Epidemiological and biochemical studies show that the sporadic forms of Alzheimer’s disease (AD) are characterized by the following hallmarks: (a) an exponential increase with age; (b) selective neuronal vulnerability; (c) inverse cancer comorbidity. The present article appeals to these hallmarks to evaluate and contrast two competing models of AD: the amyloid hypothesis (a neuron-centric mechanism) and the Inverse Warburg hypothesis (a neuron-astrocytic mechanism). We show that these three hallmarks of AD conflict with the amyloid hypothesis, but are consistent with the Inverse Warburg hypothesis, a bioenergetic model which postulates that AD is the result of a cascade of three events: mitochondrial dysregulation, metabolic reprogramming (the Inverse Warburg effect), and natural selection. We also provide an explanation for the failures of the clinical trials based on amyloid immunization, and we propose a new class of therapeutic strategies consistent with the neuroenergetic selection model.

  14. Model and Variable Selection Procedures for Semiparametric Time Series Regression

    Directory of Open Access Journals (Sweden)

    Risa Kato

    2009-01-01

    Full Text Available Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise from time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.
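
    The penalized-least-squares idea can be illustrated with a purely parametric stand-in: an L1 (lasso) penalty that sets irrelevant coefficients exactly to zero, so variable selection and estimation happen in one step. The sketch below uses scikit-learn and synthetic data; the cited paper's semiparametric estimator and basis-function selection are not reproduced here.

        # Sketch of penalized least squares for simultaneous variable selection
        # and estimation, using an L1 (lasso) penalty on synthetic data.
        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(5)
        n, p = 200, 10
        X = rng.normal(size=(n, p))
        beta = np.zeros(p); beta[:3] = [1.5, -2.0, 1.0]      # only 3 relevant variables
        y = X @ beta + rng.normal(size=n)

        model = LassoCV(cv=5).fit(X, y)
        selected = np.flatnonzero(model.coef_ != 0)
        print("selected variables:", selected, "penalty:", round(model.alpha_, 4))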

  15. A new hypothesis and exploratory model for the formation of large-scale inner-shelf sediment sorting and ``rippled scour depressions''

    Science.gov (United States)

    Murray, A. Brad; Thieler, E. Robert

    2004-02-01

    Recent observations of inner continental shelves in many regions show numerous collections of relatively coarse sediment, which extend kilometers in the cross-shore direction and are on the order of 100 m wide. These "rippled scour depressions" have been interpreted to indicate concentrated cross-shelf currents. However, recent observations strongly suggest that they are associated with sediment transport along-shore rather than cross-shore. A new hypothesis for the origin of these features involves the large wave-generated ripples that form in the coarse material. Wave motions interacting with these large roughness elements generate near-bed turbulence that is greatly enhanced relative to that in other areas. This enhances entrainment and inhibits settling of fine material in an area dominated by coarse sediment. The fine sediment is then carried by mean currents past the coarse accumulations, and deposited where the bed is finer. We hypothesize that these interactions constitute a feedback tending to produce accumulations of fine material separated by self-perpetuating patches of coarse sediments. As with many types of self-organized bedforms, small features would interact as they migrate, leading to a better-organized, larger-scale pattern. As an initial test of this hypothesis, we use a numerical model treating the transport of coarse and fine sediment fractions, treated as functions of the local bed composition—a proxy for the presence of large roughness elements in coarse areas. Large-scale sorted patterns exhibiting the main characteristics of the natural features result robustly in the model, indicating that this new hypothesis offers a plausible explanation for the phenomena.

  16. Artificial Neural Networks approach to pharmacokinetic model selection in DCE-MRI studies.

    Science.gov (United States)

    Mohammadian-Behbahani, Mohammad-Reza; Kamali-Asl, Ali-Reza

    2016-12-01

    In pharmacokinetic analysis of Dynamic Contrast Enhanced MRI data, a descriptive physiological model should be selected properly out of a set of candidate models. Classical techniques suggested for this purpose suffer from issues like computation time and general fitting problems. This article proposes an approach based on Artificial Neural Networks (ANNs) for solving these problems. A set of three physiologically and mathematically nested models generated from the Tofts model were assumed: Model I, II and III. These models cover three possible tissue types from normal to malignant. Using 21 experimental arterial input functions and 12 levels of noise, a set of 27,216 time traces were generated. ANN was validated and optimized by the k-fold cross validation technique. An experimental dataset of 20 patients with glioblastoma was applied to ANN and the results were compared to outputs of F-test using Dice index. Optimum neuronal architecture ([6:7:1]) and number of training epochs (50) of the ANN were determined. ANN correctly classified more than 99% of the dataset. Confusion matrices for both ANN and F-test results showed the superior performance of the ANN classifier. The average Dice index (over 20 patients) indicated a 75% similarity between model selection maps of ANN and F-test. ANN improves the model selection process by removing the need for time-consuming, problematic fitting algorithms; as well as the need for hypothesis testing. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
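
    A rough sketch of the classification idea, assuming synthetic enhancement curves and the scikit-learn MLPClassifier rather than the authors' data and software: a small network with six inputs and a single hidden layer of seven units (echoing the [6:7:1] architecture) assigns each noisy curve to one of three candidate models.

        # Rough sketch: a small MLP assigning noisy synthetic uptake curves to one
        # of three candidate models; curves and rates are placeholders, not Tofts fits.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(6)
        t = np.linspace(0, 1, 6)                      # 6 input features, as in [6:7:1]

        def curve(model_id):
            rates = {0: 0.5, 1: 2.0, 2: 5.0}          # three synthetic "tissue types"
            return 1 - np.exp(-rates[model_id] * t) + rng.normal(0, 0.05, t.size)

        X = np.array([curve(k) for k in range(3) for _ in range(600)])
        y = np.repeat([0, 1, 2], 600)

        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(7,), max_iter=2000).fit(Xtr, ytr)
        print("held-out accuracy:", round(clf.score(Xte, yte), 3))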

  17. Model selection approach suggests causal association between 25-hydroxyvitamin D and colorectal cancer.

    Directory of Open Access Journals (Sweden)

    Lina Zgaga

    Full Text Available Vitamin D deficiency has been associated with increased risk of colorectal cancer (CRC), but a causal relationship has not yet been confirmed. We investigate the direction of causation between vitamin D and CRC by extending the conventional approaches to allow pleiotropic relationships and by explicitly modelling unmeasured confounders. Plasma 25-hydroxyvitamin D (25-OHD), genetic variants associated with 25-OHD and CRC, and other relevant information was available for 2645 individuals (1057 CRC cases and 1588 controls) and included in the model. We investigate whether 25-OHD is likely to be causally associated with CRC, or vice versa, by selecting the best modelling hypothesis according to Bayesian predictive scores. We examine consistency for a range of prior assumptions. Model comparison showed preference for the causal association between low 25-OHD and CRC over the reverse causal hypothesis. This was confirmed for posterior mean deviances obtained for both models (11.5 natural log units in favour of the causal model), and also for deviance information criteria (DIC) computed for a range of prior distributions. Overall, models ignoring hidden confounding or pleiotropy had significantly poorer DIC scores. Results suggest a causal association between 25-OHD and colorectal cancer, and support the need for randomised clinical trials for further confirmation.

  18. The topology of large-scale structure. I - Topology and the random phase hypothesis. [galactic formation models

    Science.gov (United States)

    Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.

    1987-01-01

    Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.

  19. Using Dark Matter as a Guide to extend Standard Model: Dirac Similarity Principle and Minimum Higgs Hypothesis

    CERN Document Server

    Hwang, W-Y Pauchy

    2011-01-01

    We introduce the "Dirac similarity principle" that states that only those point-like Dirac particles which can interact with the Dirac electron can be observed, such as in the Standard Model. We emphasize that the existing world of the Standard Model is a Dirac world satisfying the Dirac similarity principle and believe that the immediate extension of the Standard Model will remain to be so. On the other hand, we are looking for Higgs particles for the last forty years but something is yet to be found. This leads naturally to the "minimum Higgs hypotheses". Now we know firmly that neutrinos have tiny masses, but in the minimal Standard Model there is no natural sources for such tiny masses. If nothing else, this could be taken as the clue as the signature of the existence of the extra heavy $Z^{\\prime 0}$ since it requires the extra Higgs field, that would help in generating the neutrino tiny masses. Alternatively, we may have missed the right-hand sector for some reason. A simplified version of the left-righ...

  20. Using multilevel models to quantify heterogeneity in resource selection

    Science.gov (United States)

    Wagner, T.; Diefenbach, D.R.; Christensen, S.A.; Norton, A.S.

    2011-01-01

    Models of resource selection are being used increasingly to predict or model the effects of management actions rather than simply quantifying habitat selection. Multilevel, or hierarchical, models are an increasingly popular method to analyze animal resource selection because they impose a relatively weak stochastic constraint to model heterogeneity in habitat use and also account for unequal sample sizes among individuals. However, few studies have used multilevel models to model coefficients as a function of predictors that may influence habitat use at different scales or quantify differences in resource selection among groups. We used an example with white-tailed deer (Odocoileus virginianus) to illustrate how to model resource use as a function of distance to road that varies among deer by road density at the home range scale. We found that deer avoidance of roads decreased as road density increased. Also, we used multilevel models with sika deer (Cervus nippon) and white-tailed deer to examine whether resource selection differed between species. We failed to detect differences in resource use between these two species and showed how information-theoretic and graphical measures can be used to assess how resource use may have differed. Multilevel models can improve our understanding of how resource selection varies among individuals and provide an objective, quantifiable approach to assess differences or changes in resource selection. © The Wildlife Society, 2011.

  1. Python Program to Select HII Region Models

    Science.gov (United States)

    Miller, Clare; Lamarche, Cody; Vishwas, Amit; Stacey, Gordon J.

    2016-01-01

    HII regions are areas of singly ionized hydrogen formed by the ionizing radiation of upper main sequence stars. The infrared fine-structure line emissions, particularly of oxygen, nitrogen, and neon, can give important information about HII regions, including gas temperature and density, elemental abundances, and the effective temperature of the stars that form them. The processes involved in calculating this information from observational data are complex. Models, such as those provided in Rubin 1984 and those produced by Cloudy (Ferland et al. 2013), enable one to extract physical parameters from observational data. However, the multitude of search parameters can make sifting through models tedious. I digitized Rubin's models and wrote a Python program that is able to take observed line ratios and their uncertainties and find the Rubin or Cloudy model that best matches the observational data. By creating a Python script that is user friendly and able to quickly sort through models with a high level of accuracy, this work increases efficiency and reduces human error in matching HII region models to observational data.
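
    The matching step described in the record can be sketched as a weighted chi-square comparison between observed line ratios and a grid of model predictions; the grid values, ratios and uncertainties below are placeholders, not Rubin or Cloudy outputs.

        # Sketch: pick the model whose predicted fine-structure line ratios best
        # reproduce the observed ratios, weighting by measurement uncertainties.
        import numpy as np

        # hypothetical model grid: each row = (density, T_eff, ratio_1, ratio_2)
        grid = np.array([
            [1e2, 3.5e4, 0.8, 1.4],
            [1e3, 3.5e4, 1.1, 1.0],
            [1e3, 4.0e4, 1.6, 0.7],
        ])
        observed = np.array([1.2, 0.9])
        errors = np.array([0.2, 0.15])

        chi2 = np.sum(((grid[:, 2:] - observed) / errors) ** 2, axis=1)
        best = grid[np.argmin(chi2)]
        print("best model: density =", best[0], "T_eff =", best[1],
              "chi2 =", round(chi2.min(), 2))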

  2. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  3. Beyond the functional matrix hypothesis: a network null model of human skull growth for the formation of bone articulations.

    Science.gov (United States)

    Esteve-Altava, Borja; Rasskin-Gutman, Diego

    2014-09-01

    Craniofacial sutures and synchondroses form the boundaries among bones in the human skull, providing functional, developmental and evolutionary information. Bone articulations in the skull arise due to interactions between genetic regulatory mechanisms and epigenetic factors such as functional matrices (soft tissues and cranial cavities), which mediate bone growth. These matrices are largely acknowledged for their influence on shaping the bones of the skull; however, it is not fully understood to what extent functional matrices mediate the formation of bone articulations. Aiming to identify whether or not functional matrices are key developmental factors guiding the formation of bone articulations, we have built a network null model of the skull that simulates unconstrained bone growth. This null model predicts bone articulations that arise due to a process of bone growth that is uniform in rate, direction and timing. By comparing predicted articulations with the actual bone articulations of the human skull, we have identified which boundaries specifically need the presence of functional matrices for their formation. We show that functional matrices are necessary to connect facial bones, whereas an unconstrained bone growth is sufficient to connect non-facial bones. This finding challenges the role of the brain in the formation of boundaries between bones in the braincase without neglecting its effect on skull shape. Ultimately, our null model suggests where to look for modified developmental mechanisms promoting changes in bone growth patterns that could affect the development and evolution of the head skeleton. © 2014 Anatomical Society.

  4. Neuromusculoskeletal models based on the muscle synergy hypothesis for the investigation of adaptive motor control in locomotion via sensory-motor coordination.

    Science.gov (United States)

    Aoi, Shinya; Funato, Tetsuro

    2016-03-01

    Humans and animals walk adaptively in diverse situations by skillfully manipulating their complicated and redundant musculoskeletal systems. From an analysis of measured electromyographic (EMG) data, it appears that despite complicated spatiotemporal properties, muscle activation patterns can be explained by a low dimensional spatiotemporal structure. More specifically, they can be accounted for by the combination of a small number of basic activation patterns. The basic patterns and distribution weights indicate temporal and spatial structures, respectively, and the weights show the muscle sets that are activated synchronously. In addition, various locomotor behaviors have similar low dimensional structures and major differences appear in the basic patterns. These analysis results suggest that neural systems use muscle group combinations to solve motor control redundancy problems (muscle synergy hypothesis) and manipulate those basic patterns to create various locomotor functions. However, it remains unclear how the neural system controls such muscle groups and basic patterns through neuromechanical interactions in order to achieve adaptive locomotor behavior. This paper reviews simulation studies that explored adaptive motor control in locomotion via sensory-motor coordination using neuromusculoskeletal models based on the muscle synergy hypothesis. Herein, the neural mechanism in motor control related to the muscle synergy for adaptive locomotion and a potential muscle synergy analysis method including neuromusculoskeletal modeling for motor impairments and rehabilitation are discussed. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
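
    The muscle synergy decomposition that underlies this hypothesis is commonly computed with non-negative matrix factorization: an EMG matrix is split into a few basic temporal activation patterns and their muscle weightings. The sketch below uses synthetic "EMG" data and scikit-learn's NMF; it only illustrates the analysis step, not the neuromusculoskeletal models reviewed in the paper.

        # Sketch: extracting muscle synergies (basic patterns + weights) with NMF.
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(7)
        time = np.linspace(0, 1, 100)
        patterns = np.vstack([np.sin(np.pi * time) ** 2,          # two basic patterns
                              np.sin(2 * np.pi * time) ** 2])
        weights = rng.uniform(0, 1, size=(8, 2))                   # 8 muscles, 2 synergies
        emg = weights @ patterns + 0.02 * rng.random((8, 100))     # non-negative "EMG"

        model = NMF(n_components=2, init="nndsvda", max_iter=1000)
        W = model.fit_transform(emg)          # muscle weightings (spatial structure)
        H = model.components_                 # basic activation patterns (temporal structure)
        print("reconstruction error:", round(model.reconstruction_err_, 3))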

  5. Bayesian Model Selection for LISA Pathfinder

    CERN Document Server

    Karnesis, Nikolaos; Sopuerta, Carlos F; Gibert, Ferran; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Ferraioli, Luigi; Hewitson, Martin; Hueller, Mauro; Korsakova, Natalia; Plagnol, Eric; Vitale, Stefano

    2013-01-01

    The main goal of the LISA Pathfinder (LPF) mission is to fully characterize the acceleration noise models and to test key technologies for future space-based gravitational-wave observatories similar to the LISA/eLISA concept. The Data Analysis (DA) team has developed complex three-dimensional models of the LISA Technology Package (LTP) experiment on-board LPF. These models are used for simulations, but more importantly, they will be used for parameter estimation purposes during flight operations. One of the tasks of the DA team is to identify the physical effects that contribute significantly to the properties of the instrument noise. A way of approaching this problem is to recover the essential parameters of the LTP which describe the data. Thus, we want to define the simplest model that efficiently explains the observations. To do so, adopting a Bayesian framework, one has to estimate the so-called Bayes Factor between two competing models. In our analysis, we use three main different methods to estimate...

  6. A mixture model-based strategy for selecting sets of genes in multiclass response microarray experiments.

    Science.gov (United States)

    Broët, Philippe; Lewin, Alex; Richardson, Sylvia; Dalmasso, Cyril; Magdelenat, Henri

    2004-11-01

    Multiclass response (MCR) experiments are those in which there are more than two classes to be compared. In these experiments, though the null hypothesis is simple, there are typically many patterns of gene expression changes across the different classes, which lead to complex alternatives. In this paper, we propose a new strategy for selecting genes in MCR that is based on a flexible mixture model for the marginal distribution of a modified F-statistic. Using this model, false positive and negative discovery rates can be estimated and combined to produce a rule for selecting a subset of genes. Moreover, the proposed method allows calculation of these rates for any predefined subset of genes. We illustrate the performance of our approach using simulated datasets and a real breast cancer microarray dataset. In this latter study, we investigate a predefined subset of genes and point out interesting differences between three distinct biological pathways. http://www.bgx.org.uk/software.html
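
    A loose illustration of the mixture idea, simplified to a two-component Gaussian mixture on synthetic per-gene statistics (the paper models a modified F-statistic and estimates discovery rates, which is not reproduced here): genes are selected by their posterior probability of belonging to the non-null component.

        # Loose sketch: two-component mixture on per-gene statistics; select genes
        # by posterior probability of the non-null component. Synthetic data.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(8)
        null_stats = rng.normal(0, 1, 1800)              # unchanged genes
        alt_stats = rng.normal(3, 1, 200)                # differentially expressed genes
        stats = np.concatenate([null_stats, alt_stats]).reshape(-1, 1)

        gm = GaussianMixture(n_components=2, random_state=0).fit(stats)
        alt_comp = np.argmax(gm.means_.ravel())          # component with the larger mean
        post_alt = gm.predict_proba(stats)[:, alt_comp]
        print("genes selected at posterior > 0.9:", int(np.sum(post_alt > 0.9)))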

  7. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts....... The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties......, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...
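
    In the spirit of the guideline described above (selecting tuning parameters from small grids by cross-validation), the following sketch fits kernel ridge regression with a Gaussian (RBF) kernel to synthetic data and searches a small grid over the penalty and kernel width; the grid values are arbitrary choices for the example.

        # Sketch: cross-validated grid search over kernel ridge tuning parameters.
        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import GridSearchCV

        rng = np.random.default_rng(9)
        X = rng.uniform(-3, 3, size=(200, 1))
        y = np.sinc(X).ravel() + rng.normal(0, 0.1, 200)

        grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0],        # ridge penalty (signal-to-noise)
                "gamma": [0.1, 0.5, 1.0, 2.0]}           # kernel width (smoothness)
        search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5)
        search.fit(X, y)
        print("selected parameters:", search.best_params_)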

  8. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...

  9. Development of SPAWM: selection program for available watershed models.

    Science.gov (United States)

    Cho, Yongdeok; Roesner, Larry A

    2014-01-01

    A selection program for available watershed models (also known as SPAWM) was developed. Thirty-three commonly used watershed models were analyzed in depth and classified according to their attributes. These attributes consist of: (1) land use; (2) event or continuous; (3) time steps; (4) water quality; (5) distributed or lumped; (6) subsurface; (7) overland sediment; and (8) best management practices. Each of these attributes was further classified into sub-attributes. Based on user-selected sub-attributes, the most appropriate watershed model is selected from the library of watershed models. SPAWM is implemented using Excel Visual Basic and is designed for use by novices as well as by experts on watershed modeling. It ensures that the necessary sub-attributes required by the user are captured and made available in the selected watershed model.

  10. Parametric or nonparametric? A parametricness index for model selection

    CERN Document Server

    Liu, Wei; 10.1214/11-AOS899

    2012-01-01

    In the model selection literature, two classes of criteria perform well asymptotically in different situations: the Bayesian information criterion (BIC), as a representative, is consistent in selection when the true model is finite dimensional (the parametric scenario); Akaike's information criterion (AIC) performs well in terms of asymptotic efficiency when the true model is infinite dimensional (the nonparametric scenario). But there is little work that addresses whether it is possible, and how, to detect which situation a specific model selection problem is in. In this work, we differentiate the two scenarios theoretically under some conditions. We develop a measure, the parametricness index (PI), to assess whether a model selected by a potentially consistent procedure can be practically treated as the true model, which also hints at whether AIC or BIC is better suited to the data for the goal of estimating the regression function. A consequence is that by switching between AIC and BIC based on the PI, the resulting regression estimator is si...
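
    As a small worked illustration of the two criteria the parametricness index arbitrates between (this is not the PI construction itself), the sketch below computes AIC and BIC for nested polynomial regressions on simulated data; the data-generating model is an assumption:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100
    x = rng.uniform(-1, 1, n)
    y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)   # true model is linear (parametric scenario)

    def aic_bic(degree):
        X = np.vander(x, degree + 1)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        k = degree + 2                                   # polynomial coefficients + noise variance
        log_lik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
        return 2 * k - 2 * log_lik, k * np.log(n) - 2 * log_lik

    for d in range(1, 6):
        aic, bic = aic_bic(d)
        print(f"degree {d}: AIC={aic:.1f}  BIC={bic:.1f}")
    ```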

  11. Boosting model performance and interpretation by entangling preprocessing selection and variable selection.

    Science.gov (United States)

    Gerretzen, Jan; Szymańska, Ewa; Bart, Jacob; Davies, Antony N; van Manen, Henk-Jan; van den Heuvel, Edwin R; Jansen, Jeroen J; Buydens, Lutgarde M C

    2016-09-28

    The aim of data preprocessing is to remove data artifacts - such as a baseline, scatter effects or noise - and to enhance the contextually relevant information. Many preprocessing methods exist to deliver one or more of these benefits, but it is difficult to select which method or combination of methods should be used for the specific data being analyzed. Recently, we have shown that a preprocessing selection approach based on Design of Experiments (DoE) enables correct selection of highly appropriate preprocessing strategies within reasonable time frames. In that approach, the focus was solely on improving the predictive performance of the chemometric model. This is, however, only one of the two relevant criteria in modeling: interpretation of the model results can be just as important. Variable selection is often used to achieve such interpretation. Data artifacts, however, may hamper proper variable selection by masking the true relevant variables. The choice of preprocessing therefore has a huge impact on the outcome of variable selection methods and may thus hamper an objective interpretation of the final model. To enhance such objective interpretation, we here integrate variable selection into the preprocessing selection approach that is based on DoE. We show that the entanglement of preprocessing selection and variable selection not only improves the interpretation, but also the predictive performance of the model. This is achieved by analyzing several experimental data sets for which the true relevant variables are available as prior knowledge. We show that a selection of variables is provided that complies better with the true informative variables compared to individual optimization of both model aspects. Importantly, the approach presented in this work is generic. Different types of models (e.g. PCR, PLS, …) can be incorporated into it, as well as different variable selection methods and different preprocessing methods, according to the taste and experience of
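
    A schematic sketch of jointly searching over preprocessing and variable selection (a plain grid with cross-validation rather than the paper's DoE-based approach); the preprocessing options, the PLS model, and the simulated data are assumptions for illustration:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.feature_selection import SelectKBest, f_regression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import FunctionTransformer, StandardScaler

    rng = np.random.default_rng(4)
    X = rng.normal(size=(80, 50)) + np.linspace(0, 1, 50)      # 50 variables with an additive "baseline" artifact
    y = X[:, 5] - 2 * X[:, 20] + rng.normal(scale=0.3, size=80)

    preprocessors = {
        "none": FunctionTransformer(),
        "autoscale": StandardScaler(),
        "detrend": FunctionTransformer(lambda A: A - A.mean(axis=1, keepdims=True)),
    }

    # Each candidate pipeline couples a preprocessing step with variable selection and a PLS model.
    for name, prep in preprocessors.items():
        pipe = Pipeline([("prep", prep),
                         ("select", SelectKBest(f_regression, k=10)),
                         ("model", PLSRegression(n_components=3))])
        score = cross_val_score(pipe, X, y, cv=5, scoring="neg_root_mean_squared_error").mean()
        print(f"{name:>10}: CV RMSE = {-score:.3f}")
    ```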

  12. Memory in astrocytes: a hypothesis

    Directory of Open Access Journals (Sweden)

    Caudle Robert M

    2006-01-01

    Full Text Available Abstract Background Recent work has indicated an increasingly complex role for astrocytes in the central nervous system. Astrocytes are now known to exchange information with neurons at synaptic junctions and to alter the information processing capabilities of the neurons. As an extension of this trend a hypothesis was proposed that astrocytes function to store information. To explore this idea the ion channels in biological membranes were compared to models known as cellular automata. These comparisons were made to test the hypothesis that ion channels in the membranes of astrocytes form a dynamic information storage device. Results Two dimensional cellular automata were found to behave similarly to ion channels in a membrane when they function at the boundary between order and chaos. The length of time information is stored in this class of cellular automata is exponentially related to the number of units. Therefore the length of time biological ion channels store information was plotted versus the estimated number of ion channels in the tissue. This analysis indicates that there is an exponential relationship between memory and the number of ion channels. Extrapolation of this relationship to the estimated number of ion channels in the astrocytes of a human brain indicates that memory can be stored in this system for an entire life span. Interestingly, this information is not affixed to any physical structure, but is stored as an organization of the activity of the ion channels. Further analysis of two dimensional cellular automata also demonstrates that these systems have both associative and temporal memory capabilities. Conclusion It is concluded that astrocytes may serve as a dynamic information sink for neurons. The memory in the astrocytes is stored by organizing the activity of ion channels and is not associated with a physical location such as a synapse. In order for this form of memory to be of significant duration it is necessary

  13. Quantile hydrologic model selection and model structure deficiency assessment: 2. Applications

    NARCIS (Netherlands)

    Pande, S.

    2013-01-01

    Quantile hydrologic model selection and structure deficiency assessment is applied in three case studies. The performance of quantile model selection problem is rigorously evaluated using a model structure on the French Broad river basin data set. The case study shows that quantile model selection

  14. Constraint-based model of Shewanella oneidensis MR-1 metabolism: a tool for data analysis and hypothesis generation.

    Directory of Open Access Journals (Sweden)

    Grigoriy E Pinchuk

    2010-06-01

    Full Text Available Shewanellae are gram-negative facultatively anaerobic metal-reducing bacteria commonly found in chemically (i.e., redox) stratified environments. Occupying such niches requires the ability to rapidly acclimate to changes in electron donor/acceptor type and availability; hence, the ability to compete and thrive in such environments must ultimately be reflected in the organization and utilization of electron transfer networks, as well as central and peripheral carbon metabolism. To understand how Shewanella oneidensis MR-1 utilizes its resources, the metabolic network was reconstructed. The resulting network consists of 774 reactions, 783 genes, and 634 unique metabolites and contains biosynthesis pathways for all cell constituents. Using constraint-based modeling, we investigated aerobic growth of S. oneidensis MR-1 on numerous carbon sources. To achieve this, we (i) used experimental data to formulate a biomass equation and estimate cellular ATP requirements, (ii) developed an approach to identify cycles (such as futile cycles and circulations), (iii) classified how reaction usage affects cellular growth, (iv) predicted cellular biomass yields on different carbon sources and compared model predictions to experimental measurements, and (v) used experimental results to refine metabolic fluxes for growth on lactate. The results revealed that aerobic lactate-grown cells of S. oneidensis MR-1 used less efficient enzymes to couple electron transport to proton motive force generation, and possibly operated at least one futile cycle involving malic enzymes. Several examples are provided whereby model predictions were validated by experimental data, in particular the role of serine hydroxymethyltransferase and glycine cleavage system in the metabolism of one-carbon units, and growth on different sources of carbon and energy. This work illustrates how integration of computational and experimental efforts facilitates the understanding of microbial metabolism at a
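
    A toy flux balance analysis in the spirit of the constraint-based approach described above, but on a made-up three-reaction network rather than the 774-reaction reconstruction; the stoichiometry, flux bounds, and biomass objective are assumptions:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Columns: v1 (uptake of A), v2 (A -> B), v3 (biomass from B). Rows: metabolites A, B.
    S = np.array([[1, -1,  0],
                  [0,  1, -1]])
    b = np.zeros(2)                            # steady-state mass balance: S v = 0
    bounds = [(0, 10), (0, None), (0, None)]   # uptake limited to 10 flux units

    # Maximise biomass flux v3 (linprog minimises, so negate the objective).
    c = np.array([0, 0, -1])
    res = linprog(c, A_eq=S, b_eq=b, bounds=bounds, method="highs")
    print("optimal fluxes:", res.x, "biomass flux:", res.x[2])
    ```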

  15. Nonlinear Effects in Piezoelectric Transformers Explained by Thermal-Electric Model Based on a Hypothesis of Self-Heating

    DEFF Research Database (Denmark)

    Andersen, Thomas; Andersen, Michael A. E.; Thomsen, Ole Cornelius;

    2012-01-01

    As the trend within power electronics still goes in the direction of higher power density and higher efficiency, it is necessary to develop new topologies and push the limit for the existing technology. Piezoelectric transformers are a fast developing technology to improve efficiency and increase...... power density of power converters. Nonlinearities in piezoelectric transformers occur when the power density is increased enough. The simple linear equations are not valid at this point and more complex theory of electro elasticity must be applied. In this work a simplified thermo-electric model...

  16. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.
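
    For orientation, a minimal simulation of the neutral baseline that the ancestral selection graph is compared against (Kingman's coalescent), drawing exponential waiting times as lineages merge; the sample size and time units are assumptions:

    ```python
    import numpy as np

    def kingman_times(n, rng):
        """Coalescence waiting times for a sample of n lineages (time in units of N generations)."""
        times = []
        k = n
        while k > 1:
            rate = k * (k - 1) / 2            # pairwise coalescence rate with k lineages
            times.append(rng.exponential(1 / rate))
            k -= 1
        return times

    rng = np.random.default_rng(5)
    tmrca = [sum(kingman_times(10, rng)) for _ in range(10_000)]
    print("mean time to the most recent common ancestor for n=10:", np.mean(tmrca))
    # Theory for the neutral coalescent: E[TMRCA] = 2 * (1 - 1/n) = 1.8
    ```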

  17. Adapting AIC to conditional model selection

    NARCIS (Netherlands)

    M. van Ommen (Matthijs)

    2012-01-01

    In statistical settings such as regression and time series, we can condition on observed information when predicting the data of interest. For example, a regression model explains the dependent variables $y_1, \ldots, y_n$ in terms of the independent variables $x_1, \ldots, x_n$.

  18. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn;

    We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...

  19. A Decision Model for Selecting Participants in Supply Chain

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In order to satisfy the rapidly changing requirements of customers, enterprises must cooperate with each other to form supply chains. The first and most important stage in forming a supply chain is the selection of participants. The article proposes a two-staged decision model to select partners. The first stage is an inter-company comparison within each business process to select high-efficiency candidates based on internal variables. The next stage is to analyse the combinations of different candidates in order to select the most suitable partners according to a goal-programming model.

  20. Genetic threshold hypothesis of neocortical spike-and-wave discharges in the rat: an animal model of petit mal epilepsy.

    Science.gov (United States)

    Vadász, C; Carpi, D; Jando, G; Kandel, A; Urioste, R; Horváth, Z; Pierre, E; Vadi, D; Fleischer, A; Buzsáki, G

    1995-02-27

    Neocortical high-voltage spike-and-wave discharges (HVS) in the rat are an animal model of petit mal epilepsy. Genetic analysis of total duration of HVS (s/12 hr) in reciprocal F1 and F2 hybrids of F344 and BN rats indicated that the phenotypic variability of HVS cannot be explained by a simple, monogenic Mendelian model. Biometrical analysis suggested the presence of additive, dominance, and sex-linked-epistatic effects, buffering maternal influence, and heterosis. High correlation was observed between average duration (s/episode) and frequency of occurrence of spike-and-wave episodes (n/12 hr) in parental and segregating generations, indicating that common genes affect both duration and frequency of the spike-and-wave pattern. We propose that both genetic and developmental-environmental factors control an underlying quantitative variable, which, above a certain threshold level, precipitates HVS discharges. These findings, together with the recent availability of rat DNA markers for total genome mapping, pave the way to the identification of genes that control the susceptibility of the brain to spike-and-wave discharges.

  1. Genetic threshold hypothesis of neocortical spike-and-wave discharges in the rat: An animal model of petit mal epilepsy

    Energy Technology Data Exchange (ETDEWEB)

    Vadasz, C.; Fleischer, A. [Nathan Kline Inst. for Psychiatric Research, Orangeburg, NY (United States); Carpi, D.; Jando, G. [State Univ. of New Jersey, Newark, NJ (United States)] [and others]

    1995-02-27

    Neocortical high-voltage spike-and-wave discharges (HVS) in the rat are an animal model of petit mal epilepsy. Genetic analysis of total duration of HVS (s/12 hr) in reciprocal F1 and F2 hybrids of F344 and BN rats indicated that the phenotypic variability of HVS cannot be explained by a simple, monogenic Mendelian model. Biometrical analysis suggested the presence of additive, dominance, and sex-linked-epistatic effects, buffering maternal influence, and heterosis. High correlation was observed between average duration (s/episode) and frequency of occurrence of spike-and-wave episodes (n/12 hr) in parental and segregating generations, indicating that common genes affect both duration and frequency of the spike-and-wave pattern. We propose that both genetic and developmental-environmental factors control an underlying quantitative variable, which, above a certain threshold level, precipitates HVS discharges. These findings, together with the recent availability of rat DNA markers for total genome mapping, pave the way to the identification of genes that control the susceptibility of the brain to spike-and-wave discharges. 67 refs., 3 figs., 5 tabs.

  2. Model selection in systems biology depends on experimental design.

    Science.gov (United States)

    Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H

    2014-06-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.

  3. Modeling HIV-1 drug resistance as episodic directional selection.

    Directory of Open Access Journals (Sweden)

    Ben Murrell

    Full Text Available The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither a test for episodic diversifying selection nor one for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  4. Metabolic hypothesis for human altriciality.

    Science.gov (United States)

    Dunsworth, Holly M; Warrener, Anna G; Deacon, Terrence; Ellison, Peter T; Pontzer, Herman

    2012-09-18

    The classic anthropological hypothesis known as the "obstetrical dilemma" is a well-known explanation for human altriciality, a condition that has significant implications for human social and behavioral evolution. The hypothesis holds that antagonistic selection for a large neonatal brain and a narrow, bipedal-adapted birth canal poses a problem for childbirth; the hominin "solution" is to truncate gestation, resulting in an altricial neonate. This explanation for human altriciality based on pelvic constraints persists despite data linking human life history to that of other species. Here, we present evidence that challenges the importance of pelvic morphology and mechanics in the evolution of human gestation and altriciality. Instead, our analyses suggest that limits to maternal metabolism are the primary constraints on human gestation length and fetal growth. Although pelvic remodeling and encephalization during hominin evolution contributed to the present parturitional difficulty, there is little evidence that pelvic constraints have altered the timing of birth.

  5. Asset pricing model selection: Indonesian Stock Exchange

    OpenAIRE

    Pasaribu, Rowland Bismark Fernando

    2010-01-01

    The Capital Asset Pricing Model (CAPM) has dominated finance theory for over thirty years; it suggests that the market beta alone is sufficient to explain stock returns. However evidence shows that the cross-section of stock returns cannot be described solely by the one-factor CAPM. Therefore, the idea is to add other factors in order to complete the beta in explaining the price movements in the stock exchange. The Arbitrage Pricing Theory (APT) has been proposed as the first multifactor succ...
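
    A minimal sketch of the single-factor CAPM regression the abstract refers to, estimating a stock's market beta by ordinary least squares on simulated excess returns; the numbers are assumptions, not Indonesian Stock Exchange data:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    market_excess = rng.normal(0.005, 0.04, 250)                    # market excess returns (daily, illustrative)
    stock_excess = 0.001 + 1.3 * market_excess + rng.normal(0, 0.02, 250)

    # OLS of stock excess returns on a constant and the market excess return.
    X = np.column_stack([np.ones_like(market_excess), market_excess])
    alpha, beta = np.linalg.lstsq(X, stock_excess, rcond=None)[0]
    print(f"estimated alpha = {alpha:.4f}, beta = {beta:.2f}")
    ```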

  6. A mixed model reduction method for preserving selected physical information

    Science.gov (United States)

    Zhang, Jing; Zheng, Gangtie

    2017-03-01

    A new model reduction method in the frequency domain is presented. By combining model reduction techniques from both the time domain and the frequency domain, the dynamic model is condensed to selected physical coordinates, and the contribution of slave degrees of freedom is taken as a modification to the model in the form of the effective modal mass of virtually constrained modes. The reduced model can preserve the physical information related to the selected physical coordinates, such as physical parameters and the physical space positions of corresponding structure components. For the cases of non-classical damping, the method is extended to model reduction in the state space but still only contains the selected physical coordinates. Numerical results are presented to validate the method and show the effectiveness of the model reduction.

  7. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    Full Text Available We propose a two-step variable selection procedure for high-dimensional quantile regressions, in which the dimension of the covariates, pn, is much larger than the sample size n. In the first step, we apply an l1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from an ultra-high dimensional one to a model whose size has the same order as that of the true model, and the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
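
    The paper's two steps use penalized quantile regression; as a simplified, hedged stand-in for the screen-then-refine idea, the sketch below uses ordinary least-squares LASSO followed by an adaptive LASSO on the retained variables. The data, penalty values, and the mean-regression substitution are assumptions:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(7)
    n, p = 100, 500                                  # p much larger than n
    X = rng.normal(size=(n, p))
    beta_true = np.zeros(p)
    beta_true[:5] = [2, -1.5, 1, 1, -2]
    y = X @ beta_true + rng.normal(scale=0.5, size=n)

    # Step 1: LASSO screen to a manageable set of candidate variables.
    step1 = Lasso(alpha=0.1).fit(X, y)
    kept = np.flatnonzero(step1.coef_)

    # Step 2: adaptive LASSO on the reduced model (weights from the step-1 coefficients).
    w = 1 / np.abs(step1.coef_[kept])
    step2 = Lasso(alpha=0.05).fit(X[:, kept] / w, y)   # rescaling columns implements the adaptive weights
    selected = kept[np.flatnonzero(step2.coef_)]
    print("selected variables:", selected)
    ```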

  8. Selection of probability based weighting models for Boolean retrieval system

    Energy Technology Data Exchange (ETDEWEB)

    Ebinuma, Y. (Japan Atomic Energy Research Inst., Tokai, Ibaraki. Tokai Research Establishment)

    1981-09-01

    Automatic weighting models based on probability theory were studied to determine whether they can be applied to Boolean search logic, including the logical sum. The INIS database was used for searching with one particular search formula. Among sixteen models, three with good ranking performance were selected. These three models were further applied to searching with nine search formulas in the same database. It was found that two models among them show slightly better average ranking performance, while the other model, the simplest one, also seems practical.

  9. Model Selection Through Sparse Maximum Likelihood Estimation

    CERN Document Server

    Banerjee, Onureena; D'Aspremont, Alexandre

    2007-01-01

    We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
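
    scikit-learn's GraphicalLasso solves the same l1-penalized maximum likelihood problem described above (though not with the authors' block coordinate descent or Nesterov-based algorithms), so it serves as a hedged illustration on simulated data; the sparse precision structure is an assumption:

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(8)
    # Sparse true precision matrix: a chain of conditional dependencies among 5 variables.
    prec = np.eye(5) + np.diag([0.4] * 4, k=1) + np.diag([0.4] * 4, k=-1)
    X = rng.multivariate_normal(np.zeros(5), np.linalg.inv(prec), size=500)

    model = GraphicalLasso(alpha=0.05).fit(X)
    print("estimated precision matrix (off-diagonals shrunk towards zero):")
    print(np.round(model.precision_, 2))
    ```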

  10. Sensitivity of resource selection and connectivity models to landscape definition

    Science.gov (United States)

    Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce

    2017-01-01

    Context: The definition of the geospatial landscape is the underlying basis for species-habitat models, yet sensitivity of habitat use inference, predicted probability surfaces, and connectivity models to landscape definition has received little attention. Objectives: We evaluated the sensitivity of resource selection and connectivity models to four landscape...

  11. A Working Model of Natural Selection Illustrated by Table Tennis

    Science.gov (United States)

    Dinc, Muhittin; Kilic, Selda; Aladag, Caner

    2013-01-01

    Natural selection is one of the most important topics in biology and it helps to clarify the variety and complexity of organisms. However, students in almost every stage of education find it difficult to understand the mechanism of natural selection and they can develop misconceptions about it. This article provides an active model of natural…

  12. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Gail Jones, M.

    2017-07-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  13. Fluctuating selection models and McDonald-Kreitman type analyses.

    Directory of Open Access Journals (Sweden)

    Toni I Gossmann

    Full Text Available It is likely that the strength of selection acting upon a mutation varies through time due to changes in the environment. However, most population genetic theory assumes that the strength of selection remains constant. Here we investigate the consequences of fluctuating selection pressures on the quantification of adaptive evolution using McDonald-Kreitman (MK) style approaches. In agreement with previous work, we show that fluctuating selection can generate evidence of adaptive evolution even when the expected strength of selection on a mutation is zero. However, we also find that the mutations which contribute to both polymorphism and divergence tend, on average, to be positively selected during their lifetime under fluctuating selection models. This is because mutations that fluctuate, by chance, to positively selected values tend to reach higher frequencies in the population than those that fluctuate towards negative values. Hence the evidence of positive adaptive evolution detected under a fluctuating selection model by MK type approaches is genuine, since fixed mutations tend to be advantageous on average during their lifetime. Nevertheless, we show that methods tend to underestimate the rate of adaptive evolution when selection fluctuates.

  14. The Optimal Portfolio Selection Model under g-Expectation

    National Research Council Canada - National Science Library

    Li Li

    2014-01-01

      This paper solves the optimal portfolio selection model under the framework of the prospect theory proposed by Kahneman and Tversky in the 1970s, with the decision rule replaced by the g-expectation introduced by Peng...

  15. Robust Decision-making Applied to Model Selection

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Laboratory]

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate ever-increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework is adopted anchored in info-gap decision theory. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  16. Information-theoretic model selection applied to supernovae data

    CERN Document Server

    Biesiada, M

    2007-01-01

    There are several different theoretical ideas invoked to explain the dark energy with relatively little guidance as to which one of them might be right. Therefore the emphasis of ongoing and forthcoming research in this field shifts from estimating specific parameters of a cosmological model to model selection. In this paper we apply an information-theoretic model selection approach based on the Akaike criterion as an estimator of Kullback-Leibler entropy. In particular, we present the proper way of ranking the competing models based on Akaike weights (in Bayesian language, posterior probabilities of the models). Out of many particular models of dark energy we focus on four: quintessence, quintessence with time varying equation of state, brane-world and generalized Chaplygin gas models, and test them on Riess' Gold sample. As a result we obtain that the best model - in terms of the Akaike criterion - is the quintessence model. The odds suggest that although there exist differences in the support given to specific scenario...
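
    The ranking device mentioned above, Akaike weights, is straightforward to compute; the AIC values below are made-up placeholders rather than the paper's results for the four dark energy models:

    ```python
    import numpy as np

    # Hypothetical AIC values for four candidate models (illustrative only).
    aic = {"quintessence": 210.3, "varying-w quintessence": 212.1,
           "brane-world": 214.8, "Chaplygin gas": 213.5}

    values = np.array(list(aic.values()))
    delta = values - values.min()                 # AIC differences relative to the best model
    weights = np.exp(-0.5 * delta)
    weights /= weights.sum()                      # Akaike weights, read as posterior model probabilities

    for name, w in zip(aic, weights):
        print(f"{name:>24}: weight = {w:.3f}")
    ```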

  17. Sensor Optimization Selection Model Based on Testability Constraint

    Institute of Scientific and Technical Information of China (English)

    YANG Shuming; QIU Jing; LIU Guanjun

    2012-01-01

    Sensor selection and optimization is one of the important parts of design for testability. To address the problems that the traditional sensor optimization selection model does not take the requirements of prognostics and health management, especially fault prognostics, for testability into account and does not consider the impacts of actual sensor attributes on fault detectability, a novel sensor optimization selection model is proposed. Firstly, a universal architecture for sensor selection and optimization is provided. Secondly, a new testability index named fault predictable rate is defined to describe fault prognostics requirements for testability. Thirdly, a sensor selection and optimization model for prognostics and health management is constructed, which takes sensor cost as the objective function and the defined testability indexes as constraint conditions. Due to the NP-hard property of the model, a genetic algorithm is designed to obtain the optimal solution. Finally, a case study is presented to demonstrate the sensor selection approach for a stable tracking servo platform. The application results and comparison analysis show that the proposed model and algorithm are effective and feasible. This approach can be used to select sensors for prognostics and health management of any system.

  18. SELECTION MOMENTS AND GENERALIZED METHOD OF MOMENTS FOR HETEROSKEDASTIC MODELS

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2016-06-01

    Full Text Available In this paper, the authors describe the selection methods for moments and the application of the generalized method of moments (GMM) for heteroskedastic models. The utility of GMM estimators is found in the study of financial market models. The selection criteria for moments are applied for the efficient estimation of GMM for univariate time series with martingale difference errors, similar to those studied so far by Kuersteiner.

  19. Modeling Suspicious Email Detection using Enhanced Feature Selection

    OpenAIRE

    2013-01-01

    The paper presents a suspicious email detection model which incorporates enhanced feature selection. In the paper we proposed the use of feature selection strategies along with a classification technique for terrorist email detection. The presented model focuses on the evaluation of machine learning algorithms such as decision tree (ID3), logistic regression, Naïve Bayes (NB), and Support Vector Machine (SVM) for detecting emails containing suspicious content. In the literature, various algo...

  20. RUC at TREC 2014: Select Resources Using Topic Models

    Science.gov (United States)

    2014-11-01

    Qiuyue Wang, Shaochen Shi, Wei Cao (School of Information, Renmin University of China, Beijing).

  1. Hypothesis--the natural selection of psoriasis.

    Science.gov (United States)

    McFadden, J P

    1990-01-01

    The high genetic frequency of some inherited disorders may in part be related to a survival advantage conferred against an environmental hazard. Psoriasis is an inherited disorder which is common amongst populations of northern latitudes. Cutaneous delayed-type hypersensitivity response to streptococcal antigen is altered in such patients with a decrease in induration and erythema. Scarlet fever has until recently been associated with a high childhood mortality, the pathogenesis of which is related to interdependent primary toxicity and secondary toxicity (including delayed-type hypersensitivity) to streptococcal antigen (erythrogenic toxin), leading to cellular damage and potentially lethal shock. Streptococcal infection, usually presenting as pharyngitis, is a classical trigger for both scarlet fever and psoriasis. Individual susceptibility to scarlet fever has been clinically assessed in the past by the Dick test--an intradermal injection of the filtrate of a broth culture of scarlatina-producing strains of Streptococcus giving an erythematous reaction at 24-48 h (Dick-positive). The degree of reaction is directly related to susceptibility to scarlet fever. The severity of and mortality from scarlet fever may be ameliorated by immunological mechanisms also found in psoriatic patients. The high prevalence of psoriasis amongst some populations today may be related to such a protective factor.

  2. Causal Inference and Model Selection in Complex Settings

    Science.gov (United States)

    Zhao, Shandong

    Propensity score methods have become a part of the standard toolkit for applied researchers who wish to ascertain causal effects from observational data. While they were originally developed for binary treatments, several researchers have proposed generalizations of the propensity score methodology for non-binary treatment regimes. In this article, we first review three main methods that generalize propensity scores in this direction, namely, inverse propensity weighting (IPW), the propensity function (P-FUNCTION), and the generalized propensity score (GPS), along with recent extensions of the GPS that aim to improve its robustness. We compare the assumptions, theoretical properties, and empirical performance of these methods. We propose three new methods that provide robust causal estimation based on the P-FUNCTION and GPS. While our proposed P-FUNCTION-based estimator performs well, we generally advise caution in that all available methods can be biased by model misspecification and extrapolation. In a related line of research, we consider adjustment for posttreatment covariates in causal inference. Even in a randomized experiment, observations might have different compliance performance under treatment and control assignment. This posttreatment covariate cannot be adjusted for using standard statistical methods. We review the principal stratification framework, which allows for modeling this effect as part of its Bayesian hierarchical models. We generalize the current model to add the possibility of adjusting for pretreatment covariates. We also propose a new estimator of the average treatment effect over the entire population. In a third line of research, we discuss the spectral line detection problem in high energy astrophysics. We carefully review how this problem can be statistically formulated as a precise hypothesis test with a point null hypothesis, why a usual likelihood ratio test does not apply to problems of this nature, and a doable fix to correctly
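
    A compact sketch of the first of the reviewed estimators, inverse propensity weighting for a binary treatment (the article's focus is on non-binary generalizations); the data-generating process and the logistic propensity model are assumptions:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(9)
    n = 2000
    x = rng.normal(size=(n, 2))                                   # pretreatment covariates
    p_treat = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.3 * x[:, 1])))  # true propensity
    t = rng.binomial(1, p_treat)
    y = 1.0 * t + x[:, 0] + rng.normal(size=n)                    # true treatment effect = 1.0

    e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]     # estimated propensity scores
    ate = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))     # Horvitz-Thompson style IPW estimate
    print(f"IPW estimate of the average treatment effect: {ate:.2f}")
    ```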

  3. Selection Criteria in Regime Switching Conditional Volatility Models

    Directory of Open Access Journals (Sweden)

    Thomas Chuffart

    2015-05-01

    Full Text Available A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to the choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the Data Generating Process used in the experiments, great care is needed when choosing a criterion.

  4. A guide to Bayesian model selection for ecologists

    Science.gov (United States)

    Hooten, Mevin B.; Hobbs, N.T.

    2015-01-01

    The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.

  5. Holotranscobalamin (HoloTC, Active-B12) and Herbert's model for the development of vitamin B12 deficiency: a review and alternative hypothesis.

    Science.gov (United States)

    Golding, Paul Henry

    2016-01-01

    The concentration of total vitamin B12 in serum is not a sufficiently sensitive or specific indicator for the reliable diagnosis of vitamin B12 deficiency. Victor Herbert proposed a model for the staged development of vitamin B12 deficiency, in which holotranscobalamin (HoloTC) is the first indicator of deficiency. Based on this model, a commercial immunoassay has been controversially promoted as a replacement for the total vitamin B12 test. HoloTC is cobalamin (vitamin B12) attached to the transport protein transcobalamin, in the serum, for delivery to cells for metabolism. Although there have been many published reports supporting the claims for HoloTC, the results of some studies were inconsistent with the claim of HoloTC as the most sensitive marker of vitamin B12 deficiency. This review examines the evidence for and against the use of HoloTC, and concludes that the HoloTC immunoassay cannot be used to measure vitamin B12 status any more reliably than total vitamin B12, or to predict the onset of a metabolic deficiency, because it is based on an erroneous hypothesis and a flawed model for the staged development of vitamin B12 deficiency. The author proposes an alternative model for the development of vitamin B12 deficiency.

  6. A novel hypothesis of blood-brain barrier (BBB) development and in vitro BBB model: neural stem cell is the driver of BBB formation and maintenance

    Directory of Open Access Journals (Sweden)

    Jian Lu

    2012-02-01

    Full Text Available There is an ongoing effort to develop in vitro models for blood-brain barrier (BBB) research and central nervous system (CNS) drug screening. But the phenotypes of the existing in vitro models are still very remote from those found in vivo. The trouble in establishing in vitro BBB models comes from the unclear mechanism of BBB formation and maintenance. The astrocytes have been found to be responsible for the maintenance of the BBB, but studies of CNS development have shown that BBB formation starts largely before gliogenesis. We hypothesize here that the neural stem cell is the real driver of BBB formation, development and maintenance. The formation of the BBB is initiated by the neural stem cells during the earliest stage of CNS angiogenesis. The maintenance of the BBB is driven by the soluble signals produced by the neural stem cells, which exist in the dentate gyrus of the hippocampus and the subventricular zone throughout life. The brain microvascular endothelial cell (BMEC)-pericyte complex is the anatomical basis of the BBB. Based on our hypothesis we suggest using neural stem cells to induce the BMEC-pericyte complex to establish in vitro BBB models. Further research on the role of the neural stem cells in BBB formation and maintenance may elucidate the mechanism of BBB development. [J Exp Integr Med 2012; 2(1): 39-43]

  7. The Use of Evolution in a Central Action Selection Model

    Directory of Open Access Journals (Sweden)

    F. Montes-Gonzalez

    2007-01-01

    Full Text Available The use of effective central selection provides flexibility in design by offering modularity and extensibility. In earlier papers we have focused on the development of a simple centralized selection mechanism. Our current goal is to integrate evolutionary methods in the design of non-sequential behaviours and the tuning of specific parameters of the selection model. The foraging behaviour of an animal robot (animat) has been modelled in order to integrate the sensory information from the robot to perform selection that is nearly optimized by the use of genetic algorithms. In this paper we present how selection through optimization finally arranges the pattern of presented behaviours for the foraging task. Hence, the execution of specific parts in a behavioural pattern may be ruled out by the tuning of these parameters. Furthermore, the intensive use of colour segmentation from a colour camera for locating a cylinder sets a burden on the calculations carried out by the genetic algorithm.

  8. Partner Selection Optimization Model of Agricultural Enterprises in Supply Chain

    Directory of Open Access Journals (Sweden)

    Feipeng Guo

    2013-10-01

    Full Text Available With the growing importance of correctly selecting partners in the supply chain of agricultural enterprises, a large number of partner evaluation techniques are widely used in the field of agricultural science research. This study established a partner selection model to optimize the issue of agricultural supply chain partner selection. Firstly, it constructed a comprehensive evaluation index system after analyzing the real characteristics of the agricultural supply chain. Secondly, a heuristic method for attribute reduction based on rough set theory and principal component analysis was proposed, which can reduce multiple attributes into a few principal components while retaining effective evaluation information. Finally, it used an improved BP neural network, which has a self-learning function, to select partners. The empirical analysis on an agricultural enterprise shows that this model is effective and feasible for practical partner selection.

  9. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2013-01-01

    Full Text Available Sustainable supplier selection is a vital part of the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the calculations and additional pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The use of a combination of the fuzzy Delphi method, ANP, and TOPSIS, proposing an MCDM model for supplier selection, and applying these to a real case are the unique features of this study.
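
    A small numpy version of the TOPSIS ranking step used in the hybrid model; the Delphi and ANP stages, which would supply the criteria and their weights, are replaced here by assumed inputs:

    ```python
    import numpy as np

    # Decision matrix: 4 candidate suppliers x 3 criteria (all treated as benefit criteria here).
    D = np.array([[7.0, 9.0, 6.0],
                  [8.0, 7.0, 7.0],
                  [9.0, 6.0, 8.0],
                  [6.0, 8.0, 9.0]])
    w = np.array([0.5, 0.3, 0.2])                 # criteria weights (would come from ANP)

    V = w * D / np.linalg.norm(D, axis=0)         # weighted, vector-normalised decision matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)    # ideal and anti-ideal solutions
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)           # relative closeness to the ideal solution
    print("supplier ranking (best first):", np.argsort(-closeness) + 1)
    ```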

  10. The Lehman Sisters Hypothesis

    NARCIS (Netherlands)

    I.P. van Staveren (Irene)

    2014-01-01

    This article explores the Lehman Sisters Hypothesis. It reviews empirical literature about gender differences in behavioral, experimental, and neuro-economics as well as in other fields of behavioral research. It discusses gender differences along three dimensions of fi

  11. Revisiting the Dutch hypothesis

    NARCIS (Netherlands)

    Postma, Dirkje S.; Weiss, Scott T.; van den Berge, Maarten; Kerstjens, Huib A. M.; Koppelman, Gerard H.

    The Dutch hypothesis was first articulated in 1961, when many novel and advanced scientific techniques were not available, such as genomics techniques for pinpointing genes, gene expression, lipid and protein profiles, and the microbiome. In addition, computed tomographic scans and advanced analysis

  12. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem), using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem) while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.

  13. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    Most studies using Mare’s (1980, 1981) seminal model of educational transitions find that the effect of family background decreases across transitions. Recently, Cameron and Heckman (1998, 2001) have argued that the “waning coefficients” in the Mare model are driven by selection on unobserved variables. This paper, first, explains theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection leads to biased estimates of the effect of family background on educational transitions. Our empirical analysis using data from the United States, United Kingdom, Denmark, and the Netherlands shows that when we take selection into account the effect of family background variables on educational transitions is largely constant across transitions. We also discuss several difficulties in estimating educational transition models which...

  14. Multicriteria framework for selecting a process modelling language

    Science.gov (United States)

    Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel

    2016-01-01

    The choice of process modelling language can affect business process management (BPM) since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and also due to the lack of guidelines on evaluating and comparing languages so as to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach in order to select the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but does attempt to demonstrate how two existing approaches can be combined so as to solve the problem of selection of modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.

  15. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romanach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method did, and there was low overlap in the variable sets. Models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using

  16. Models of microbiome evolution incorporating host and microbial selection.

    Science.gov (United States)

    Zeng, Qinglong; Wu, Steven; Sukumaran, Jeet; Rodrigo, Allen

    2017-09-25

    Numerous empirical studies suggest that hosts and microbes exert reciprocal selective effects on their ecological partners. Nonetheless, we still lack an explicit framework to model the dynamics of both hosts and microbes under selection. In a previous study, we developed an agent-based forward-time computational framework to simulate the neutral evolution of host-associated microbial communities in a constant-sized, unstructured population of hosts. These neutral models allowed offspring to sample microbes randomly from parents and/or from the environment. Additionally, the environmental pool of available microbes was constituted by fixed and persistent microbial OTUs and by contributions from host individuals in the preceding generation. In this paper, we extend our neutral models to allow selection to operate on both hosts and microbes. We do this by constructing a phenome for each microbial OTU consisting of a sample of traits that influence host and microbial fitnesses independently. Microbial traits can influence the fitness of hosts ("host selection") and the fitness of microbes ("trait-mediated microbial selection"). Additionally, the fitness effects of traits on microbes can be modified by their hosts ("host-mediated microbial selection"). We simulate the effects of these three types of selection, individually or in combination, on microbiome diversities and the fitnesses of hosts and microbes over several thousand generations of hosts. We show that microbiome diversity is strongly influenced by selection acting on microbes. Selection acting on hosts only influences microbiome diversity when there is near-complete direct or indirect parental contribution to the microbiomes of offspring. Unsurprisingly, microbial fitness increases under microbial selection. Interestingly, when host selection operates, host fitness only increases under two conditions: (1) when there is a strong parental contribution to microbial communities or (2) in the absence of a strong

  17. Tensegrity Model Hypothesis: May This Paradigm Be Useful to Explain Hepatic and Pancreatic Carcinogenesis in Patients with Persistent Hepatitis B or Hepatitis C Virus Infection?

    Directory of Open Access Journals (Sweden)

    Sirio Fiorino

    2014-03-01

    Full Text Available Context Hepatitis B (HBV) and hepatitis C virus (HCV) possess well-known oncogenic properties and may promote carcinogenesis in the liver. However, antigens and replicative sequences of HBV/HCV have also been detected in different extrahepatic tissues, including the pancreas. Although epidemiological studies and meta-analyses have recently suggested that HBV/HCV may also be risk factors for pancreatic cancer, and several studies have investigated the possible mechanisms and intra-/extra-cellular paths involved in pancreatic and hepatic carcinogenesis, to date these complex processes remain largely unexplained. Objectives In our paper, we aimed to propose a comprehensive and qualitative hypothetical model describing how HBV/HCV may exert their oncogenic role. Methods We performed a systematic search of the scientific literature in the MEDLINE, Cochrane Library and EMBASE databases. The keywords used were: “chronic HBV/HCV”, “pancreatic cancer”, “liver carcinoma”, “carcinogenesis mechanisms”, “tensional integrity”, “cytoskeleton”, and “extracellular matrix”. Results Drawing on the available studies, we suggest a unifying hypothesis based on results and data obtained from different areas of research. In particular, we considered the well-defined model of tensional integrity and correlated it to changes induced by HBV/HCV in the viscoelastic properties/stiffness of cellular/extracellular microenvironments. These events perturb the tightly regulated feedback loop which usually couples the intracellularly generated forces to the substrate rigidity of extracellular compartments. Such a change therefore strongly affects intracellular functions and cellular fate by promoting a substantial deregulation of critical intracellular biochemical activities and genome expression. Conclusions Our hypothesis might provide for the first time a reliable system which correlates the tensional integrity model with intra

  18. Testing exclusion restrictions and additive separability in sample selection models

    DEFF Research Database (Denmark)

    Huber, Martin; Mellace, Giovanni

    2014-01-01

    Standard sample selection models with non-randomly censored outcomes assume (i) an exclusion restriction (i.e., a variable affecting selection, but not the outcome) and (ii) additive separability of the errors in the selection process. This paper proposes tests for the joint satisfaction of these assumptions by applying the approach of Huber and Mellace (Testing instrument validity for LATE identification based on inequality moment constraints, 2011), developed for testing instrument validity under treatment endogeneity, to the sample selection framework. We show that the exclusion restriction and additive separability imply two testable inequality constraints that come from both point identifying and bounding the outcome distribution of the subpopulation that is always selected/observed. We apply the tests to two variables for which the exclusion restriction is frequently invoked in female wage regressions: non...

  19. Periodic Integration: Further Results on Model Selection and Forecasting

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1996-01-01

    textabstractThis paper considers model selection and forecasting issues in two closely related models for nonstationary periodic autoregressive time series [PAR]. Periodically integrated seasonal time series [PIAR] need a periodic differencing filter to remove the stochastic trend. On the other

  1. Quantile hydrologic model selection and model structure deficiency assessment: 1. Theory

    NARCIS (Netherlands)

    Pande, S.

    2013-01-01

    A theory for quantile based hydrologic model selection and model structure deficiency assessment is presented. The paper demonstrates that the degree to which a model selection problem is constrained by the model structure (measured by the Lagrange multipliers of the constraints) quantifies structur

  2. AN EXPERT SYSTEM MODEL FOR THE SELECTION OF TECHNICAL PERSONNEL

    Directory of Open Access Journals (Sweden)

    Emine COŞGUN

    2005-03-01

    Full Text Available In this study, a model has been developed for the selection of the technical personnel. In the model Visual Basic has been used as user interface, Microsoft Access has been utilized as database system and CLIPS program has been used as expert system program. The proposed model has been developed by utilizing expert system technology. In the personnel selection process, only the pre-evaluation of the applicants has been taken into consideration. Instead of replacing the expert himself, a decision support program has been developed to analyze the data gathered from the job application forms. The attached study will assist the expert to make faster and more accurate decisions.

  3. Novel web service selection model based on discrete group search.

    Science.gov (United States)

    Zhai, Jie; Shao, Zhiqing; Guo, Yi; Zhang, Haiteng

    2014-01-01

    In our earlier work, we present a novel formal method for the semiautomatic verification of specifications and for describing web service composition components by using abstract concepts. After verification, the instantiations of components were selected to satisfy the complex service performance constraints. However, selecting an optimal instantiation, which comprises different candidate services for each generic service, from a large number of instantiations is difficult. Therefore, we present a new evolutionary approach on the basis of the discrete group search service (D-GSS) model. With regard to obtaining the optimal multiconstraint instantiation of the complex component, the D-GSS model has competitive performance compared with other service selection models in terms of accuracy, efficiency, and ability to solve high-dimensional service composition component problems. We propose the cost function and the discrete group search optimizer (D-GSO) algorithm and study the convergence of the D-GSS model through verification and test cases.

  5. Factor Selection and Structural Identification in the Interaction ANOVA Model

    Science.gov (United States)

    Post, Justin B.; Bondell, Howard D.

    2013-01-01

    Summary When faced with categorical predictors and a continuous response, the objective of analysis often consists of two tasks: finding which factors are important and determining which levels of the factors differ significantly from one another. Often times these tasks are done separately using Analysis of Variance (ANOVA) followed by a post-hoc hypothesis testing procedure such as Tukey’s Honestly Significant Difference test. When interactions between factors are included in the model the collapsing of levels of a factor becomes a more difficult problem. When testing for differences between two levels of a factor, claiming no difference would refer not only to equality of main effects, but also equality of each interaction involving those levels. This structure between the main effects and interactions in a model is similar to the idea of heredity used in regression models. This paper introduces a new method for accomplishing both of the common analysis tasks simultaneously in an interaction model while also adhering to the heredity-type constraint on the model. An appropriate penalization is constructed that encourages levels of factors to collapse and entire factors to be set to zero. It is shown that the procedure has the oracle property implying that asymptotically it performs as well as if the exact structure were known beforehand. We also discuss the application to estimating interactions in the unreplicated case. Simulation studies show the procedure outperforms post hoc hypothesis testing procedures as well as similar methods that do not include a structural constraint. The method is also illustrated using a real data example. PMID:23323643

  6. The Cambrian impact hypothesis

    CERN Document Server

    Zhang, Weijia

    2008-01-01

    After thorough research on the circumstantial changes and the great evolution of life in the Cambrian period, the author propounds the following hypothesis: during the Late Precambrian, about 500-600 Ma, a celestial body impacted the Earth. The high temperature ended the great glaciation and facilitated the communication of biological information. The rapid change in the Earth's environment enkindled the genesis-control system and released the HSP-90 variations. After the impact, benefiting from the protection of the new ozone layer and the energy supplied by aerobic respiration, the life that had survived underground exploded. These organisms generated carapaces and complex metabolism to adjust to the new circumstances of high temperature and high pressure. The article uses a large number of analyses and calculations to illustrate that this hypothesis fits well with most of the important events in astronomical and geological discoveries.

  7. Evaluation of Feature Selection Methods for Predictive Modeling Using Neural Networks in Credits Scoring

    Directory of Open Access Journals (Sweden)

    Raghavendra B. K

    2010-11-01

    Full Text Available A credit-risk evaluation decision involves processing huge volumes of raw data, and hence requires powerful data mining tools. Several techniques developed in machine learning have been used for financial credit-risk evaluation decisions. Data mining is the process of finding patterns and relations in large databases. Neural networks are one of the popular tools for building predictive models in data mining. The major drawback of neural networks is the curse of dimensionality, which requires an optimal feature subset. Feature selection is an important topic of research in data mining: it is the problem of choosing a small subset of features that is necessary and sufficient to describe the target concept. In this research, an attempt has been made to investigate a preprocessing framework for feature selection in credit scoring using neural networks. Feature selection techniques such as best-first search and information gain have been evaluated for the effectiveness of the classification of risk groups on publicly available data sets. In particular, the German, Australian, and Japanese credit rating data sets have been used for evaluation. The results are conclusive about the effectiveness of feature selection for neural networks and validate the hypothesis of the research.
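
    As a purely illustrative companion to the abstract above, the sketch below applies a mutual-information filter, a common stand-in for information-gain feature selection, before a small neural-network classifier; the synthetic data and the chosen subset sizes are assumptions, not the study's setup.

```python
# Purely illustrative: an information-gain style filter (mutual information) ahead of a
# small neural-network credit scorer, evaluated by cross-validation. Synthetic data
# stands in for the German/Australian/Japanese credit sets mentioned in the abstract.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=24, n_informative=8, random_state=0)

for k in (24, 12, 8):  # full feature set versus two reduced subsets
    model = make_pipeline(
        StandardScaler(),
        SelectKBest(mutual_info_classif, k=k),        # info-gain style univariate filter
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
    )
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"k={k:2d} features -> CV accuracy {acc:.3f}")
```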

  8. Whale strandings: hypothesis.

    Science.gov (United States)

    Mawson, A R

    1978-01-01

    The hypothesis is presented that whales become stranded inadvertently as a consequence of seeking stimulation. The animals enter shallow water in order to roll over, bask, and rub themselves in the sand, and are trapped by the receding tide. It is suggested that stimulation-seeking behavior (and stranding) reflects a general sympathetic nervous system response which may be due to a number of factors such as pain, discomfort, reproductive state, and other biorhythmic changes.

  9. Optimal selection of autoregressive model coefficients for early damage detectability with an application to wind turbine blades

    Science.gov (United States)

    Hoell, Simon; Omenzetter, Piotr

    2016-03-01

    Data-driven vibration-based damage detection techniques can be competitive because of their lower instrumentation and data analysis costs. The use of autoregressive model coefficients (ARMCs) as damage sensitive features (DSFs) is one such technique. So far, as with other DSFs, either full sets of coefficients or subsets selected by trial and error have been used, but this can lead to suboptimal composition of multivariate DSFs and decreased damage detection performance. This study enhances the selection of ARMCs for statistical hypothesis testing for damage presence. Two approaches for systematic ARMC selection, based on either adding or eliminating the coefficients one by one or using a genetic algorithm (GA), are proposed. The methods are applied to a numerical model of an aerodynamically excited large composite wind turbine blade with disbonding damage. The GA outperforms the other selection methods and enables building multivariate DSFs that markedly enhance early damage detectability and are insensitive to measurement noise.

  10. Mesoamerican cosmovision: an hypothesis.

    Science.gov (United States)

    Franch, J. A.

    In the present conference the author explains a new hypothesis for interpreting the cosmogonic vision of the peoples and cultures of the Mesoamerican area during the precolumbian period. The hypothesis at issue describes an irregular octahedral form, or two pyramids joined at the base, such that the celestial pyramid has thirteen heavens in the form of platforms, with the zenith being the seventh platform, whereas the infraworld pyramid has nine platforms. The sequence of the heavens comes to an end in the thirteenth heaven, on the West side of the world, that is to say the Omeyocan or the Tamoanchan, whereas the ninth infraworld is the Apochcalocan. This is the point of intercommunication between the celestial world and the infraworld, the place of Death and Birth. In developing this hypothesis the author draws on a great number of ethnographic testimonies taken from the Totonacs, Tzotziles and Mayas and, along with these, from South American areas, as in the case of the Kogi of Colombia. The author has also considered evidence from the ancient codices as well as numerous sculptures and reliefs, especially from the Aztec culture.

  11. Selection of climate change scenario data for impact modelling

    DEFF Research Database (Denmark)

    Sloth Madsen, M; Fox Maule, C; MacKellar, N

    2012-01-01

    Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study illustrates how the projected climate change signal of important variables such as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make the climate projections suitable for impact analysis at the local scale a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad-hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented...

  12. Fuzzy MCDM Model for Risk Factor Selection in Construction Projects

    Directory of Open Access Journals (Sweden)

    Pejman Rezakhani

    2012-11-01

    Full Text Available Risk factor selection is an important step in a successful risk management plan. There are many risk factors in a construction project, and through an effective and systematic risk selection process the most critical risks can be distinguished to receive more attention. In this paper, through a comprehensive literature survey, the most significant risk factors in a construction project are classified in a hierarchical structure. For effective risk factor selection, a modified rational multi-criteria decision making (MCDM) model is developed. This model is a consensus rule based model and has the optimization property of rational models. By applying fuzzy logic to this model, uncertainty factors in group decision making, such as experts' influence weights and their preferences and judgments for the risk selection criteria, will be assessed. Also, an intelligent checking process to check the logical consistency of experts' preferences will be implemented during the decision making process. The solution inferred from this method has the highest degree of acceptance among group members, and the consistency of individual preferences is checked by inference rules. This is an efficient and effective approach to prioritize and select risks based on decisions made by a group of experts in construction projects. The applicability of the presented method is assessed through a case study.

  13. A Hybrid Program Projects Selection Model for Nonprofit TV Stations

    Directory of Open Access Journals (Sweden)

    Kuei-Lun Chang

    2015-01-01

    Full Text Available This study develops a hybrid multiple criteria decision making (MCDM) model to select program projects for nonprofit TV stations on the basis of managers' perceptions. Using the concepts of the balanced scorecard (BSC) and corporate social responsibility (CSR), we collect criteria for selecting the best program project. The fuzzy Delphi method, which can lead to better criteria selection, is used to refine the criteria. Next, considering the interdependence among the selection criteria, the analytic network process (ANP) is used to obtain their weights. To avoid the additional calculations and pairwise comparisons required by ANP, the technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. A case study is presented to demonstrate the applicability of the proposed model.
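
    Of the steps listed in the abstract, the final TOPSIS ranking is the most mechanical, and a generic sketch of it follows; the project scores, ANP-style weights and benefit/cost labels are invented for illustration and are not taken from the study.

```python
# Hedged sketch of the TOPSIS ranking step only (illustrative numbers, not the study's
# data); criterion weights, e.g. from ANP, are taken as given.
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives: rows = alternatives, columns = criteria."""
    norm = scores / np.linalg.norm(scores, axis=0)            # vector normalisation
    v = norm * weights                                        # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))   # positive ideal solution
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))    # negative ideal solution
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                            # closeness coefficient

scores = np.array([[7, 9, 4], [8, 6, 5], [6, 8, 7]], dtype=float)  # 3 projects x 3 criteria
weights = np.array([0.5, 0.3, 0.2])                                # e.g. ANP-derived weights
benefit = np.array([True, True, False])                            # third criterion is a cost
closeness = topsis(scores, weights, benefit)
print("ranking (best first):", np.argsort(-closeness), closeness.round(3))
```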

  14. A SUPPLIER SELECTION MODEL FOR SOFTWARE DEVELOPMENT OUTSOURCING

    Directory of Open Access Journals (Sweden)

    Hancu Lucian-Viorel

    2010-12-01

    Full Text Available This paper presents a multi-criteria decision making model used for supplier selection for software development outsourcing on e-marketplaces. This model can be used in auctions. The supplier selection process has become complex and difficult over the last twenty years as the Internet has come to play an important role in business management. Companies have to concentrate their efforts on their core activities, and other activities should be outsourced. They can achieve significant cost reductions by using e-marketplaces in their purchasing process and by using decision support systems for supplier selection. Many approaches for the supplier evaluation and selection process have been proposed in the literature. The performance of potential suppliers is evaluated using multi-criteria decision making methods rather than considering a single factor such as cost.

  15. Dream interpretation, affect, and the theory of neuronal group selection: Freud, Winnicott, Bion, and Modell.

    Science.gov (United States)

    Shields, Walker

    2006-12-01

    The author uses a dream specimen as interpreted during psychoanalysis to illustrate Modell's hypothesis that Edelman's theory of neuronal group selection (TNGS) may provide a valuable neurobiological model for Freud's dynamic unconscious, imaginative processes in the mind, the retranscription of memory in psychoanalysis, and intersubjective processes in the analytic relationship. He draws parallels between the interpretation of the dream material with keen attention to affect-laden meanings in the evolving analytic relationship in the domain of psychoanalysis and the principles of Edelman's TNGS in the domain of neurobiology. The author notes how this correlation may underscore the importance of dream interpretation in psychoanalysis. He also suggests areas for further investigation in both realms based on study of their interplay.

  16. Adverse Selection Models with Three States of Nature

    Directory of Open Access Journals (Sweden)

    Daniela MARINESCU

    2011-02-01

    Full Text Available In the paper we analyze an adverse selection model with three states of nature, where both the Principal and the Agent are risk neutral. When solving the model, we use the informational rents and the efforts as variables. We derive the optimal contract in the situation of asymmetric information. The paper ends with the characteristics of the optimal contract and the main conclusions of the model.

  17. Bayesian model selection for constrained multivariate normal linear models

    NARCIS (Netherlands)

    Mulder, J.

    2010-01-01

    The expectations that researchers have about the structure in the data can often be formulated in terms of equality constraints and/or inequality constraints on the parameters in the model that is used. In a (M)AN(C)OVA model, researchers have expectations about the differences between the

  18. Is the fluid mosaic (and the accompanying raft hypothesis) a suitable model to describe fundamental features of biological membranes? What may be missing?

    Directory of Open Access Journals (Sweden)

    Luis Alberto Bagatolli

    2013-11-01

    Full Text Available The structure, dynamics, and stability of lipid bilayers are controlled by thermodynamic forces, leading to overall tensionless membranes with a distinct lateral organization and a conspicuous lateral pressure profile. Bilayers are also subject to built-in curvature-stress instabilities that may be released locally or globally in terms of morphological changes leading to the formation of non-lamellar and curved structures. A key controller of the bilayer’s propensity to form curved structures is the average molecular shape of the different lipid molecules. Via the curvature stress, molecular shape mediates a coupling to membrane-protein function and provides a set of physical mechanisms for formation of lipid domains and laterally differentiated regions in the plane of the membrane. Unfortunately, these relevant physical features of membranes are often ignored in the most popular models for biological membranes. Results from a number of experimental and theoretical studies emphasize the significance of these fundamental physical properties and call for a refinement of the fluid mosaic model (and the accompanying raft hypothesis).

  19. Genetic signatures of natural selection in a model invasive ascidian

    Science.gov (United States)

    Lin, Yaping; Chen, Yiyong; Yi, Changho; Fong, Jonathan J.; Kim, Won; Rius, Marc; Zhan, Aibin

    2017-01-01

    Invasive species represent promising models to study species’ responses to rapidly changing environments. Although local adaptation frequently occurs during contemporary range expansion, the associated genetic signatures at both population and genomic levels remain largely unknown. Here, we use genome-wide gene-associated microsatellites to investigate genetic signatures of natural selection in a model invasive ascidian, Ciona robusta. Population genetic analyses of 150 individuals sampled in Korea, New Zealand, South Africa and Spain showed significant genetic differentiation among populations. Based on outlier tests, we found a high incidence of signatures of directional selection at 19 loci. Hitchhiking mapping analyses identified 12 directional selective sweep regions, and all selective sweep windows on chromosomes were narrow (~8.9 kb). Further analyses identified 132 candidate genes under selection. When we compared our genetic data and six crucial environmental variables, 16 putatively selected loci showed significant correlation with these environmental variables. This suggests that the local environmental conditions have left significant signatures of selection at both population and genomic levels. Finally, we identified “plastic” genomic regions and genes that are promising regions to investigate evolutionary responses to rapid environmental change in C. robusta. PMID:28266616

  20. IT vendor selection model by using structural equation model & analytical hierarchy process

    Science.gov (United States)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's global marketplace competitiveness. Improper selection and evaluation of potential vendors can hamper an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research intends to develop a new hybrid model for the vendor selection process with better decision making. The proposed model provides a suitable tool for assisting decision makers and managers to make the right decisions and select the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model has been designed after a thorough literature study. The proposed hybrid model will be applied using a real-life case study to assess its effectiveness. In addition, a what-if analysis technique will be used for model validation purposes.
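
    The AHP half of the proposed hybrid is the part that reduces most directly to a calculation, so a minimal sketch of it follows; the pairwise-comparison matrix and the criteria it compares are hypothetical, and the SEM step is not shown.

```python
# Sketch of the AHP weighting step only (illustrative pairwise-comparison matrix):
# criterion weights are the normalised principal eigenvector, with a consistency check.
import numpy as np

# Pairwise comparisons of three vendor-selection criteria on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
cr = ci / 0.58                               # random index for n = 3 is about 0.58
print("criterion weights:", weights.round(3), "| consistency ratio:", round(cr, 3))
```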

  1. Robust model selection and the statistical classification of languages

    Science.gov (United States)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we will focus on the family of variable length Markov chain models, which include the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we show the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of subsamples of two or more stochastic processes, with most of the subsamples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty with this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language. The selection is made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology estimating

  2. Mass concentration in a nonlocal model of clonal selection.

    Science.gov (United States)

    Busse, J-E; Gwiazda, P; Marciniak-Czochra, A

    2016-10-01

    Self-renewal is a constitutive property of stem cells. Testing the cancer stem cell hypothesis requires investigation of the impact of self-renewal on cancer expansion. To better understand this impact, we propose a mathematical model describing the dynamics of a continuum of cell clones structured by the self-renewal potential. The model is an extension of the finite multi-compartment models of interactions between normal and cancer cells in acute leukemias. It takes a form of a system of integro-differential equations with a nonlinear and nonlocal coupling which describes regulatory feedback loops of cell proliferation and differentiation. We show that this coupling leads to mass concentration in points corresponding to the maxima of the self-renewal potential and the solutions of the model tend asymptotically to Dirac measures multiplied by positive constants. Furthermore, using a Lyapunov function constructed for the finite dimensional counterpart of the model, we prove that the total mass of the solution converges to a globally stable equilibrium. Additionally, we show stability of the model in the space of positive Radon measures equipped with the flat metric (bounded Lipschitz distance). Analytical results are illustrated by numerical simulations.

  3. Selecting Optimal Subset of Features for Student Performance Model

    Directory of Open Access Journals (Sweden)

    Hany M. Harb

    2012-09-01

    Full Text Available Educational data mining (EDM) is a new and growing research area in which core data mining concepts are used in the educational field for the purpose of extracting useful information about student behavior in the learning process. Classification methods like decision trees, rule mining, and Bayesian networks can be applied to educational data for predicting student behavior, such as performance in an examination. This prediction may help in student evaluation. As feature selection influences the predictive accuracy of any performance model, it is essential to study in detail the effectiveness of student performance models in connection with feature selection techniques. The main objective of this work is to achieve high predictive performance by adopting various feature selection techniques to increase the predictive accuracy with the least number of features. The outcomes show a reduction in computational time and construction cost in both the training and classification phases of the student performance model.

  4. Short-Run Asset Selection using a Logistic Model

    Directory of Open Access Journals (Sweden)

    Walter Gonçalves Junior

    2011-06-01

    Full Text Available Investors constantly look for significant predictors and accurate models to forecast future results, whose occasional efficacy ends up being neutralized by market efficiency. Regardless, such predictors are widely used in the search for better (and more unique) perceptions. This paper aims to investigate to what extent some of the most notorious indicators have discriminatory power to select stocks, and whether it is feasible with such variables to build models that could anticipate those with good performance. To that end, logistic regressions were conducted with stocks traded at Bovespa, using the selected indicators as explanatory variables. Among the indicators investigated, Bovespa Index outputs, liquidity, the Sharpe Ratio, ROE, MB, size and age were evidenced to be significant predictors. Half-year logistic models were also fitted and checked for acceptable discriminatory power for asset selection.
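
    To make the modeling step concrete, the sketch below fits a logistic model on synthetic indicator data with names echoing those in the abstract; the variables, their distributions, and the out-performance label are all invented for illustration and are not the study's data.

```python
# Illustrative sketch (synthetic data): a logistic model classifying stocks as likely
# out- or under-performers from indicator-style features.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "liquidity": rng.random(n),
    "sharpe": rng.normal(0.5, 1.0, n),
    "roe": rng.normal(0.1, 0.2, n),
    "mb": rng.lognormal(0.0, 0.5, n),
    "size": rng.normal(14, 2, n),
    "age": rng.integers(1, 40, n),
})
# Hypothetical label: did the stock beat the index over the next half-year?
latent = 1.5 * df["sharpe"] + 2.0 * df["roe"] - 0.1 + rng.normal(0, 1, n)
df["beat_index"] = (latent > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="beat_index"),
                                          df["beat_index"], random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("out-of-sample AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
print(dict(zip(X_tr.columns, model.coef_[0].round(2))))
```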

  5. Sample selection and taste correlation in discrete choice transport modelling

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2008-01-01

    the question for a broader class of models. It is shown that the original result may be somewhat generalised. Another question investigated is whether mode choice operates as a self-selection mechanism in the estimation of the value of travel time. The results show that self-selection can at least partly...... explain counterintuitive results in value of travel time estimation. However, the results also point at the difficulty of finding suitable instruments for the selection mechanism. Taste heterogeneity is another important aspect of discrete choice modelling. Mixed logit models are designed to capture...... of taste correlation in willingness-to-pay estimation are presented. The first contribution addresses how to incorporate taste correlation in the estimation of the value of travel time for public transport. Given a limited dataset the approach taken is to use theory on the value of travel time as guidance...

  6. Financial applications of a Tabu search variable selection model

    Directory of Open Access Journals (Sweden)

    Zvi Drezner

    2001-01-01

    Full Text Available We illustrate how a comparatively new technique, a Tabu search variable selection model [Drezner, Marcoulides and Salhi (1999)], can be applied efficiently within finance when the researcher must select a subset of variables from among the whole set of explanatory variables under consideration. Several types of problems in finance, including corporate and personal bankruptcy prediction, mortgage and credit scoring, and the selection of variables for the Arbitrage Pricing Model, require the researcher to select a subset of variables from a larger set. In order to demonstrate the usefulness of the Tabu search variable selection model, we: (1) illustrate its efficiency in comparison to the main alternative search procedures, such as stepwise regression and the Maximum R2 procedure, and (2) show how a version of the Tabu search procedure may be implemented when attempting to predict corporate bankruptcy. We accomplish (2) by indicating that a Tabu Search procedure increases the predictability of corporate bankruptcy by up to 10 percentage points in comparison to Altman's (1968) Z-Score model.
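
    For readers unfamiliar with the technique, the following is a generic tabu-search subset-selection sketch on synthetic regression data (flip-one-variable neighbourhoods, a short tabu tenure, cross-validated R^2 as the objective); it is not the Drezner, Marcoulides and Salhi implementation referenced above.

```python
# Generic tabu-search variable selection on synthetic regression data: neighbours flip
# one variable in or out of the model, recently flipped variables are temporarily tabu,
# and the best subset found overall is kept.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=15, n_informative=5, noise=10, random_state=0)

def score(mask):
    """Cross-validated R^2 of a linear model on the selected variables."""
    if not mask.any():
        return -np.inf
    return cross_val_score(LinearRegression(), X[:, mask], y, cv=5).mean()

rng = np.random.default_rng(0)
current = rng.random(15) > 0.5
best, best_score = current.copy(), score(current)
tabu = {}                                        # variable index -> iteration it stays tabu until
for it in range(60):
    moves = []
    for j in range(15):
        cand = current.copy()
        cand[j] = not cand[j]
        s = score(cand)
        # Aspiration criterion: a tabu move is still allowed if it yields a new best.
        if tabu.get(j, -1) <= it or s > best_score:
            moves.append((s, j, cand))
    s, j, current = max(moves, key=lambda m: m[0])
    tabu[j] = it + 5                             # discourage flipping j again for a few iterations
    if s > best_score:
        best, best_score = current.copy(), s
print("selected variables:", np.flatnonzero(best), "| CV R^2:", round(best_score, 3))
```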

  7. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that if a theory model that specifies the correct set of m relevant exogenous variables, x_t, is embedded within the larger set of m+k candidate variables, (x_t, w_t), then selection over the second set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w_t are relevant.

  8. The Drift Burst Hypothesis

    DEFF Research Database (Denmark)

    Christensen, Kim; Oomen, Roel; Renò, Roberto

    The Drift Burst Hypothesis postulates the existence of short-lived locally explosive trends in the price paths of financial assets. The recent US equity and Treasury flash crashes can be viewed as two high profile manifestations of such dynamics, but we argue that drift bursts of varying magnitude......, currencies and commodities. We find that the majority of identified drift bursts are accompanied by strong price reversals and these can therefore be regarded as “flash crashes” that span brief periods of severe market disruption without any material longer term price impacts....

  9. Hypothesis in Language Learning Research

    Directory of Open Access Journals (Sweden)

    Mohammad Adnan Latief

    2003-01-01

    Full Text Available Hypotheses are very often inevitable in research activities. Hypotheses are of at least three kinds, which should not be confused with one another. A study trying to measure the relationship between variables can predict the finding based on theory or logical common sense; this prediction is called a theoretical hypothesis. In testing a hypothesis quantitatively, the theoretical hypothesis should be transformed into a statistical hypothesis, which takes the form of a null hypothesis and its alternatives. It is the null hypothesis that is tested to justify its rejection or otherwise its acceptance. In a qualitative study, the result of the first data analysis is called a temporal empirical hypothesis that should be validated with more data. This cycle of rechecking the result with more data is repeated again and again until the hypothesis becomes the final conclusion.

  10. TIME SERIES FORECASTING WITH MULTIPLE CANDIDATE MODELS: SELECTING OR COMBINING?

    Institute of Scientific and Technical Information of China (English)

    YU Lean; WANG Shouyang; K. K. Lai; Y.Nakamori

    2005-01-01

    Various mathematical models have been commonly used in time series analysis and forecasting. In these processes, academic researchers and business practitioners often come up against two important problems. One is whether to select an appropriate modeling approach for prediction purposes or to combine different individual approaches into a single forecast (for different/dissimilar modeling approaches). The other is whether to select the best candidate model for forecasting or to mix various candidate models with different parameters into a new forecast (for the same/similar modeling approaches). In this study, we propose a set of computational procedures to resolve these two issues via two judgmental criteria. Meanwhile, in view of the problems presented in the literature, a novel modeling technique is also proposed to overcome the drawbacks of existing combined forecasting methods. To verify the efficiency and reliability of the proposed procedure and modeling technique, simulations and real data examples are conducted in this study. The results reveal that the proposed procedure and modeling technique can be used as a feasible solution for time series forecasting with multiple candidate models.
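
    The select-or-combine question can be illustrated in a few lines: the toy comparison below pits three naive candidate forecasts against their equal-weight average on a simulated seasonal series. It is generic textbook material, not the procedure proposed in the record above.

```python
# Toy select-versus-combine comparison: best single naive forecast vs. an equal-weight
# combination, scored by RMSE on a simulated monthly series with trend and seasonality.
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(120)
series = 10 + 0.05 * t + np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.4, t.size)
train, test = series[:100], series[100:]

# Three naive candidate forecasts for the last 20 points.
f_naive = np.repeat(train[-1], 20)                                    # random walk
f_drift = train[-1] + (train[-1] - train[0]) / 99 * np.arange(1, 21)  # drift
f_seasonal = np.tile(train[-12:], 2)[:20]                             # seasonal naive

def rmse(f):
    return np.sqrt(np.mean((test - f) ** 2))

candidates = {"naive": f_naive, "drift": f_drift, "seasonal": f_seasonal}
errors = {name: round(rmse(f), 3) for name, f in candidates.items()}
combined = rmse(np.mean(list(candidates.values()), axis=0))
print(errors, "| equal-weight combination:", round(combined, 3))
```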

  11. Bayesian selection of nucleotide substitution models and their site assignments.

    Science.gov (United States)

    Wu, Chieh-Hsi; Suchard, Marc A; Drummond, Alexei J

    2013-03-01

    Probabilistic inference of a phylogenetic tree from molecular sequence data is predicated on a substitution model describing the relative rates of change between character states along the tree for each site in the multiple sequence alignment. Commonly, one assumes that the substitution model is homogeneous across sites within large partitions of the alignment, assigns these partitions a priori, and then fixes their underlying substitution model to the best-fitting model from a hierarchy of named models. Here, we introduce an automatic model selection and model averaging approach within a Bayesian framework that simultaneously estimates the number of partitions, the assignment of sites to partitions, the substitution model for each partition, and the uncertainty in these selections. This new approach is implemented as an add-on to the BEAST 2 software platform. We find that this approach dramatically improves the fit of the nucleotide substitution model compared with existing approaches, and we show, using a number of example data sets, that as many as nine partitions are required to explain the heterogeneity in nucleotide substitution process across sites in a single gene analysis. In some instances, this improved modeling of the substitution process can have a measurable effect on downstream inference, including the estimated phylogeny, relative divergence times, and effective population size histories.

  12. An Integrated Model For Online shopping, Using Selective Models

    Directory of Open Access Journals (Sweden)

    Fereshteh Rabiei Dastjerdi

    Full Text Available As in traditional shopping, customer acquisition and retention are critical issues in the success of an online store. Many factors impact how, and if, customers accept online shopping. Models presented in recent years, only focus on behavioral or technolo ...

  13. Event rate and reaction time performance in ADHD: Testing predictions from the state regulation deficit hypothesis using an ex-Gaussian model.

    Science.gov (United States)

    Metin, Baris; Wiersema, Jan R; Verguts, Tom; Gasthuys, Roos; van Der Meere, Jacob J; Roeyers, Herbert; Sonuga-Barke, Edmund

    2014-12-06

    According to the state regulation deficit (SRD) account, ADHD is associated with a problem using effort to maintain an optimal activation state under demanding task settings such as very fast or very slow event rates. This leads to a prediction of disrupted performance at event rate extremes reflected in higher Gaussian response variability that is a putative marker of activation during motor preparation. In the current study, we tested this hypothesis using ex-Gaussian modeling, which distinguishes Gaussian from non-Gaussian variability. Twenty-five children with ADHD and 29 typically developing controls performed a simple Go/No-Go task under four different event-rate conditions. There was an accentuated quadratic relationship between event rate and Gaussian variability in the ADHD group compared to the controls. The children with ADHD had greater Gaussian variability at very fast and very slow event rates but not at moderate event rates. The results provide evidence for the SRD account of ADHD. However, given that this effect did not explain all group differences (some of which were independent of event rate) other cognitive and/or motivational processes are also likely implicated in ADHD performance deficits.
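
    For context, the ex-Gaussian decomposition used in the record above separates a Gaussian component (mu, sigma) from an exponential tail (tau); the sketch below fits it with scipy's exponnorm distribution on simulated reaction times, not the study's Go/No-Go data.

```python
# Minimal sketch of separating Gaussian from exponential response-time variability with
# an ex-Gaussian fit (scipy's exponnorm distribution); the data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated RTs (seconds): Gaussian component (mu, sigma) plus exponential tail (tau).
mu, sigma, tau = 0.45, 0.05, 0.15
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

K, loc, scale = stats.exponnorm.fit(rts)     # exponnorm parameterises tau as K * sigma
print(f"mu ~ {loc:.3f}  sigma (Gaussian variability) ~ {scale:.3f}  tau ~ {K * scale:.3f}")
```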

  14. Selecting global climate models for regional climate change studies

    OpenAIRE

    Pierce, David W.; Barnett, Tim P.; Santer, Benjamin D.; Gleckler, Peter J.

    2009-01-01

    Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simula...

  15. Spatial Fleming-Viot models with selection and mutation

    CERN Document Server

    Dawson, Donald A

    2014-01-01

    This book constructs a rigorous framework for analysing selected phenomena in evolutionary theory of populations arising due to the combined effects of migration, selection and mutation in a spatial stochastic population model, namely the evolution towards fitter and fitter types through punctuated equilibria. The discussion is based on a number of new methods, in particular multiple scale analysis, nonlinear Markov processes and their entrance laws, atomic measure-valued evolutions and new forms of duality (for state-dependent mutation and multitype selection) which are used to prove ergodic theorems in this context and are applicable for many other questions and renormalization analysis for a variety of phenomena (stasis, punctuated equilibrium, failure of naive branching approximations, biodiversity) which occur due to the combination of rare mutation, mutation, resampling, migration and selection and make it necessary to mathematically bridge the gap (in the limit) between time and space scales.

  16. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
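
    Since AIC is central to the book's approach, a self-contained toy example of AIC-based selection among polynomial regression models follows; the data and candidate set are invented, and the Gaussian log-likelihood uses the maximum-likelihood error variance.

```python
# Toy AIC-based model selection among polynomial regressions:
# AIC = 2k - 2 * log-likelihood, smaller is better.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0, 0.1, x.size)

def aic_for_degree(d):
    coefs = np.polyfit(x, y, d)
    resid = y - np.polyval(coefs, x)
    n, k = y.size, d + 2                       # polynomial coefficients + error variance
    sigma2 = np.mean(resid**2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * loglik

aics = {d: aic_for_degree(d) for d in range(1, 6)}
best = min(aics, key=aics.get)
delta = {d: round(a - aics[best], 2) for d, a in aics.items()}   # Akaike differences
print("AIC differences by polynomial degree:", delta, "| selected degree:", best)
```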

  17. Selecting an optimal mixed products using grey relationship model

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2013-06-01

    Full Text Available This paper presents an integrated supplier selection and inventory management approach using a grey relationship model (GRM) as well as a multi-objective decision making process. The proposed model first ranks different suppliers based on the GRM technique and then determines the optimum level of inventory by considering different objectives. To show the implementation of the proposed model, we use benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558.]. The preliminary results indicate that the proposed model is capable of handling different criteria for supplier selection.
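
    The grey relational grade that drives the ranking step is easy to show in miniature; the sketch below uses an invented, already-normalised supplier-by-criterion matrix and illustrative weights rather than the Talluri and Baker data.

```python
# Sketch of the grey relational grade computation used to rank suppliers (illustrative
# numbers); all criteria here are treated as larger-is-better and already normalised.
import numpy as np

scores = np.array([[0.7, 0.9, 0.6],
                   [0.8, 0.6, 0.9],
                   [0.9, 0.7, 0.7]])           # suppliers x criteria
weights = np.array([0.4, 0.35, 0.25])
rho = 0.5                                      # distinguishing coefficient

reference = scores.max(axis=0)                 # ideal (reference) sequence
delta = np.abs(reference - scores)
coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
grades = coeff @ weights                       # grey relational grade per supplier
print("supplier ranking (best first):", np.argsort(-grades), grades.round(3))
```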

  18. A topic evolution model with sentiment and selective attention

    Science.gov (United States)

    Si, Xia-Meng; Wang, Wen-Dong; Zhai, Chun-Qing; Ma, Yan

    2017-04-01

    Topic evolution is a hybrid dynamics of information propagation and opinion interaction. The dynamics of opinion interaction is inherently interwoven with the dynamics of information propagation in the network, owing to the bidirectional influences between interaction and diffusion. The degree of sentiment determines whether the topic can continue to spread from a node, and selective attention determines the direction of information flow and the selection of communicatees. To this end, we put forward a sentiment-based mixed dynamics model with selective attention and apply Bayesian updating rules to it. Our model can indirectly describe isolated users who seem isolated from a topic, for whatever reason, even though everybody around them has heard about it. Numerical simulations show that more insiders initially and fewer simultaneous spreaders can lessen extremism. To promote topic diffusion or restrain the prevailing of extremism, fewer agents with constructive motivation and more agents with no involving motivation are encouraged.

  19. Evidence accumulation as a model for lexical selection.

    Science.gov (United States)

    Anders, R; Riès, S; van Maanen, L; Alario, F X

    2015-11-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process of selecting a lexical target from a number of alternatives, each of which has a varying activation (or signal support) that largely results from initial stimulus recognition. We present a thorough case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related to, or combined with, conventional psycholinguistic theory and its simulatory instantiations (generally, neural network models). Then, with a demonstrative application to a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.
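
    A stripped-down illustration of the accumulation idea is given below: several candidate words race toward a common threshold and the first to reach it is selected. The drift rates, noise level and threshold are arbitrary, and the sketch is not the authors' fitted model.

```python
# Toy race-accumulator simulation of lexical selection: the candidate whose evidence
# reaches threshold first is chosen, yielding a choice and a response time.
import numpy as np

def simulate_trial(drifts, threshold=1.0, noise=0.35, dt=0.01, rng=None):
    rng = rng or np.random.default_rng()
    evidence = np.zeros(len(drifts))
    t = 0.0
    while evidence.max() < threshold:
        evidence += np.array(drifts) * dt + rng.normal(0, noise * np.sqrt(dt), len(drifts))
        t += dt
    return int(np.argmax(evidence)), t         # selected word index, decision time

rng = np.random.default_rng(7)
drifts = [1.2, 0.8, 0.5]                       # target word has the strongest support
trials = [simulate_trial(drifts, rng=rng) for _ in range(2000)]
choices = np.array([c for c, _ in trials])
rts = np.array([t for _, t in trials])
print("P(target selected):", round((choices == 0).mean(), 3), "mean RT:", round(rts.mean(), 3))
```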

  20. COGNITIVE INTERPRETATION OF INPUT HYPOTHESIS

    Institute of Scientific and Technical Information of China (English)

    Wang Hongyue; Ren Liankui

    2004-01-01

    Krashen's Input Hypothesis, together with its earlier version, the Monitor Model, is an influential theory in Second Language Acquisition research. In his studies, Krashen, on the one hand, emphasizes the part “comprehensible input” plays in learning a second language; on the other hand, he simply defines “comprehensible input” as “a little beyond the learner's current level”. What input can be considered “a little beyond the learner's current level”? Krashen gives no further explanation. This paper tries to offer a more concrete and more detailed interpretation with Ausubel's Cognitive Assimilation theory.

  1. Second-order model selection in mixture experiments

    Energy Technology Data Exchange (ETDEWEB)

    Redgate, P.E.; Piepel, G.F.; Hrma, P.R.

    1992-07-01

    Full second-order models for q-component mixture experiments contain q(q+1)/2 terms, which increases rapidly as q increases. Fitting full second-order models for larger q may involve problems with ill-conditioning and overfitting. These problems can be remedied by transforming the mixture components and/or fitting reduced forms of the full second-order mixture model. Various component transformation and model reduction approaches are discussed. Data from a 10-component nuclear waste glass study are used to illustrate ill-conditioning and overfitting problems that can be encountered when fitting a full second-order mixture model. Component transformation, model term selection, and model evaluation/validation techniques are discussed and illustrated for the waste glass example.

  2. Measuring balance and model selection in propensity score methods

    NARCIS (Netherlands)

    Belitser, S.; Martens, Edwin P.; Pestman, Wiebe R.; Groenwold, Rolf H.H.; De Boer, Anthonius; Klungel, Olaf H.

    2011-01-01

    Background: Propensity score (PS) methods focus on balancing confounders between groups to estimate an unbiased treatment or exposure effect. However, there is a lack of attention to actually measuring, reporting and using the information on balance, for instance for model selection. Objectives: To de

  3. Selecting crop models for decision making in wheat insurance

    NARCIS (Netherlands)

    Castaneda Vera, A.; Leffelaar, P.A.; Alvaro-Fuentes, J.; Cantero-Martinez, C.; Minguez, M.I.

    2015-01-01

    In crop insurance, the accuracy with which the insurer quantifies the actual risk is highly dependent on the availability of actual yield data. Crop models might be valuable tools to generate data on expected yields for risk assessment when no historical records are available. However, selecting a c

  4. Cross-validation criteria for SETAR model selection

    NARCIS (Netherlands)

    de Gooijer, J.G.

    2001-01-01

    Three cross-validation criteria, denoted C, C_c, and C_u, are proposed for selecting the orders of a self-exciting threshold autoregressive (SETAR) model when both the delay and the threshold value are unknown. The derivation of C is within a natural cross-validation framework. The criterion C_c is si

  5. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

    ’s optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all...

  7. Accurate model selection of relaxed molecular clocks in bayesian phylogenetics.

    Science.gov (United States)

    Baele, Guy; Li, Wai Lok Sibon; Drummond, Alexei J; Suchard, Marc A; Lemey, Philippe

    2013-02-01

    Recent implementations of path sampling (PS) and stepping-stone sampling (SS) have been shown to outperform the harmonic mean estimator (HME) and a posterior simulation-based analog of Akaike's information criterion through Markov chain Monte Carlo (AICM), in Bayesian model selection of demographic and molecular clock models. Almost simultaneously, a Bayesian model averaging approach was developed that avoids conditioning on a single model but averages over a set of relaxed clock models. This approach returns estimates of the posterior probability of each clock model through which one can estimate the Bayes factor in favor of the maximum a posteriori (MAP) clock model; however, this Bayes factor estimate may suffer when the posterior probability of the MAP model approaches 1. Here, we compare these two recent developments with the HME, stabilized/smoothed HME (sHME), and AICM, using both synthetic and empirical data. Our comparison shows reassuringly that MAP identification and its Bayes factor provide similar performance to PS and SS and that these approaches considerably outperform HME, sHME, and AICM in selecting the correct underlying clock model. We also illustrate the importance of using proper priors on a large set of empirical data sets.

  8. Rank-based model selection for multiple ions quantum tomography

    Science.gov (United States)

    Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian

    2012-10-01

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the ‘sparsity’ properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well-established model selection methods, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), to models consisting of states of fixed rank and datasets such as those currently produced in multiple-ion experiments. We test the performance of AIC and BIC on randomly chosen low-rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one-ion states. We then apply the methods to real data from a four-ion experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ2 test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements.

  9. Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing

    Science.gov (United States)

    Lee, L.; Helsel, D.

    2007-01-01

    Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data-perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply-censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis" where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
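    Although the S-language routines themselves are not reproduced here, the underlying computation is easy to sketch: left-censored data are commonly handled by flipping the values about a constant larger than the maximum, applying the ordinary (right-censored) Kaplan-Meier estimator, and flipping back to obtain an empirical distribution function. The Python sketch below is illustrative only; the detection limits and concentrations are made up.

```python
import numpy as np

def kaplan_meier(times, event):
    """Ordinary (right-censored) Kaplan-Meier survival estimate.
    times: observed values; event: 1 for an exact value, 0 for censored."""
    times = np.asarray(times, float)
    event = np.asarray(event, int)
    surv, s = [], 1.0
    for t in np.unique(times[event == 1]):
        at_risk = np.sum(times >= t)
        d = np.sum((times == t) & (event == 1))
        s *= 1.0 - d / at_risk
        surv.append((t, s))
    return surv

def left_censored_cdf(values, below_limit, flip_const=None):
    """Approximate empirical CDF for multiply left-censored data via flipping."""
    values = np.asarray(values, float)
    below_limit = np.asarray(below_limit, bool)   # True -> reported as '<DL'
    if flip_const is None:
        flip_const = values.max() + 1.0
    flipped = flip_const - values                 # left-censoring becomes right-censoring
    km = kaplan_meier(flipped, event=~below_limit)
    # The survival function of the flipped data at (flip_const - x) approximates
    # the CDF of the original data at x.
    return sorted((flip_const - t, s) for t, s in km)

# Concentrations with two detection limits (0.5 and 1.0); True means below the limit.
conc = [0.5, 1.0, 1.0, 1.3, 2.1, 0.5, 3.4, 1.8]
cens = [True, True, False, False, False, True, False, False]
for x, f in left_censored_cdf(conc, cens):
    print(f"F({x:.2f}) ~= {f:.3f}")
```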

  10. Hypothesis testing in hydrology: Theory and practice

    Science.gov (United States)

    Kirchner, James; Pfister, Laurent

    2017-04-01

    Well-posed hypothesis tests have spurred major advances in hydrological theory. However, a random sample of recent research papers suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias - the tendency to value and trust confirmations more than refutations - among both researchers and reviewers. Hypothesis testing is not the only recipe for scientific progress, however: exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.

  11. Selective refinement and selection of near-native models in protein structure prediction.

    Science.gov (United States)

    Zhang, Jiong; Barz, Bogdan; Zhang, Jingfen; Xu, Dong; Kosztin, Ioan

    2015-10-01

    In recent years, in silico protein structure prediction has reached a level where fully automated servers can generate large pools of near-native structures. However, the identification and further refinement of the best structures from the pool of models remain problematic. To address these issues, we have developed (i) a target-specific selective refinement (SR) protocol and (ii) a molecular dynamics (MD) simulation based ranking (SMDR) method. In SR the all-atom refinement of structures is accomplished via the Rosetta Relax protocol, subject to specific constraints determined by the size and complexity of the target. The best-refined models are selected with SMDR by testing their relative stability against gradual heating through all-atom MD simulations. Through extensive testing we have found that Mufold-MD, our fully automated protein structure prediction server updated with the SR and SMDR modules, consistently outperformed its previous versions.

  12. Model selection for the extraction of movement primitives.

    Science.gov (United States)

    Endres, Dominik M; Chiovetto, Enrico; Giese, Martin A

    2013-01-01

    A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model (d'Avella and Tresch, 2002). However, choosing the parameters of these models, or indeed choosing the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion which allows one to select the model type, number of primitives and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground truth data, showing that it performs at least as well as traditional model selection criteria [Bayesian information criterion, BIC (Schwarz, 1978) and the Akaike Information Criterion (AIC) (Akaike, 1974)]. Then, we analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources can best account for the data.

  13. Model selection for the extraction of movement primitives

    Directory of Open Access Journals (Sweden)

    Dominik M Endres

    2013-12-01

    Full Text Available A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model. However, choosing the parameters of these models, or indeed choosing the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion which allows one to select the model type, number of primitives and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground truth data, showing that it performs at least as well as traditional model selection criteria (the Bayesian information criterion, BIC, and the Akaike information criterion, AIC). Then, we analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources can best account for the data.

  14. How many separable sources? Model selection in independent components analysis.

    Science.gov (United States)

    Woods, Roger P; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.
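    The practical recommendation above, cross-validation rather than AIC when a Gaussian subspace is involved, can be illustrated on a much simpler cousin of mixed ICA/PCA: choosing the dimension of a probabilistic PCA (purely Gaussian) model by held-out log-likelihood. The scikit-learn sketch below uses synthetic data and is not the authors' algorithm.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
# Synthetic data: 3 informative directions plus isotropic noise in 10 dimensions.
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 10)) + 0.5 * rng.normal(size=(500, 10))

def cv_loglik(X, n_components, n_splits=5):
    """Mean held-out log-likelihood of a probabilistic PCA model."""
    scores = []
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        model = PCA(n_components=n_components).fit(X[train])
        scores.append(model.score(X[test]))   # average log-likelihood per sample
    return np.mean(scores)

candidates = range(1, 9)
best = max(candidates, key=lambda k: cv_loglik(X, k))
print("cross-validation selects", best, "components")
```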

  15. Statistical modelling in biostatistics and bioinformatics selected papers

    CERN Document Server

    Peng, Defen

    2014-01-01

    This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...

  16. A Study on the Input Hypothesis and Interaction Hypothesis

    Institute of Scientific and Technical Information of China (English)

    李雪清

    2016-01-01

    In Second Language Acquisition theory, input and interaction are considered two key factors greatly influencing learners’ acquisition rate and quality, and input and interaction research has therefore been receiving increasing attention in recent years. Among this large body of research, Krashen’s input hypothesis and Long’s interaction hypothesis are perhaps the most influential theories, from which most input and interaction studies have developed. The input hypothesis claims that comprehensible input is the only way to acquire language, whereas the interaction hypothesis argues that interaction is necessary for language acquisition. Therefore, this thesis attempts to conduct a descriptive analysis of the input hypothesis and the interaction hypothesis, based on their basic ideas, theoretical basis, comparisons and empirical work. It concludes that the input hypothesis and the interaction hypothesis succeed in interpreting the process of language acquisition to some extent, and offer both theoretical and practical inspirations for second language teaching.

  17. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories. ... Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  18. A New Approach to Model Verification, Falsification and Selection

    Directory of Open Access Journals (Sweden)

    Andrew J. Buck

    2015-06-01

    Full Text Available This paper shows that a qualitative analysis, i.e., an assessment of the consistency of a hypothesized sign pattern for structural arrays with the sign pattern of the estimated reduced form, can always provide decisive insight into a model’s validity both in general and compared to other models. Qualitative analysis can show that it is impossible for some models to have generated the data used to estimate the reduced form, even though standard specification tests might show the model to be adequate. A partially specified structural hypothesis can be falsified by estimating as few as one reduced form equation. Zero restrictions in the structure can themselves be falsified. It is further shown how the information content of the hypothesized structural sign patterns can be measured using a commonly applied concept of statistical entropy. The lower the hypothesized structural sign pattern’s entropy, the more a priori information it proposes about the sign pattern of the estimated reduced form. The lower the entropy of a hypothesized structural sign pattern, the more it is subject to type 1 error and the less it is subject to type 2 error. Three cases illustrate the approach taken here.

  19. PROPOSAL OF AN EMPIRICAL MODEL FOR SUPPLIERS SELECTION

    Directory of Open Access Journals (Sweden)

    Paulo Ávila

    2015-03-01

    Full Text Available The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless, it is a critical process that significantly affects the operational performance of each company. In this work, through the literature review, five broad supplier selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Within these criteria, five sub-criteria were also included. Thereafter, a survey was elaborated and companies were contacted in order to answer which factors have more relevance in their decisions to choose suppliers. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or the Simple Multi-Attribute Rating Technique (SMART). The result of the research undertaken by the authors is a reference model that represents a decision-making support for the supplier/partner selection process.
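    At scoring time, the hierarchical linear-weighting model described above (whether the weights come from AHP or SMART) reduces to a weighted sum of normalised criterion scores per candidate supplier. The sketch below is purely illustrative: the weights and scores are invented, not the survey results of the paper.

```python
# Minimal SMART-style linear weighting for supplier ranking (illustrative values).
weights = {"quality": 0.30, "financial": 0.15, "synergies": 0.10,
           "cost": 0.25, "production_system": 0.20}   # should sum to 1

suppliers = {  # scores on a common 0-10 scale (hypothetical)
    "supplier_A": {"quality": 8, "financial": 6, "synergies": 5, "cost": 7, "production_system": 6},
    "supplier_B": {"quality": 6, "financial": 8, "synergies": 7, "cost": 9, "production_system": 5},
}

def smart_score(scores, weights):
    """Weighted sum of criterion scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[c] * scores[c] for c in weights)

for name in sorted(suppliers, key=lambda s: smart_score(suppliers[s], weights), reverse=True):
    print(name, round(smart_score(suppliers[name], weights), 2))
```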

  20. Supplier Selection in Virtual Enterprise Model of Manufacturing Supply Network

    Science.gov (United States)

    Kaihara, Toshiya; Opadiji, Jayeola F.

    The market-based approach to manufacturing supply network planning focuses on the competitive attitudes of various enterprises in the network to generate plans that seek to maximize the throughput of the network. It is this competitive behaviour of the member units that we explore in proposing a solution model for a supplier selection problem in convergent manufacturing supply networks. We present a formulation of autonomous units of the network as trading agents in a virtual enterprise network interacting to deliver value to market consumers and discuss the effect of internal and external trading parameters on the selection of suppliers by enterprise units.

  1. A model-based approach to selection of tag SNPs

    Directory of Open Access Journals (Sweden)

    Sun Fengzhu

    2006-06-01

    Full Text Available Abstract Background Single Nucleotide Polymorphisms (SNPs) are the most common type of polymorphisms found in the human genome. Effective genetic association studies require the identification of sets of tag SNPs that capture as much haplotype information as possible. Tag SNP selection is analogous to the problem of data compression in information theory. According to Shannon's framework, the optimal tag set maximizes the entropy of the tag SNPs subject to constraints on the number of SNPs. This approach requires an appropriate probabilistic model. Compared to simple measures of Linkage Disequilibrium (LD), a good model of haplotype sequences can more accurately account for LD structure. It also provides machinery for predicting tagged SNPs and thereby for assessing the performance of tag sets through their ability to predict larger SNP sets. Results Here, we compute the description code-lengths of SNP data for an array of models and we develop tag SNP selection methods based on these models and the strategy of entropy maximization. Using data sets from the HapMap and ENCODE projects, we show that the hidden Markov model introduced by Li and Stephens outperforms the other models in several aspects: description code-length of SNP data, information content of tag sets, and prediction of tagged SNPs. This is the first use of this model in the context of tag SNP selection. Conclusion Our study provides strong evidence that the tag sets selected by our best method, based on the Li and Stephens model, outperform those chosen by several existing methods. The results also suggest that information content evaluated with a good model is more sensitive for assessing the quality of a tagging set than the correct prediction rate of tagged SNPs. In addition, we show that haplotype phase uncertainty has an almost negligible impact on the ability of good tag sets to predict tagged SNPs. This justifies the selection of tag SNPs on the basis of haplotype
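    The entropy-maximization strategy mentioned in the abstract has a simple model-free baseline: greedily add the SNP that most increases the joint empirical entropy of the selected set. The Python sketch below runs on a synthetic haplotype matrix and does not implement the Li and Stephens hidden Markov model used in the paper.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
# Synthetic haplotypes: 200 chromosomes x 12 biallelic SNPs with built-in redundancy (LD).
core = rng.integers(0, 2, size=(200, 4))
noise = (rng.random((200, 8)) < 0.05).astype(int)
haplotypes = np.hstack([core, core[:, [0, 1, 2, 3, 0, 1, 2, 3]] ^ noise])

def joint_entropy(hap, snps):
    """Empirical joint entropy (bits) of the haplotypes restricted to columns `snps`."""
    counts = Counter(map(tuple, hap[:, snps]))
    p = np.array(list(counts.values()), float) / hap.shape[0]
    return float(-(p * np.log2(p)).sum())

def greedy_tag_snps(hap, n_tags):
    """Greedily pick the SNP that most increases the joint entropy of the tag set."""
    selected = []
    for _ in range(n_tags):
        remaining = [j for j in range(hap.shape[1]) if j not in selected]
        selected.append(max(remaining, key=lambda j: joint_entropy(hap, selected + [j])))
    return selected

tags = greedy_tag_snps(haplotypes, n_tags=4)
print("tag SNPs:", tags, "joint entropy (bits):", round(joint_entropy(haplotypes, tags), 2))
```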

  2. A hypothesis of earth quake

    CERN Document Server

    Tsai, Yeong-Shyeong

    2008-01-01

    Without a model, it is impossible for a geophysicist to study the possibility of forecasting earthquakes. In order to build a simple model, we make a hypothesis of earthquakes. The hypothesis is: (i) there are two kinds of earthquakes, the triggered breaking (earthquake) and the spontaneous breaking (earthquake); (ii) most major quakes in the continental plates (the Eurasian, North American, South American, African and Australian Plates) are triggered breaking; (iii) these triggered quakes are triggered by the movements of high-pressure and low-pressure centers of the atmosphere over the continental plates; (iv) whether the movement of a high-pressure center can trigger a quake depends on the extent of the center and the speed of its movement. Here, we stress high-pressure centers rather than low-pressure centers because the effect is mostly dominated by high-pressure centers. Of course, the boundary of the plates must have stored enough energy to have quakes, that is, near t...

  3. Models of cultural niche construction with selection and assortative mating.

    Science.gov (United States)

    Creanza, Nicole; Fogarty, Laurel; Feldman, Marcus W

    2012-01-01

    Niche construction is a process through which organisms modify their environment and, as a result, alter the selection pressures on themselves and other species. In cultural niche construction, one or more cultural traits can influence the evolution of other cultural or biological traits by affecting the social environment in which the latter traits may evolve. Cultural niche construction may include either gene-culture or culture-culture interactions. Here we develop a model of this process and suggest some applications of this model. We examine the interactions between cultural transmission, selection, and assorting, paying particular attention to the complexities that arise when selection and assorting are both present, in which case stable polymorphisms of all cultural phenotypes are possible. We compare our model to a recent model for the joint evolution of religion and fertility and discuss other potential applications of cultural niche construction theory, including the evolution and maintenance of large-scale human conflict and the relationship between sex ratio bias and marriage customs. The evolutionary framework we introduce begins to address complexities that arise in the quantitative analysis of multiple interacting cultural traits.

  4. Models of cultural niche construction with selection and assortative mating.

    Directory of Open Access Journals (Sweden)

    Nicole Creanza

    Full Text Available Niche construction is a process through which organisms modify their environment and, as a result, alter the selection pressures on themselves and other species. In cultural niche construction, one or more cultural traits can influence the evolution of other cultural or biological traits by affecting the social environment in which the latter traits may evolve. Cultural niche construction may include either gene-culture or culture-culture interactions. Here we develop a model of this process and suggest some applications of this model. We examine the interactions between cultural transmission, selection, and assorting, paying particular attention to the complexities that arise when selection and assorting are both present, in which case stable polymorphisms of all cultural phenotypes are possible. We compare our model to a recent model for the joint evolution of religion and fertility and discuss other potential applications of cultural niche construction theory, including the evolution and maintenance of large-scale human conflict and the relationship between sex ratio bias and marriage customs. The evolutionary framework we introduce begins to address complexities that arise in the quantitative analysis of multiple interacting cultural traits.

  5. Bayesian nonparametric centered random effects models with variable selection.

    Science.gov (United States)

    Yang, Mingan

    2013-03-01

    In a linear mixed effects model, it is common practice to assume that the random effects follow a parametric distribution such as a normal distribution with mean zero. However, in the case of variable selection, substantial violation of the normality assumption can potentially impact the subset selection and result in poor interpretation and even incorrect results. In nonparametric random effects models, the random effects generally have a nonzero mean, which causes an identifiability problem for the fixed effects that are paired with the random effects. In this article, we focus on a Bayesian method for variable selection. We characterize the subject-specific random effects nonparametrically with a Dirichlet process and resolve the bias simultaneously. In particular, we propose flexible modeling of the conditional distribution of the random effects with changes across the predictor space. The approach is implemented using a stochastic search Gibbs sampler to identify subsets of fixed effects and random effects to be included in the model. Simulations are provided to evaluate and compare the performance of our approach to the existing ones. We then apply the new approach to a real data example, cross-country and interlaboratory rodent uterotrophic bioassay.

  6. QOS Aware Formalized Model for Semantic Web Service Selection

    Directory of Open Access Journals (Sweden)

    Divya Sachan

    2014-10-01

    Full Text Available Selecting the most relevant Web Service according to a client requirement is an onerous task, as innumerable functionally equivalent Web Services (WS) are listed in the UDDI registry. WS are functionally the same, but their quality and performance vary across service providers. A Web Service selection process involves two major points: recommending the pertinent Web Service and avoiding unjustifiable Web Services. The deficiency of keyword-based searching is that it does not handle client requests accurately, as a keyword may have ambiguous meanings in different scenarios. UDDI and search engines are all based on keyword search, and so lag behind in pertinent Web Service selection. The search mechanism must therefore be incorporated with the semantic behavior of Web Services. In order to strengthen this approach, the proposed model is incorporated with Quality of Service (QoS) based ranking of semantic Web Services.

  7. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being on par with conventional processes such as welding and casting, primarily because of its unreliability. ... A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established.
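    The Monte Carlo step at the end of the abstract follows a generic pattern: sample the uncertain process inputs, push them through the (calibrated) process model, and report the fraction of runs whose output stays within specification. The sketch below substitutes a toy melt-depth relation for the finite-volume model; all parameter values, spreads and specification limits are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Uncertain process inputs (hypothetical nominal values and spreads).
laser_power  = rng.normal(200.0, 5.0, N)      # W
scan_speed   = rng.normal(800.0, 30.0, N)     # mm/s
absorptivity = rng.uniform(0.55, 0.65, N)     # dimensionless

# Toy surrogate for the thermal model: melt depth grows with line energy.
line_energy = absorptivity * laser_power / scan_speed   # J/mm
melt_depth = 0.45 * line_energy ** 0.8                   # mm (made-up relation)

# Reliability = probability that the output stays within a specification window.
lower, upper = 0.08, 0.12                                 # mm (hypothetical spec)
reliability = np.mean((melt_depth > lower) & (melt_depth < upper))
print(f"estimated reliability: {reliability:.3f}")
print("melt depth 5th/95th percentiles [mm]:", np.round(np.percentile(melt_depth, [5, 95]), 4))
```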

  8. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Full Text Available Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than for a standard asymmetric data generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity on asymmetric price transmission model comparison and selection.

  9. Exploratory Bayesian model selection for serial genetics data.

    Science.gov (United States)

    Zhao, Jing X; Foulkes, Andrea S; George, Edward I

    2005-06-01

    Characterizing the process by which molecular and cellular level changes occur over time will have broad implications for clinical decision making and help further our knowledge of disease etiology across many complex diseases. However, this presents an analytic challenge due to the large number of potentially relevant biomarkers and the complex, uncharacterized relationships among them. We propose an exploratory Bayesian model selection procedure that searches for model simplicity through independence testing of multiple discrete biomarkers measured over time. Bayes factor calculations are used to identify and compare models that are best supported by the data. For large model spaces, i.e., a large number of multi-leveled biomarkers, we propose a Markov chain Monte Carlo (MCMC) stochastic search algorithm for finding promising models. We apply our procedure to explore the extent to which HIV-1 genetic changes occur independently over time.

  10. Stationary solutions for metapopulation Moran models with mutation and selection

    Science.gov (United States)

    Constable, George W. A.; McKane, Alan J.

    2015-03-01

    We construct an individual-based metapopulation model of population genetics featuring migration, mutation, selection, and genetic drift. In the case of a single "island," the model reduces to the Moran model. Using the diffusion approximation and time-scale separation arguments, an effective one-variable description of the model is developed. The effective description bears similarities to the well-mixed Moran model with effective parameters that depend on the network structure and island sizes, and it is amenable to analysis. Predictions from the reduced theory match the results from stochastic simulations across a range of parameters. The nature of the fast-variable elimination technique we adopt is further studied by applying it to a linear system, where it provides a precise description of the slow dynamics in the limit of large time-scale separation.

  11. Predicting artificially drained areas by means of selective model ensemble

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Beucher, Amélie; Iversen, Bo Vangsø

    Artificial drainage has been carried out since the mid-19th century, and it has been estimated that half of the cultivated area is artificially drained (Olesen, 2009). A number of machine learning approaches can be used to predict artificially drained areas in geographic space. However, instead of choosing the most accurate model ... the study aims firstly to train a large number of models to predict the extent of artificially drained areas using various machine learning approaches. The approaches employed include decision trees, discriminant analysis, regression models, neural networks and support vector machines, amongst others. Several models are trained with each method, using variously the original soil covariates and principal components of the covariates. Secondly, the study will develop a method for selecting the models which give a good prediction of artificially drained areas, when used in conjunction with a large ensemble ...

  12. Model Selection Framework for Graph-based data

    CERN Document Server

    Caceres, Rajmonda S; Schmidt, Matthew C; Miller, Benjamin A; Campbell, William M

    2016-01-01

    Graphs are powerful abstractions for capturing complex relationships in diverse application settings. An active area of research focuses on theoretical models that define the generative mechanism of a graph. Yet given the complexity and inherent noise in real datasets, it is still very challenging to identify the best model for a given observed graph. We discuss a framework for graph model selection that leverages a long list of graph topological properties and a random forest classifier to learn and classify different graph instances. We fully characterize the discriminative power of our approach as we sweep through the parameter space of two generative models, the Erdos-Renyi and the stochastic block model. We show that our approach gets very close to known theoretical bounds and we provide insight on which topological features play a critical discriminating role.
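    A stripped-down version of such a framework, a handful of topological features computed for simulated Erdos-Renyi and stochastic block model graphs and fed to a random forest, can be put together with networkx and scikit-learn. The graph sizes, probabilities and feature list below are illustrative, not the paper's settings.

```python
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def features(g):
    """A few cheap topological descriptors of a graph."""
    degrees = [d for _, d in g.degree()]
    return [nx.density(g), nx.average_clustering(g),
            np.mean(degrees), np.var(degrees), nx.number_connected_components(g)]

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(100):
    # Erdos-Renyi graph (label 0) and a two-block stochastic block model (label 1),
    # tuned to have roughly the same expected edge density.
    er = nx.erdos_renyi_graph(60, 0.10, seed=int(rng.integers(1_000_000)))
    sbm = nx.stochastic_block_model([30, 30], [[0.16, 0.04], [0.04, 0.16]],
                                    seed=int(rng.integers(1_000_000)))
    X += [features(er), features(sbm)]
    y += [0, 1]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```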

  13. Feature selection and survival modeling in The Cancer Genome Atlas

    Directory of Open Access Journals (Sweden)

    Kim H

    2013-09-01

    Full Text Available Hyunsoo Kim,1 Markus Bredel2 1Department of Pathology, The University of Alabama at Birmingham, Birmingham, AL, USA; 2Department of Radiation Oncology, and Comprehensive Cancer Center, The University of Alabama at Birmingham, Birmingham, AL, USA Purpose: Personalized medicine is predicated on the concept of identifying subgroups of a common disease for better treatment. Identifying biomarkers that predict disease subtypes has been a major focus of biomedical science. In the era of genome-wide profiling, there is controversy as to the optimal number of genes as an input of a feature selection algorithm for survival modeling. Patients and methods: The expression profiles and outcomes of 544 patients were retrieved from The Cancer Genome Atlas. We compared four different survival prediction methods: (1) the 1-nearest neighbor (1-NN) survival prediction method; (2) a random patient selection method and a Cox-based regression method with nested cross-validation; (3) least absolute shrinkage and selection operator (LASSO) optimization using whole-genome gene expression profiles; or (4) gene expression profiles of cancer pathway genes. Results: The 1-NN method performed better than the random patient selection method in terms of survival predictions, although it does not include a feature selection step. The Cox-based regression method with LASSO optimization using whole-genome gene expression data demonstrated higher survival prediction power than the 1-NN method, but was outperformed by the same method when using gene expression profiles of cancer pathway genes alone. Conclusion: The 1-NN survival prediction method may require more patients for better performance, even when omitting censored data. Using preexisting biological knowledge for survival prediction is reasonable as a means to understand the biological system of a cancer, unless the analysis goal is to identify completely unknown genes relevant to cancer biology. Keywords: brain, feature selection

  14. Ensemble feature selection integrating elitist roles and quantum game model

    Institute of Scientific and Technical Information of China (English)

    Weiping Ding; Jiandong Wang; Zhijin Guan; Quan Shi

    2015-01-01

    To accelerate the selection process of feature subsets in the rough set theory (RST), an ensemble elitist roles based quantum game (EERQG) algorithm is proposed for feature selection. Firstly, the multilevel elitist roles based dynamics equilibrium strategy is established, and both immigration and emigration of elitists are able to be self-adaptive to balance between exploration and exploitation for feature selection. Secondly, the utility matrix of trust margins is introduced to the model of multilevel elitist roles to enhance various elitist roles’ performance of searching the optimal feature subsets, and the win-win utility solutions for feature selection can be attained. Meanwhile, a novel ensemble quantum game strategy is designed as an intriguing exhibiting structure to perfect the dynamics equilibrium of multilevel elitist roles. Finally, the ensemble manner of multilevel elitist roles is employed to achieve the global minimal feature subset, which will greatly improve the feasibility and effectiveness. Experiment results show the proposed EERQG algorithm has superiority compared to the existing feature selection algorithms.

  15. Transitions in a genotype selection model driven by coloured noises

    Institute of Scientific and Technical Information of China (English)

    Wang Can-Jun; Mei Dong-Cheng

    2008-01-01

    This paper investigates a genotype selection model subjected to both a multiplicative coloured noise and an additive coloured noise with different correlation times T1 and T2 by means of numerical techniques. By directly simulating the Langevin equation, the following results are obtained. (1) The multiplicative coloured noise dominates; however, the effect of the additive coloured noise cannot be neglected in the practical gene selection process. The selection rate μ decides whether selection favours the gene A haploid or the gene B haploid. (2) The additive coloured noise intensity α and the correlation time T2 play opposite roles. It is noted that α and T2 cannot separate the single peak, while α can make the peak disappear and T2 can sharpen the peak. (3) The multiplicative coloured noise intensity D and the correlation time T1 can induce a phase transition; at the same time they play opposite roles and a reentrance phenomenon appears. In this case, it is easy to select one type of haploid from the group by increasing D and decreasing T1.
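    The "numerical technique" referred to is direct integration of the Langevin equation, with the coloured noises generated as Ornstein-Uhlenbeck processes. The sketch below integrates a generic logistic-type genotype-frequency equation with one multiplicative and one additive coloured noise; the drift term, noise convention and parameter values are stand-ins, not the exact model of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def ou_step(z, tau, intensity, dt):
    """One Euler step of an Ornstein-Uhlenbeck (coloured) noise with correlation
    time tau and intensity `intensity` (one common convention, assumed here)."""
    return z - (z / tau) * dt + np.sqrt(2.0 * intensity * dt) / tau * rng.normal()

def simulate(mu=0.1, D=0.3, tau1=0.5, alpha=0.05, tau2=0.5,
             x0=0.5, dt=1e-3, steps=100_000):
    """Integrate dx/dt = mu*x*(1-x) + x*(1-x)*xi(t) + eta(t)  (stand-in drift),
    where xi and eta are coloured noises of intensities D and alpha."""
    x, xi, eta = x0, 0.0, 0.0
    traj = np.empty(steps)
    for i in range(steps):
        xi = ou_step(xi, tau1, D, dt)
        eta = ou_step(eta, tau2, alpha, dt)
        x += (mu * x * (1.0 - x) + x * (1.0 - x) * xi + eta) * dt
        x = min(max(x, 0.0), 1.0)        # keep the gene frequency in [0, 1]
        traj[i] = x
    return traj

traj = simulate()
hist, _ = np.histogram(traj[20_000:], bins=10, range=(0, 1), density=True)
print("stationary density estimate over [0, 1]:", np.round(hist, 2))
```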

  16. Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2015-01-01

    We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves...

  17. Selection between Linear Factor Models and Latent Profile Models Using Conditional Covariances

    Science.gov (United States)

    Halpin, Peter F.; Maraun, Michael D.

    2010-01-01

    A method for selecting between K-dimensional linear factor models and (K + 1)-class latent profile models is proposed. In particular, it is shown that the conditional covariances of observed variables are constant under factor models but nonlinear functions of the conditioning variable under latent profile models. The performance of a convenient…

  18. Selection between Linear Factor Models and Latent Profile Models Using Conditional Covariances

    Science.gov (United States)

    Halpin, Peter F.; Maraun, Michael D.

    2010-01-01

    A method for selecting between K-dimensional linear factor models and (K + 1)-class latent profile models is proposed. In particular, it is shown that the conditional covariances of observed variables are constant under factor models but nonlinear functions of the conditioning variable under latent profile models. The performance of a convenient…

  19. Modeling selective attention using a neuromorphic analog VLSI device.

    Science.gov (United States)

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.

  20. Model Order Selection Rules for Covariance Structure Classification in Radar

    Science.gov (United States)

    Carotenuto, Vincenzo; De Maio, Antonio; Orlando, Danilo; Stoica, Petre

    2017-10-01

    The adaptive classification of the interference covariance matrix structure for radar signal processing applications is addressed in this paper. This represents a key issue because many detection architectures are synthesized assuming a specific covariance structure which may not necessarily coincide with the actual one due to the joint action of the system and environment uncertainties. The considered classification problem is cast in terms of a multiple hypotheses test with some nested alternatives and the theory of Model Order Selection (MOS) is exploited to devise suitable decision rules. Several MOS techniques, such as the Akaike, Takeuchi, and Bayesian information criteria are adopted and the corresponding merits and drawbacks are discussed. At the analysis stage, illustrating examples for the probability of correct model selection are presented showing the effectiveness of the proposed rules.

  1. Autoregressive model selection with simultaneous sparse coefficient estimation

    CERN Document Server

    Sang, Hailin

    2011-01-01

    In this paper we propose a sparse coefficient estimation procedure for autoregressive (AR) models based on penalized conditional maximum likelihood. The penalized conditional maximum likelihood estimator (PCMLE) thus developed has the advantage of performing simultaneous coefficient estimation and model selection. Mild conditions are given on the penalty function and the innovation process, under which the PCMLE satisfies a strong consistency, local $N^{-1/2}$ consistency, and oracle property, respectively, where N is the sample size. Two penalty functions, the least absolute shrinkage and selection operator (LASSO) and the smoothly clipped absolute deviation (SCAD), are considered as examples, and SCAD is shown to have better performance than LASSO. A simulation study confirms our theoretical results. At the end, we provide an application of our method to historical price data of the US Industrial Production Index for consumer goods, and the result is very promising.
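    The basic mechanism, fitting an over-specified AR(p) with a penalty so that irrelevant lags are shrunk to zero, can be sketched with an L1 (LASSO) penalised regression on lagged values; the SCAD penalty and the paper's asymptotic guarantees are not reproduced here, and the simulated series is illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)

# Simulate an AR(2) series: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t.
n = 1000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

# Fit an over-specified AR(8) by LASSO on the lagged design matrix.
p = 8
X = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
y = x[p:]
model = Lasso(alpha=0.05, fit_intercept=False).fit(X, y)

# Coefficients at lags beyond 2 should be shrunk to (or near) zero, giving
# simultaneous coefficient estimation and order selection.
print("estimated AR coefficients:", np.round(model.coef_, 3))
```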

  2. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

    Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Secondly, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
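    The filtering step can be sketched on a toy system: the unknown parameter is appended to the state vector and an extended Kalman filter is run over the noisy time course. The model below (a one-dimensional linear recursion with an unknown rate) and all noise settings are illustrative; the paper's heat-shock and gene-regulation models are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy system: x_{t+1} = a*x_t + 1, measured with noise; the rate `a` is unknown.
a_true, x_true, T = 0.9, 0.0, 300
ys = []
for _ in range(T):
    x_true = a_true * x_true + 1.0 + rng.normal(0, 0.02)
    ys.append(x_true + rng.normal(0, 0.2))

# Extended Kalman filter on the augmented state z = [x, a].
z = np.array([0.0, 0.5])                    # deliberately poor initial guess for a
P = np.diag([1.0, 1.0])
Q = np.diag([4e-4, 1e-6])                   # process noise; the parameter is nearly constant
R = np.array([[0.04]])                      # measurement noise variance
H = np.array([[1.0, 0.0]])

for y in ys:
    x, a = z
    z_pred = np.array([a * x + 1.0, a])     # f(z), with Jacobian F below
    F = np.array([[a, x], [0.0, 1.0]])
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    z = z_pred + (K @ (np.array([[y]]) - H @ z_pred[:, None])).ravel()
    P = (np.eye(2) - K @ H) @ P

# The parameter estimate should move toward the true value as data accrue.
print("estimated a:", round(float(z[1]), 3), " true a:", a_true)
```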

  3. Structure and selection in an autocatalytic binary polymer model

    DEFF Research Database (Denmark)

    Tanaka, Shinpei; Fellermann, Harold; Rasmussen, Steen

    2014-01-01

    An autocatalytic binary polymer system is studied as an abstract model for a chemical reaction network capable of evolving. Due to autocatalysis, long polymers appear spontaneously and their concentration is shown to be maintained at the same level as that of monomers. When the reaction starts from ... Stability, fluctuations, and dynamic selection mechanisms are investigated for the involved self-organizing processes. Copyright (C) EPLA, 2014

  4. Velocity selection in the symmetric model of dendritic crystal growth

    Science.gov (United States)

    Barbieri, Angelo; Hong, Daniel C.; Langer, J. S.

    1987-01-01

    An analytic solution of the problem of velocity selection in a fully nonlocal model of dendritic crystal growth is presented. The analysis uses a WKB technique to derive and evaluate a solvability condition for the existence of steady-state needle-like solidification fronts in the limit of small undercooling Delta. For the two-dimensional symmetric model with a capillary anisotropy of strength alpha, it is found that the velocity is proportional to Delta^4 alpha^(7/4). The application of the method in three dimensions is also described.

  5. A simple application of FIC to model selection

    CERN Document Server

    Wiggins, Paul A

    2015-01-01

    We have recently proposed a new information-based approach to model selection, the Frequentist Information Criterion (FIC), that reconciles information-based and frequentist inference. The purpose of this current paper is to provide a simple example of the application of this criterion and a demonstration of the natural emergence of model complexities with both AIC-like ($N^0$) and BIC-like ($\\log N$) scaling with observation number $N$. The application developed is deliberately simplified to make the analysis analytically tractable.

  6. Small populations corrections for selection-mutation models

    CERN Document Server

    Jabin, Pierre-Emmanuel

    2012-01-01

    We consider integro-differential models describing the evolution of a population structured by a quantitative trait. Individuals interact competitively, creating a strong selection pressure on the population. On the other hand, mutations are assumed to be small. Following the formalism of Diekmann, Jabin, Mischler, and Perthame, this creates concentration phenomena, typically consisting in a sum of Dirac masses slowly evolving in time. We propose a modification to those classical models that takes the effect of small populations into account and corrects some abnormal behaviours.

  7. Process chain modeling and selection in an additive manufacturing context

    DEFF Research Database (Denmark)

    Thompson, Mary Kathryn; Stolfi, Alessandro; Mischkot, Michael

    2016-01-01

    This paper introduces a new two-dimensional approach to modeling manufacturing process chains. This approach is used to consider the role of additive manufacturing technologies in process chains for a part with micro scale features and no internal geometry. It is shown that additive manufacturing ... can compete with traditional process chains for small production runs. Combining both types of technology added cost but no benefit in this case. The new process chain model can be used to explain the results and support process selection, but process chain prototyping is still important for rapidly ...

  8. Selecting, weeding, and weighting biased climate model ensembles

    Science.gov (United States)

    Jackson, C. S.; Picton, J.; Huerta, G.; Nosedal Sanchez, A.

    2012-12-01

    In the Bayesian formulation, the "log-likelihood" is a test statistic for selecting, weeding, or weighting climate model ensembles with observational data. This statistic has the potential to synthesize the physical and data constraints on quantities of interest. One of the thorny issues for formulating the log-likelihood is how one should account for biases. While in the past we have included a generic discrepancy term, not all biases affect predictions of quantities of interest. We make use of a 165-member ensemble of CAM3.1/slab ocean climate models with different parameter settings to think through the issues that are involved with predicting each model's sensitivity to greenhouse gas forcing given what can be observed from the base state. In particular we use multivariate empirical orthogonal functions to decompose the differences that exist among this ensemble to discover what fields and regions matter to the model's sensitivity. We find that the differences that matter are a small fraction of the total discrepancy. Moreover, weighting members of the ensemble using this knowledge does a relatively poor job of adjusting the ensemble mean toward the known answer. This points out the shortcomings of using weights to correct for biases in climate model ensembles created by a selection process that does not emphasize the priorities of your log-likelihood.

  9. Bayesian Model Selection with Network Based Diffusion Analysis.

    Science.gov (United States)

    Whalen, Andrew; Hoppitt, William J E

    2016-01-01

    A number of recent studies have used Network Based Diffusion Analysis (NBDA) to detect the role of social transmission in the spread of a novel behavior through a population. In this paper we present a unified framework for performing NBDA in a Bayesian setting, and demonstrate how the Watanabe-Akaike Information Criterion (WAIC) can be used for model selection. We present a specific example of applying this method to Time to Acquisition Diffusion Analysis (TADA). To examine the robustness of this technique, we performed a large scale simulation study and found that NBDA using WAIC could recover the correct model of social transmission under a wide range of cases, including under the presence of random effects, individual level variables, and alternative models of social transmission. This work suggests that NBDA is an effective and widely applicable tool for uncovering whether social transmission underpins the spread of a novel behavior, and may still provide accurate results even when key model assumptions are relaxed.

  10. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model for selecting an optimal combination of productivity improvement techniques. The model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program which can be solved to optimality using traditional methods. Preliminary results of the implementation indicate that productivity can be improved through a change of equipment, and the model can easily be applied to both manufacturing and service industries.
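    The resulting optimisation has a familiar knapsack-like structure: binary variables indicate which techniques are adopted, the objective is the linear productivity gain, and a budget (or similar resource) constraint limits the selection. A sketch using SciPy's mixed integer linear programming interface follows; the gains, costs and budget are invented, and the paper's actual constraint set is not reproduced.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(5)
n_techniques = 12                              # the paper uses 54; kept small here
gain = rng.uniform(0.5, 3.0, n_techniques)     # productivity contribution per technique (hypothetical)
cost = rng.uniform(1.0, 8.0, n_techniques)     # implementation cost (hypothetical)
budget = 20.0

# Maximise total gain  <=>  minimise -gain @ x, subject to cost @ x <= budget, x binary.
res = milp(
    c=-gain,
    constraints=LinearConstraint(cost[np.newaxis, :], ub=budget),
    integrality=np.ones(n_techniques),         # all decision variables integer ...
    bounds=Bounds(0, 1),                       # ... and bounded to {0, 1}
)
selected = np.flatnonzero(res.x > 0.5)
print("selected techniques:", selected, "total gain:", round(-res.fun, 2))
```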

  11. An Introduction to Model Selection: Tools and Algorithms

    Directory of Open Access Journals (Sweden)

    Sébastien Hélie

    2006-03-01

    Full Text Available Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper’s falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike’s information criterion, the Bayesian information criterion, bootstrapping and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.

  12. Selection of key terrain attributes for SOC model

    DEFF Research Database (Denmark)

    Greve, Mogens Humlekrog; Adhikari, Kabindra; Chellasamy, Menaka

    As an important component of the global carbon pool, soil organic carbon (SOC) plays an important role in the global carbon cycle. The SOC pool is basic information for carrying out global warming research and for the sustainable use of land resources. Digital terrain attributes are often used ... In total, 2,514,820 data mining models were constructed from 71 different grid resolutions, from 12 m to 2304 m, and 22 attributes (21 attributes derived from the DTM plus the original elevation). The relative importance and usage of each attribute in every model were calculated, and comprehensive impact rates of each attribute ... (standh) are the first three key terrain attributes in the 5-attribute model at all resolutions; the remaining 2 of the 5 attributes are Normal High (NormalH) and Valley Depth (Vall_depth) at resolutions finer than 40 m, and Elevation and Channel Base (Chnl_base) at resolutions coarser than 40 m. The models at a pixel size of 88 m ...

  13. Active Sequential Hypothesis Testing

    CERN Document Server

    Naghshvar, Mohammad

    2012-01-01

    Consider a decision maker who is responsible for dynamically collecting observations so as to enhance his information in a speedy manner about an underlying phenomenon of interest while accounting for the penalty of wrong declaration. The special cases of the problem are shown to be that of variable-length coding with feedback and noisy dynamic search. Due to the sequential nature of the problem, the decision maker relies on his current information state to adaptively select the most "informative" sensing action among the available ones. In this paper, using results in dynamic programming, a lower bound for the optimal total cost is established. Moreover, upper bounds are obtained via an analysis of heuristic policies for dynamic selection of actions. It is shown that the proposed heuristics achieve asymptotic optimality in many practically relevant problems including the problems of variable-length coding with feedback and noisy dynamic search; where asymptotic optimality implies that the relative difference betw...

  14. Unifying models for X-ray selected and Radio selected BL Lac Objects

    CERN Document Server

    Fossati, G; Ghisellini, G; Maraschi, L; Brera-Merate, O A

    1997-01-01

    We discuss alternative interpretations of the differences in the Spectral Energy Distributions (SEDs) of BL Lacs found in complete Radio or X-ray surveys. A large body of observations in different bands suggests that the SEDs of BL Lac objects appearing in X-ray surveys differ from those appearing in radio surveys mainly in having a (synchrotron) spectral cut-off (or break) at much higher frequency. In order to explain the different properties of radio and X-ray selected BL Lacs, Giommi and Padovani proposed a model based on a common radio luminosity function. At each radio luminosity, objects with high frequency spectral cut-offs are assumed to be a minority. Nevertheless they dominate the X-ray selected population due to the larger X-ray-to-radio-flux ratio. An alternative model explored here (reminiscent of the orientation models previously proposed) is that the X-ray luminosity function is "primary" and that at each X-ray luminosity a minority of objects has larger radio-to-X-ray flux ratio. The prediction...

  15. Bayesian model selection applied to artificial neural networks used for water resources modeling

    Science.gov (United States)

    Kingston, Greer B.; Maier, Holger R.; Lambert, Martin F.

    2008-04-01

    Artificial neural networks (ANNs) have proven to be extremely valuable tools in the field of water resources engineering. However, one of the most difficult tasks in developing an ANN is determining the optimum level of complexity required to model a given problem, as there is no formal systematic model selection method. This paper presents a Bayesian model selection (BMS) method for ANNs that provides an objective approach for comparing models of varying complexity in order to select the most appropriate ANN structure. The approach uses Markov Chain Monte Carlo posterior simulations to estimate the evidence in favor of competing models and, in this study, three known methods for doing this are compared in terms of their suitability for being incorporated into the proposed BMS framework for ANNs. However, it is acknowledged that it can be particularly difficult to accurately estimate the evidence of ANN models. Therefore, the proposed BMS approach for ANNs incorporates a further check of the evidence results by inspecting the marginal posterior distributions of the hidden-to-output layer weights, which unambiguously indicate any redundancies in the hidden layer nodes. The fact that this check is available is one of the greatest advantages of the proposed approach over conventional model selection methods, which do not provide such a test and instead rely on the modeler's subjective choice of selection criterion. The advantages of a total Bayesian approach to ANN development, including training and model selection, are demonstrated on two synthetic and one real world water resources case study.

  16. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    Science.gov (United States)

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  17. Input Hypothesis and its Controversy

    Institute of Scientific and Technical Information of China (English)

    金灵

    2016-01-01

    Since Krashen proposed the input hypothesis in the 1980s, it has prompted many contributions and further studies in second language acquisition and teaching. Because it is impossible to test its credibility through exact empirical research, the hypothesis has also attracted many criticisms seeking to disprove or revise it. Nevertheless, given its significant influence on SLA, it is still valuable to explore the hypothesis and its implications for teaching language to non-native speakers. This paper first reviews the development of the input hypothesis and then discusses some of the criticisms raised against it.

  18. The Hierarchical Sparse Selection Model of Visual Crowding

    Directory of Open Access Journals (Sweden)

    Wesley Chaney

    2014-09-01

    Because the environment is cluttered, objects rarely appear in isolation. The visual system must therefore attentionally select behaviorally relevant objects from among many irrelevant ones. A limit on our ability to select individual objects is revealed by the phenomenon of visual crowding: an object seen in the periphery, easily recognized in isolation, can become impossible to identify when surrounded by other, similar objects. The neural basis of crowding is hotly debated: while prevailing theories hold that crowded information is irrecoverable – destroyed due to over-integration in early-stage visual processing – recent evidence demonstrates otherwise. Crowding can occur between high-level, configural object representations, and crowded objects can contribute with high precision to judgments about the gist of a group of objects, even when they are individually unrecognizable. While existing models can account for the basic diagnostic criteria of crowding (e.g. specific critical spacing, spatial anisotropies, and temporal tuning), no present model explains how crowding can operate simultaneously at multiple levels in the visual processing hierarchy, including at the level of whole objects. Here, we present a new model of visual crowding: the hierarchical sparse selection (HSS) model, which accounts for object-level crowding, as well as a number of puzzling findings in the recent literature. Counter to existing theories, we posit that crowding occurs not due to degraded visual representations in the brain, but due to impoverished sampling of visual representations for the sake of perception. The HSS model unifies findings from a disparate array of visual crowding studies and makes testable predictions about how information in crowded scenes can be accessed.

  19. The hierarchical sparse selection model of visual crowding.

    Science.gov (United States)

    Chaney, Wesley; Fischer, Jason; Whitney, David

    2014-01-01

    Because the environment is cluttered, objects rarely appear in isolation. The visual system must therefore attentionally select behaviorally relevant objects from among many irrelevant ones. A limit on our ability to select individual objects is revealed by the phenomenon of visual crowding: an object seen in the periphery, easily recognized in isolation, can become impossible to identify when surrounded by other, similar objects. The neural basis of crowding is hotly debated: while prevailing theories hold that crowded information is irrecoverable - destroyed due to over-integration in early stage visual processing - recent evidence demonstrates otherwise. Crowding can occur between high-level, configural object representations, and crowded objects can contribute with high precision to judgments about the "gist" of a group of objects, even when they are individually unrecognizable. While existing models can account for the basic diagnostic criteria of crowding (e.g., specific critical spacing, spatial anisotropies, and temporal tuning), no present model explains how crowding can operate simultaneously at multiple levels in the visual processing hierarchy, including at the level of whole objects. Here, we present a new model of visual crowding-the hierarchical sparse selection (HSS) model, which accounts for object-level crowding, as well as a number of puzzling findings in the recent literature. Counter to existing theories, we posit that crowding occurs not due to degraded visual representations in the brain, but due to impoverished sampling of visual representations for the sake of perception. The HSS model unifies findings from a disparate array of visual crowding studies and makes testable predictions about how information in crowded scenes can be accessed.

  20. Finite element model selection using Particle Swarm Optimization

    CERN Document Server

    Mthembu, Linda; Friswell, Michael I; Adhikari, Sondipon

    2009-01-01

    This paper proposes the application of particle swarm optimization (PSO) to the problem of finite element model (FEM) selection. This problem arises when a choice of the best model for a system has to be made from a set of competing models, each developed a priori from engineering judgment. PSO is a population-based stochastic search algorithm inspired by the behaviour of biological entities in nature when they are foraging for resources. Each potentially correct model is represented as a particle that exhibits both individualistic and group behaviour. Each particle moves within the model search space looking for the best solution by updating the parameter values that define it. The most important step in the particle swarm algorithm is the method of representing models, which should take into account the number, location and variables of parameters to be updated. One example structural system is used to show the applicability of PSO in finding an optimal FEM. An optimal model is defined as the model that has t...
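
    The following Python sketch illustrates the particle swarm mechanics on a deliberately tiny stand-in problem: matching two "measured" natural frequencies of a two-degree-of-freedom spring-mass system by updating stiffness parameters. The structure, data and PSO coefficients are assumptions, not the example used in the paper.

      # Minimal particle swarm sketch for FEM-style model/parameter selection on a
      # stand-in problem: match two "measured" natural frequencies of a 2-DOF
      # spring-mass system (unit masses) by updating stiffnesses. Illustrative only.
      import numpy as np

      rng = np.random.default_rng(1)
      measured = np.array([12.0, 31.0])            # assumed "measured" frequencies (Hz)

      def predicted_freqs(k):                      # k = [k1, k2]
          K = np.array([[k[0] + k[1], -k[1]], [-k[1], k[1]]])
          return np.sqrt(np.linalg.eigvalsh(K)) / (2 * np.pi)

      def cost(k):
          return np.sum((predicted_freqs(k) - measured) ** 2)

      n_particles, n_iter = 30, 200
      pos = rng.uniform(1e3, 5e4, size=(n_particles, 2))   # candidate stiffnesses
      vel = np.zeros_like(pos)
      pbest = pos.copy()
      pbest_cost = np.array([cost(p) for p in pos])
      gbest = pbest[np.argmin(pbest_cost)].copy()

      for _ in range(n_iter):
          r1, r2 = rng.random((2, n_particles, 1))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, 1e2, 1e5)
          c = np.array([cost(p) for p in pos])
          improved = c < pbest_cost
          pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
          gbest = pbest[np.argmin(pbest_cost)].copy()

      print("best stiffnesses:", gbest, "  residual cost:", cost(gbest))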

  1. The Venus Hypothesis

    CERN Document Server

    Cartwright, Annabel

    2016-01-01

    Current models indicate that Venus may have been habitable. Complex life may have evolved on the highly irradiated Venus, and transferred to Earth on asteroids. This model fits the pattern of pulses of highly developed life appearing, diversifying and going extinct with astonishing rapidity through the Cambrian and Ordovician periods, and also explains the extraordinary genetic variety which appeared over this period.

  2. An Integrative Breakage Model of genome architecture, reshuffling and evolution: The Integrative Breakage Model of genome evolution, a novel multidisciplinary hypothesis for the study of genome plasticity.

    Science.gov (United States)

    Farré, Marta; Robinson, Terence J; Ruiz-Herrera, Aurora

    2015-05-01

    Our understanding of genomic reorganization, the mechanics of genomic transmission to offspring during germ-line formation, and how these structural changes contribute to the speciation process and to genetic disease, is far from complete. Earlier attempts to understand the mechanism(s) and constraints that govern genome remodeling suffered from being too narrowly focused, and failed to provide a unified and encompassing view of how genomes are organized and regulated inside cells. Here, we propose a new multidisciplinary Integrative Breakage Model for the study of genome evolution. The analysis of the high-level structural organization of genomes (nucleome), together with the functional constraints that accompany genome reshuffling, provides insights into the origin and plasticity of genome organization that may assist with the detection and isolation of therapeutic targets for the treatment of complex human disorders. © 2015 WILEY Periodicals, Inc.

  3. ModelOMatic: fast and automated model selection between RY, nucleotide, amino acid, and codon substitution models.

    Science.gov (United States)

    Whelan, Simon; Allen, James E; Blackburne, Benjamin P; Talavera, David

    2015-01-01

    Molecular phylogenetics is a powerful tool for inferring both the process and pattern of evolution from genomic sequence data. Statistical approaches, such as maximum likelihood and Bayesian inference, are now established as the preferred methods of inference. The choice of models that a researcher uses for inference is of critical importance, and there are established methods for model selection conditioned on a particular type of data, such as nucleotides, amino acids, or codons. A major limitation of existing model selection approaches is that they can only compare models acting upon a single type of data. Here, we extend model selection to allow comparisons between models describing different types of data by introducing the idea of adapter functions, which project aggregated models onto the originally observed sequence data. These projections are implemented in the program ModelOMatic and used to perform model selection on 3722 families from the PANDIT database, 68 genes from an arthropod phylogenomic data set, and 248 genes from a vertebrate phylogenomic data set. For the PANDIT and arthropod data, we find that amino acid models are selected for the overwhelming majority of alignments; with progressively smaller numbers of alignments selecting codon and nucleotide models, and no families selecting RY-based models. In contrast, nearly all alignments from the vertebrate data set select codon-based models. The sequence divergence, the number of sequences, and the degree of selection acting upon the protein sequences may contribute to explaining this variation in model selection. Our ModelOMatic program is fast, with most families from PANDIT taking fewer than 150 s to complete, and should therefore be easily incorporated into existing phylogenetic pipelines. ModelOMatic is available at https://code.google.com/p/modelomatic/.

  4. Selection of Representative Models for Decision Analysis Under Uncertainty

    Science.gov (United States)

    Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.

    2016-03-01

    The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that need to be analyzed so an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.

  5. Mathematical Model for the Selection of Processing Parameters in Selective Laser Sintering of Polymer Products

    Directory of Open Access Journals (Sweden)

    Ana Pilipović

    2014-03-01

    Additive manufacturing (AM) is increasingly applied in development projects, from the initial idea to the finished product. The reasons are multiple, but what should be emphasised is the possibility of relatively rapid manufacturing of products of complicated geometry based on a computer 3D model of the product. There are numerous limitations, primarily in the number of available materials and their properties, which may be quite different from the properties of the material of the finished product. Therefore, it is necessary to know the properties of the product materials. In AM procedures the mechanical properties of materials are affected by the manufacturing procedure and the production parameters. During SLS procedures it is possible to adjust various manufacturing parameters which are used to influence the improvement of various mechanical and other properties of the products. The paper sets out a new mathematical model to determine the influence of individual manufacturing parameters on a polymer product made by selective laser sintering. The old mathematical model is checked by a statistical method with a central composite plan, and it is established that the old model must be expanded with a new parameter, the beam overlay ratio. Verification of the new mathematical model and optimization of the processing parameters are made on an SLS machine.

  6. Selecting global climate models for regional climate change studies.

    Science.gov (United States)

    Pierce, David W; Barnett, Tim P; Santer, Benjamin D; Gleckler, Peter J

    2009-05-26

    Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simulated regional climate. Accordingly, 42 performance metrics based on seasonal temperature and precipitation, the El Nino/Southern Oscillation (ENSO), and the Pacific Decadal Oscillation are constructed and applied to 21 global models. However, no strong relationship is found between the score of the models on the metrics and results of the D&A analysis. Instead, the importance of having ensembles of runs with enough realizations to reduce the effects of natural internal climate variability is emphasized. Also, the superiority of the multimodel ensemble average (MM) to any 1 individual model, already found in global studies examining the mean climate, is true in this regional study that includes measures of variability as well. Evidence is shown that this superiority is largely caused by the cancellation of offsetting errors in the individual global models. Results with both the MM and models picked randomly confirm the original D&A results of anthropogenically forced JFM temperature changes in the western U.S. Future projections of temperature do not depend on model performance until the 2080s, after which the better performing models show warmer temperatures.

  7. Selecting global climate models for regional climate change studies

    Science.gov (United States)

    Pierce, David W.; Barnett, Tim P.; Santer, Benjamin D.; Gleckler, Peter J.

    2009-01-01

    Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simulated regional climate. Accordingly, 42 performance metrics based on seasonal temperature and precipitation, the El Nino/Southern Oscillation (ENSO), and the Pacific Decadal Oscillation are constructed and applied to 21 global models. However, no strong relationship is found between the score of the models on the metrics and results of the D&A analysis. Instead, the importance of having ensembles of runs with enough realizations to reduce the effects of natural internal climate variability is emphasized. Also, the superiority of the multimodel ensemble average (MM) to any 1 individual model, already found in global studies examining the mean climate, is true in this regional study that includes measures of variability as well. Evidence is shown that this superiority is largely caused by the cancellation of offsetting errors in the individual global models. Results with both the MM and models picked randomly confirm the original D&A results of anthropogenically forced JFM temperature changes in the western U.S. Future projections of temperature do not depend on model performance until the 2080s, after which the better performing models show warmer temperatures. PMID:19439652
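
    A toy numpy illustration of the error-cancellation argument made above (not the study's data or models): many simulated models with independent, offsetting errors are averaged, and the ensemble-mean RMSE is compared with the best individual model.

      # Toy demonstration of error cancellation in a multi-model ensemble mean.
      import numpy as np

      rng = np.random.default_rng(2)
      truth = np.sin(np.linspace(0, 2 * np.pi, 120))            # a "true" regional signal
      models = truth + rng.normal(0.0, 0.5, size=(21, 120))     # 21 models, independent errors

      rmse = lambda x: np.sqrt(np.mean((x - truth) ** 2))
      individual = [rmse(m) for m in models]
      ensemble_mean_rmse = rmse(models.mean(axis=0))

      print(f"best single model RMSE: {min(individual):.3f}")
      print(f"ensemble-mean RMSE:     {ensemble_mean_rmse:.3f}")  # typically much lower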

  8. Multilevel selection in a resource-based model

    Science.gov (United States)

    Ferreira, Fernando Fagundes; Campos, Paulo R. A.

    2013-07-01

    In the present work we investigate the emergence of cooperation in a multilevel selection model that assumes limiting resources. Following the work by R. J. Requejo and J. Camacho [Phys. Rev. Lett. 108, 038701 (2012)], the interaction among individuals is initially ruled by a prisoner's dilemma (PD) game. The payoff matrix may change, influenced by the resource availability, and hence may also evolve to a non-PD game. Furthermore, one assumes that the population is divided into groups, whose local dynamics is driven by the payoff matrix, whereas an intergroup competition results from the nonuniformity of the growth rate of groups. We study the probability that a single cooperator can invade and establish in a population initially dominated by defectors. Cooperation is strongly favored when group sizes are small. We observe the existence of a critical group size beyond which cooperation becomes counterselected. Although the critical size depends on the parameters of the model, it is seen that a saturation value for the critical group size is achieved. The results conform to the thought that the evolutionary history of life repeatedly involved transitions from smaller selective units to larger selective units.

  9. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

    A wind turbine generator's output at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speed. The reliability calculation is based on failure probability analysis. There are many different types of wind turbines commercially available in the market. From the reliability point of view, to get optimum reliability in power generation, it is desirable to select a wind turbine generator which is best suited for a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from the reliability point of view.
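
    A short Python sketch of the kind of site-matching calculation described above, under assumed values: expected turbine output for a Weibull-distributed wind speed with given cut-in, rated and cut-out speeds. The Weibull parameters, power curve and rated power are hypothetical, and the failure-probability part of the paper's model is not reproduced.

      # Expected turbine output for Weibull-distributed wind speed (assumed values).
      import numpy as np

      k, c = 2.0, 7.5                      # assumed Weibull shape and scale (m/s)
      v_ci, v_r, v_co = 3.0, 12.0, 25.0    # cut-in, rated, cut-out speeds (m/s)
      P_rated = 2000.0                     # rated power (kW)

      def power_curve(v):
          # simple cubic rise between cut-in and rated speed, flat up to cut-out
          p = np.where((v >= v_ci) & (v < v_r),
                       P_rated * (v**3 - v_ci**3) / (v_r**3 - v_ci**3), 0.0)
          return np.where((v >= v_r) & (v <= v_co), P_rated, p)

      def weibull_pdf(v):
          return (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)

      v = np.linspace(0.0, 30.0, 3001)
      expected_power = np.trapz(power_curve(v) * weibull_pdf(v), v)     # kW
      print(f"expected output ≈ {expected_power:.0f} kW, "
            f"capacity factor ≈ {expected_power / P_rated:.2f}")
      # Repeating this for candidate turbines at the same site supports the
      # site-matching selection discussed above.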

  10. Applying a Hybrid MCDM Model for Six Sigma Project Selection

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2014-01-01

    Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS) is a type of multiple criteria decision making (MCDM) problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL) technique, the analytic network process (ANP), and the VIKOR method to evaluate and improve Six Sigma projects for reducing performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that our model not only supports selection of the best project, but can also be used to analyze the gaps between existing performance values and aspiration levels for improving the gaps in each dimension and criterion based on the influential network relation map.

  11. Refined homology model of monoacylglycerol lipase: toward a selective inhibitor

    Science.gov (United States)

    Bowman, Anna L.; Makriyannis, Alexandros

    2009-11-01

    Monoacylglycerol lipase (MGL) is primarily responsible for the hydrolysis of 2-arachidonoylglycerol (2-AG), an endocannabinoid with full agonist activity at both cannabinoid receptors. Increased tissue 2-AG levels consequent to MGL inhibition are considered therapeutic against pain, inflammation, and neurodegenerative disorders. However, the lack of MGL structural information has hindered the development of MGL-selective inhibitors. Here, we detail a fully refined homology model of MGL which preferentially identifies MGL inhibitors over druglike noninhibitors. We include for the first time insight into the active-site geometry and potential hydrogen-bonding interactions along with molecular dynamics simulations describing the opening and closing of the MGL helical-domain lid. Docked poses of both the natural substrate and known inhibitors are detailed. A comparison of the MGL active-site to that of the other principal endocannabinoid metabolizing enzyme, fatty acid amide hydrolase, demonstrates key differences which provide crucial insight toward the design of selective MGL inhibitors as potential drugs.

  12. Auditory-model based robust feature selection for speech recognition.

    Science.gov (United States)

    Koniaris, Christos; Kuropatwinski, Marcin; Kleijn, W Bastiaan

    2010-02-01

    It is shown that robust dimension-reduction of a feature set for speech recognition can be based on a model of the human auditory system. Whereas conventional methods optimize classification performance, the proposed method exploits knowledge implicit in the auditory periphery, inheriting its robustness. Features are selected to maximize the similarity of the Euclidean geometry of the feature domain and the perceptual domain. Recognition experiments using mel-frequency cepstral coefficients (MFCCs) confirm the effectiveness of the approach, which does not require labeled training data. For noisy data the method outperforms commonly used discriminant-analysis based dimension-reduction methods that rely on labeling. The results indicate that selecting MFCCs in their natural order results in subsets with good performance.

  13. POSSIBILISTIC SHARPE RATIO BASED NOVICE PORTFOLIO SELECTION MODELS

    Directory of Open Access Journals (Sweden)

    Rupak Bhattacharyya

    2013-02-01

    This paper uses the concept of possibilistic risk aversion to propose a new approach for portfolio selection in a fuzzy environment. Using possibility theory, the possibilistic mean, variance, standard deviation and risk premium of a fuzzy number are established. The possibilistic Sharpe ratio is defined as the ratio of the possibilistic risk premium to the possibilistic standard deviation of a portfolio. The Sharpe ratio is a measure of the performance of the portfolio compared to the risk taken; the higher the Sharpe ratio, the better the portfolio's performance and the greater the reward for the risk taken. New models of fuzzy portfolio selection considering the possibilistic Sharpe ratio, return and skewness of the portfolio are considered. The feasibility and effectiveness of the proposed method are illustrated by a numerical example extracted from the Bombay Stock Exchange (BSE), India, and solved by a multiple objective genetic algorithm (MOGA).

  14. Automation of Endmember Pixel Selection in SEBAL/METRIC Model

    Science.gov (United States)

    Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.

    2015-12-01

    The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC) models require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day-1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce time demands for applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.

  15. Model to Estimate Monthly Time Horizons for Application of DEA in Selection of Stock Portfolio and for Maintenance of the Selected Portfolio

    Directory of Open Access Journals (Sweden)

    José Claudio Isaias

    2015-01-01

    In the selection of stock portfolios, one type of analysis that has shown good results is Data Envelopment Analysis (DEA). It has, however, been shown to have gaps regarding its estimates of the monthly time horizons of data collection for the selection of stock portfolios and of the monthly time horizons for the maintenance of a selected portfolio. To better estimate these horizons, this study proposes a binary mathematical programming model that minimizes squared errors. This model is the paper’s main contribution. The model’s results are validated by simulating the estimated annual return indexes of a portfolio that uses both estimated horizons and of other portfolios that do not use these horizons. The simulation shows that portfolios with both horizons estimated have higher indexes, on average 6.99% per year. The hypothesis tests confirm the statistically significant superiority of the indexes produced by the proposed mathematical model. The model’s indexes are also compared with portfolios that use just one of the estimated horizons; here the indexes of the dual-horizon portfolios outperform the single-horizon portfolios, though with a decrease in the percentage of statistically significant superiority.

  16. A Dual-Stage Two-Phase Model of Selective Attention

    Science.gov (United States)

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  17. glmulti: An R Package for Easy Automated Model Selection with (Generalized Linear) Models

    Directory of Open Access Journals (Sweden)

    Vincent Calcagno

    2010-10-01

    We introduce glmulti, an R package for automated model selection and multi-model inference with glm and related functions. From a list of explanatory variables, the provided function glmulti builds all possible unique models involving these variables and, optionally, their pairwise interactions. Restrictions can be specified for candidate models, by excluding specific terms, enforcing marginality, or controlling model complexity. Models are fitted with standard R functions like glm. The n best models and their support (e.g., (Q)AIC, (Q)AICc, or BIC) are returned, allowing model selection and multi-model inference through standard R functions. The package is optimized for large candidate sets by avoiding memory limitation, facilitating parallelization and providing, in addition to exhaustive screening, a compiled genetic algorithm method. This article briefly presents the statistical framework and introduces the package, with applications to simulated and real data.
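
    A rough Python analogue of the exhaustive-screening step (glmulti itself is an R package; this is not its code): every subset of a small set of predictors is fitted by ordinary least squares and ranked by AIC, with simulated data standing in for a real application.

      # Exhaustive subset screening ranked by AIC, as a rough Python analogue
      # of the exhaustive mode described above (not glmulti's own code).
      import itertools
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      df = pd.DataFrame(rng.normal(size=(200, 3)), columns=["x1", "x2", "x3"])
      df["y"] = 1.5 * df.x1 - 2.0 * df.x3 + rng.normal(size=200)

      predictors = ["x1", "x2", "x3"]
      results = []
      for r in range(len(predictors) + 1):
          for subset in itertools.combinations(predictors, r):
              X = df[list(subset)].assign(const=1.0)      # always include an intercept
              fit = sm.OLS(df["y"], X).fit()
              results.append((fit.aic, subset))

      for aic, subset in sorted(results)[:5]:             # the n best models by AIC
          print(f"AIC = {aic:8.2f}   terms: {subset or ('intercept only',)}")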

  18. Selection between foreground models for global 21-cm experiments

    CERN Document Server

    Harker, Geraint

    2015-01-01

    The precise form of the foregrounds for sky-averaged measurements of the 21-cm line during and before the epoch of reionization is unknown. We suggest that the level of complexity in the foreground models used to fit global 21-cm data should be driven by the data, under a Bayesian model selection methodology. A first test of this approach is carried out by applying nested sampling to simplified models of global 21-cm data to compute the Bayesian evidence for the models. If the foregrounds are assumed to be polynomials of order n in log-log space, we can infer the necessity to use n=4 rather than n=3 with <2h of integration with limited frequency coverage, for reasonable values of the n=4 coefficient. Using a higher-order polynomial does not necessarily prevent a significant detection of the 21-cm signal. Even for n=8, we can obtain very strong evidence distinguishing a reasonable model for the signal from a null model with 128h of integration. More subtle features of the signal may, however, be lost if the...
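
    A toy Python version of the foreground-order question, with everything assumed for illustration: a smooth power-law-like foreground plus noise is fitted by log-log polynomials of increasing order, compared here with a BIC-style score rather than the nested-sampling evidences used in the paper.

      # Fit log-log polynomials of increasing order to a simulated foreground
      # spectrum and compare a BIC-style score (a rough proxy for the evidence).
      import numpy as np

      rng = np.random.default_rng(4)
      nu = np.linspace(50.0, 100.0, 64)                        # frequency (MHz)
      logx = np.log10(nu / 75.0)
      logT_fg = 3.5 - 2.5 * logx + 0.1 * logx**2 - 0.3 * logx**3   # assumed foreground
      T = 10 ** logT_fg + rng.normal(0.0, 5.0, nu.size)        # add radiometer noise (K)
      logT = np.log10(T)

      n = nu.size
      for order in range(2, 7):
          coef = np.polyfit(logx, logT, order)
          resid = logT - np.polyval(coef, logx)
          bic = n * np.log(np.mean(resid**2)) + (order + 1) * np.log(n)
          print(f"polynomial order n={order}: BIC-like score = {bic:8.1f}")
      # The preferred order is where the score stops improving markedly, mirroring
      # the data-driven choice between n=3 and n=4 discussed above.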

  19. Development of Solar Drying Model for Selected Cambodian Fish Species

    Directory of Open Access Journals (Sweden)

    Anna Hubackova

    2014-01-01

    Solar drying was investigated as one of the prospective techniques for fish processing in Cambodia. The solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. Mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R2), chi-square (χ2) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model, the Diffusion approximate model, and the Two-term model for climbing perch and Nile tilapia, swamp eel and walking catfish, and Channa fish, respectively. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, where the Two-term model is the best one. Sensory evaluation shows that the most preferred fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of freshwater fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing.

  20. Development of solar drying model for selected Cambodian fish species.

    Science.gov (United States)

    Hubackova, Anna; Kucerova, Iva; Chrun, Rithy; Chaloupkova, Petra; Banout, Jan

    2014-01-01

    Solar drying was investigated as one of the prospective techniques for fish processing in Cambodia. The solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. Mean solar drying temperature and drying air relative humidity were 55.6 °C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg · h(-1). Based on the coefficient of determination (R(2)), chi-square (χ(2)) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model, the Diffusion approximate model, and the Two-term model for climbing perch and Nile tilapia, swamp eel and walking catfish, and Channa fish, respectively. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, where the Two-term model is the best one. Sensory evaluation shows that the most preferred fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of freshwater fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing.
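
    As a small illustration of the model-comparison procedure described in these two records, the Python sketch below fits the Logarithmic thin-layer model MR(t) = a·exp(-k·t) + c to hypothetical moisture-ratio data and reports R2 and RMSE; the data and starting values are invented, and the remaining candidate models would be fitted the same way.

      # Fit the Logarithmic thin-layer model MR(t) = a*exp(-k*t) + c to assumed
      # moisture-ratio data and score it with R^2 and RMSE (illustrative only).
      import numpy as np
      from scipy.optimize import curve_fit

      t = np.arange(9, dtype=float)                             # drying time (h)
      mr = np.array([1.0, 0.74, 0.55, 0.42, 0.33, 0.26, 0.22, 0.19, 0.17])  # assumed data

      def logarithmic(t, a, k, c):
          return a * np.exp(-k * t) + c

      popt, _ = curve_fit(logarithmic, t, mr, p0=(1.0, 0.3, 0.1))
      pred = logarithmic(t, *popt)
      rmse = np.sqrt(np.mean((mr - pred) ** 2))
      r2 = 1.0 - np.sum((mr - pred) ** 2) / np.sum((mr - mr.mean()) ** 2)
      print(f"a={popt[0]:.3f}  k={popt[1]:.3f}  c={popt[2]:.3f}  R2={r2:.4f}  RMSE={rmse:.4f}")
      # The other candidate models (Two-term, Modified Page, Diffusion approximation)
      # would be fitted the same way and compared on R2, chi-square and RMSE.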

  1. Selection Strategies for Social Influence in the Threshold Model

    Science.gov (United States)

    Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy

    The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for selection of large fractions of initiators on a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.
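
    A minimal Python/networkx sketch of the threshold-model cascade with two of the simpler initiator-selection strategies (degree rank versus random); the network, threshold value and initiator fraction are assumptions for illustration, not the paper's settings.

      # Threshold-model cascade on a random graph: degree-ranked vs. random initiators.
      # Network, threshold and initiator fraction are assumed for illustration.
      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(5)
      G = nx.erdos_renyi_graph(2000, 0.004, seed=5)
      theta = {v: 0.4 for v in G}                      # uniform adoption threshold

      def cascade_size(G, initiators):
          active = set(initiators)
          changed = True
          while changed:
              changed = False
              for v in G:
                  if v in active:
                      continue
                  nbrs = list(G.neighbors(v))
                  if nbrs and sum(u in active for u in nbrs) / len(nbrs) >= theta[v]:
                      active.add(v)
                      changed = True
          return len(active)

      k = int(0.10 * G.number_of_nodes())              # 10% of nodes as initiators
      by_degree = [v for v, _ in sorted(G.degree, key=lambda x: -x[1])][:k]
      random_set = rng.choice(G.number_of_nodes(), size=k, replace=False).tolist()

      print("degree-rank initiators ->", cascade_size(G, by_degree), "active nodes")
      print("random initiators      ->", cascade_size(G, random_set), "active nodes")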

  2. Selection of models to calculate the LLW source term

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, T.M. (Brookhaven National Lab., Upton, NY (United States))

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab.

  3. Quantum Model for the Selectivity Filter in K$^{+}$ Ion Channel

    CERN Document Server

    Cifuentes, A A

    2013-01-01

    In this work, we present a quantum transport model for the selectivity filter in the KcsA potassium ion channel. This model is fully consistent with the fact that two conduction pathways are involved in the translocation of ions through the filter, and we show that the presence of a second path may actually bring advantages for the filter as a result of quantum interference. To highlight interferences and resonances in the model, we consider the selectivity filter to be driven by a controlled time-dependent external field which changes the free energy scenario and consequently the conduction of the ions. In particular, we demonstrate that the two-pathway conduction mechanism is more advantageous for the filter when dephasing in the transient configurations is lower than in the main configurations. As a matter of fact, K$^+$ ions in the main configurations are highly coordinated by oxygen atoms of the filter backbone and this increases noise. Moreover, we also show that, for a wide range of driving frequencie...

  4. Continuum model for chiral induced spin selectivity in helical molecules

    Energy Technology Data Exchange (ETDEWEB)

    Medina, Ernesto [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France); Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); González-Arraga, Luis A. [IMDEA Nanoscience, Cantoblanco, 28049 Madrid (Spain); Finkelstein-Shapiro, Daniel; Mujica, Vladimiro [Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); Berche, Bertrand [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France)

    2015-05-21

    A minimal model is exactly solved for electron spin transport on a helix. Electron transport is assumed to be supported by well oriented p_z type orbitals on base molecules forming a staircase of definite chirality. In a tight binding interpretation, the spin-orbit coupling (SOC) opens up an effective π_z − π_z coupling via interbase p_{x,y} − p_z hopping, introducing spin coupled transport. The resulting continuum model spectrum shows two Kramers doublet transport channels with a gap proportional to the SOC. Each doubly degenerate channel satisfies time reversal symmetry; nevertheless, a bias chooses a transport direction and thus selects for spin orientation. The model predicts (i) which spin orientation is selected depending on chirality and bias, (ii) changes in spin preference as a function of input Fermi level and (iii) back-scattering suppression protected by the SO gap. We compute the spin current with a definite helicity and find it to be proportional to the torsion of the chiral structure and the non-adiabatic Aharonov-Anandan phase. To describe room temperature transport, we assume that the total transmission is the result of a product of coherent steps.

  5. A Successive Selection Method for finite element model updating

    Science.gov (United States)

    Gou, Baiyong; Zhang, Weijie; Lu, Qiuhai; Wang, Bo

    2016-03-01

    Finite Element (FE) model can be updated effectively and efficiently by using the Response Surface Method (RSM). However, it often involves performance trade-offs such as high computational cost for better accuracy or loss of efficiency for lots of design parameter updates. This paper proposes a Successive Selection Method (SSM), which is based on the linear Response Surface (RS) function and orthogonal design. SSM rewrites the linear RS function into a number of linear equations to adjust the Design of Experiment (DOE) after every FE calculation. SSM aims to interpret the implicit information provided by the FE analysis, to locate the Design of Experiment (DOE) points more quickly and accurately, and thereby to alleviate the computational burden. This paper introduces the SSM and its application, describes the solution steps of point selection for DOE in detail, and analyzes SSM's high efficiency and accuracy in the FE model updating. A numerical example of a simply supported beam and a practical example of a vehicle brake disc show that the SSM can provide higher speed and precision in FE model updating for engineering problems than traditional RSM.

  6. Selection Experiments in the Penna Model for Biological Aging

    Science.gov (United States)

    Medeiros, G.; Idiart, M. A.; de Almeida, R. M. C.

    We consider the Penna model for biological aging to investigate correlations between early fertility and late life survival rates in populations at equilibrium. We consider inherited initial reproduction ages together with a reproduction cost translated in a probability that mother and offspring die at birth, depending on the mother age. For convenient sets of parameters, the equilibrated populations present genetic variability in what regards both genetically programmed death age and initial reproduction age. In the asexual Penna model, a negative correlation between early life fertility and late life survival rates naturally emerges in the stationary solutions. In the sexual Penna model, selection experiments are performed where individuals are sorted by initial reproduction age from the equilibrated populations and the separated populations are evolved independently. After a transient, a negative correlation between early fertility and late age survival rates also emerges in the sense that populations that start reproducing earlier present smaller average genetically programmed death age. These effects appear due to the age structure of populations in the steady state solution of the evolution equations. We claim that the same demographic effects may be playing an important role in selection experiments in the laboratory.

  7. Reinforcing and expanding the predictions of the disturbance vicariance hypothesis in Amazonian harlequin frogs: A molecular phylogenetic and climate envelope modelling approach

    OpenAIRE

    Lötters, Stefan; van der Meijden, Arie; Rödder, Dennis; Köster, Timo E.; Kraus, Tanja; La Marca, Enrique; Haddad, Célio F B; Veith, Michael

    2010-01-01

    The disturbance vicariance hypothesis (DV) has been proposed to explain speciation in Amazonia, especially in its edge regions, e.g. in eastern Guiana Shield harlequin frogs (Atelopus), which are suggested to have derived from a cool-adapted Andean ancestor. In concordance with DV predictions, we tested whether (i) these amphibians display a natural distribution gap in central Amazonia; (ii) east of this gap they constitute a monophyletic lineage which is nested within a pre-Andean/western clade; (...

  8. Expanding Schumann's Pidginization Hypothesis.

    Science.gov (United States)

    Andersen, Roger W.

    1979-01-01

    Proposes a revision and expansion of Schumann's (1978b) model of pidginization as it relates to second language learning. A distinction is made between sociocultural aspects of the pidginization cycle and the acquisitional processes of pidginization, creolization, and decreolization. (Author/AM)

  9. A qualitative model structure sensitivity analysis method to support model selection

    Science.gov (United States)

    Van Hoey, S.; Seuntjens, P.; van der Kwast, J.; Nopens, I.

    2014-11-01

    The selection and identification of a suitable hydrological model structure is a more challenging task than fitting parameters of a fixed model structure to reproduce a measured hydrograph. The suitable model structure is highly dependent on various criteria, i.e. the modeling objective, the characteristics and the scale of the system under investigation and the available data. Flexible environments for model building are available, but need to be assisted by proper diagnostic tools for model structure selection. This paper introduces a qualitative method for model component sensitivity analysis. Traditionally, model sensitivity is evaluated for model parameters. In this paper, the concept is translated into an evaluation of model structure sensitivity. Similarly to the one-factor-at-a-time (OAT) methods for parameter sensitivity, this method varies the model structure components one at a time and evaluates the change in sensitivity towards the output variables. As such, the effect of model component variations can be evaluated towards different objective functions or output variables. The methodology is presented for a simple lumped hydrological model environment, introducing different possible model building variations. By comparing the effect of changes in model structure for different model objectives, model selection can be better evaluated. Based on the presented component sensitivity analysis of a case study, some suggestions with regard to model selection are formulated for the system under study: (1) a non-linear storage component is recommended, since it ensures more sensitive (identifiable) parameters for this component and less parameter interaction; (2) interflow is mainly important for the low flow criteria; (3) excess infiltration process is most influencing when focussing on the lower flows; (4) a more simple routing component is advisable; and (5) baseflow parameters have in general low sensitivity values, except for the low flow criteria.
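
    A highly simplified Python sketch of the one-component-at-a-time idea on a toy lumped bucket model: only the storage-outflow component is swapped while the rest of the structure and the forcing stay fixed, and the change in an objective function (Nash-Sutcliffe efficiency) is recorded. Forcing, components and "observations" are all synthetic assumptions, not the case study above.

      # One-component-at-a-time structure sensitivity on a toy bucket model:
      # swap only the storage-outflow component and compare the objective function.
      import numpy as np

      rng = np.random.default_rng(6)
      rain = rng.gamma(0.3, 8.0, 365)                  # synthetic daily rainfall (mm)

      def run_bucket(outflow, k=0.1, s0=50.0):
          s, q = s0, []
          for p in rain:
              s += p
              qt = outflow(s, k)
              s -= qt
              q.append(qt)
          return np.array(q)

      linear = lambda s, k: k * s                      # component variant 1
      nonlinear = lambda s, k: k * s ** 1.2 / 3.0      # component variant 2

      obs = run_bucket(nonlinear) + rng.normal(0, 0.3, 365)   # pretend "truth" is nonlinear

      def nse(sim):                                    # Nash-Sutcliffe efficiency
          return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      for name, comp in [("linear storage", linear), ("nonlinear storage", nonlinear)]:
          print(f"{name:18s} NSE = {nse(run_bucket(comp)):.3f}")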

  10. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  11. Parametric pattern selection in a reaction-diffusion model.

    Directory of Open Access Journals (Sweden)

    Michael Stich

    We compare spot patterns generated by Turing mechanisms with those generated by replication cascades, in a model one-dimensional reaction-diffusion system. We determine the stability region of spot solutions in parameter space as a function of a natural control parameter (feed-rate), where degenerate patterns with different numbers of spots coexist for a fixed feed-rate. While it is possible to generate identical patterns via both mechanisms, we show that replication cascades lead to a wider choice of pattern profiles that can be selected through a tuning of the feed-rate, exploiting hysteresis and directionality effects of the different pattern pathways.

  12. Linear regression model selection using p-values when the model dimension grows

    CERN Document Server

    Pokarowski, Piotr; Teisseyre, Paweł

    2012-01-01

    We consider a new criterion-based approach to model selection in linear regression. Properties of selection criteria based on p-values of a likelihood ratio statistic are studied for families of linear regression models. We prove that such procedures are consistent, i.e. the minimal true model is chosen with probability tending to 1 even when the number of models under consideration slowly increases with the sample size. The simulation study indicates that the introduced methods perform promisingly when compared with the Akaike and Bayesian Information Criteria.
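
    An illustrative Python sketch of a p-value-driven selection procedure in the same spirit (not the paper's exact criterion): forward selection adds, at each step, the variable with the smallest likelihood-ratio p-value and stops once no addition is significant at an assumed level.

      # Forward selection driven by likelihood-ratio p-values (illustrative settings).
      import numpy as np
      import statsmodels.api as sm
      from scipy.stats import chi2

      rng = np.random.default_rng(7)
      n, p_all = 300, 8
      X = rng.normal(size=(n, p_all))
      y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(size=n)   # true model uses x0, x3

      def loglik(cols):
          exog = sm.add_constant(X[:, cols]) if cols else np.ones((n, 1))
          return sm.OLS(y, exog).fit().llf

      selected, alpha = [], 0.01
      while True:
          candidates = [j for j in range(p_all) if j not in selected]
          if not candidates:
              break
          base = loglik(selected)
          pvals = {j: chi2.sf(2 * (loglik(selected + [j]) - base), df=1) for j in candidates}
          best = min(pvals, key=pvals.get)
          if pvals[best] > alpha:
              break
          selected.append(best)

      print("selected variables:", sorted(selected))           # ideally [0, 3]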

  13. Is the Aluminum Hypothesis Dead?

    OpenAIRE

    Lidsky, Theodore I.

    2014-01-01

    The Aluminum Hypothesis, the idea that aluminum exposure is involved in the etiology of Alzheimer disease, dates back to a 1965 demonstration that aluminum causes neurofibrillary tangles in the brains of rabbits. Initially the focus of intensive research, the Aluminum Hypothesis has gradually been abandoned by most researchers. Yet, despite this current indifference, the Aluminum Hypothesis continues to attract the attention of a small group of scientists and aluminum continues to be viewed w...

  14. On the selection of ordinary differential equation models with application to predator-prey dynamical models.

    Science.gov (United States)

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2015-03-01

    We consider model selection and estimation in a context where there are competing ordinary differential equation (ODE) models, and all the models are special cases of a "full" model. We propose a computationally inexpensive approach that employs statistical estimation of the full model, followed by a combination of a least squares approximation (LSA) and the adaptive Lasso. We show the resulting method, here called the LSA method, to be an (asymptotically) oracle model selection method. The finite sample performance of the proposed LSA method is investigated with Monte Carlo simulations, in which we examine the percentage of selecting true ODE models, the efficiency of the parameter estimation compared to simply using the full and true models, and coverage probabilities of the estimated confidence intervals for ODE parameters, all of which have satisfactory performances. Our method is also demonstrated by selecting the best predator-prey ODE to model a lynx and hare population dynamical system among some well-known and biologically interpretable ODE models.

  15. Robust and distributed hypothesis testing

    CERN Document Server

    Gül, Gökhan

    2017-01-01

    This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available and that the distance is the KL-divergence. Here, the distance can be chosen from a much more general model, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers and modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions which are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...

  16. Is the Aluminum Hypothesis dead?

    Science.gov (United States)

    Lidsky, Theodore I

    2014-05-01

    The Aluminum Hypothesis, the idea that aluminum exposure is involved in the etiology of Alzheimer disease, dates back to a 1965 demonstration that aluminum causes neurofibrillary tangles in the brains of rabbits. Initially the focus of intensive research, the Aluminum Hypothesis has gradually been abandoned by most researchers. Yet, despite this current indifference, the Aluminum Hypothesis continues to attract the attention of a small group of scientists and aluminum continues to be viewed with concern by some of the public. This review article discusses reasons that mainstream science has largely abandoned the Aluminum Hypothesis and explores a possible reason for some in the general public continuing to view aluminum with mistrust.

  17. Prediction of Farmers’ Income and Selection of Model ARIMA

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Drawing on previous approaches to predicting farmers’ income and on the per capita annual net income data for rural households in the Henan Statistical Yearbook from 1979 to 2009, it is found that the time series of farmers’ income follows an I(2) non-stationary process. The order determination and identification of the model are achieved by adopting the correlogram-based analytical method of Box-Jenkins. On the basis of comparing a group of model properties with different parameters, the model ARIMA(4,2,2) is built. The testing result shows that the residual error of the selected model is white noise and follows the normal distribution, so the model can be used to predict farmers’ income. The model prediction indicates that income in rural households will continue to increase from 2009 to 2012, reaching 2 282.4, 2 502.9, 2 686.9 and 2 884.5, respectively. The growth rate will decline from fast to slow, with weak sustainability.
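
    A brief Python sketch of fitting an ARIMA(4,2,2) model and producing a four-step-ahead forecast with statsmodels; the income series below is fabricated for illustration and is not the Henan Statistical Yearbook data.

      # Fit ARIMA(4,2,2) and forecast four steps ahead with statsmodels
      # (the income series here is fabricated, not the Henan data).
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(8)
      years = pd.period_range("1979", periods=30, freq="Y")
      income = pd.Series(100 * np.exp(0.09 * np.arange(30)) + rng.normal(0, 20, 30),
                         index=years)

      fit = ARIMA(income, order=(4, 2, 2)).fit()       # (p, d, q) as selected in the record
      print(fit.params)
      print(fit.forecast(steps=4))                     # four annual steps ahead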

  18. BUILDING ROBUST APPEARANCE MODELS USING ON-LINE FEATURE SELECTION

    Energy Technology Data Exchange (ETDEWEB)

    PORTER, REID B. [Los Alamos National Laboratory; LOVELAND, ROHAN [Los Alamos National Laboratory; ROSTEN, ED [Los Alamos National Laboratory

    2007-01-29

    In many tracking applications, adapting the target appearance model over time can improve performance. This approach is most popular in high frame rate video applications where latent variables, related to the objects appearance (e.g., orientation and pose), vary slowly from one frame to the next. In these cases the appearance model and the tracking system are tightly integrated, and latent variables are often included as part of the tracking system's dynamic model. In this paper we describe our efforts to track cars in low frame rate data (1 frame/second) acquired from a highly unstable airborne platform. Due to the low frame rate, and poor image quality, the appearance of a particular vehicle varies greatly from one frame to the next. This leads us to a different problem: how can we build the best appearance model from all instances of a vehicle we have seen so far. The best appearance model should maximize the future performance of the tracking system, and maximize the chances of reacquiring the vehicle once it leaves the field of view. We propose an online feature selection approach to this problem and investigate the performance and computational trade-offs with a real-world dataset.

  19. Stochastic group selection model for the evolution of altruism

    CERN Document Server

    Silva, A T C; Silva, Ana T. C.

    1999-01-01

    We study numerically and analytically a stochastic group selection model in which a population of asexually reproducing individuals, each of which can be either altruist or non-altruist, is subdivided into $M$ reproductively isolated groups (demes) of size $N$. The cost associated with being altruistic is modelled by assigning the fitness $1-\tau$, with $\tau \in [0,1]$, to the altruists and the fitness 1 to the non-altruists. In the case that the altruistic disadvantage $\tau$ is not too large, we show that the finite $M$ fluctuations are small and practically do not alter the deterministic results obtained for $M \to \infty$. However, for large $\tau$ these fluctuations greatly increase the instability of the altruistic demes to mutations. These results may be relevant to the dynamics of parasite-host systems and, in particular, to explain the importance of mutation in the evolution of parasite virulence.

  20. The Selection of ARIMA Models with or without Regressors

    DEFF Research Database (Denmark)

    Johansen, Søren; Riani, Marco; Atkinson, Anthony C.

    We develop a $C_{p}$ statistic for the selection of regression models with stationary and nonstationary ARIMA error term. We derive the asymptotic theory of the maximum likelihood estimators and show they are consistent and asymptotically Gaussian. We also prove that the distribution of the sum of squares of one step ahead standardized prediction errors, when the parameters are estimated, differs from the chi-squared distribution by a term which tends to infinity at a lower rate than $\chi_{n}^{2}$. We further prove that, in the prediction error decomposition, the term involving the sum ... to noise ratios. A new plot of our time series $C_{p}$ statistic is highly informative about the choice of model.

  1. On Model Specification and Selection of the Cox Proportional Hazards Model*

    OpenAIRE

    Lin, Chen-Yen; Halabi, Susan

    2013-01-01

    Prognosis plays a pivotal role in patient management and trial design. A useful prognostic model should correctly identify important risk factors and estimate their effects. In this article, we discuss several challenges in selecting prognostic factors and estimating their effects using the Cox proportional hazards model. Although it has a flexible semiparametric form, the Cox model is not entirely exempt from model misspecification. To minimize possible misspecification, instead of imposing tradi...

  2. Radial Domany-Kinzel models with mutation and selection

    Science.gov (United States)

    Lavrentovich, Maxim O.; Korolev, Kirill S.; Nelson, David R.

    2013-01-01

    We study the effect of spatial structure, genetic drift, mutation, and selective pressure on the evolutionary dynamics in a simplified model of asexual organisms colonizing a new territory. Under an appropriate coarse-graining, the evolutionary dynamics is related to the directed percolation processes that arise in voter models, the Domany-Kinzel (DK) model, contact process, and so on. We explore the differences between linear (flat front) expansions and the much less familiar radial (curved front) range expansions. For the radial expansion, we develop a generalized, off-lattice DK model that minimizes otherwise persistent lattice artifacts. With both simulations and analytical techniques, we study the survival probability of advantageous mutants, the spatial correlations between domains of neutral strains, and the dynamics of populations with deleterious mutations. “Inflation” at the frontier leads to striking differences between radial and linear expansions. For a colony with initial radius R0 expanding at velocity v, significant genetic demixing, caused by local genetic drift, occurs only up to a finite time t*=R0/v, after which portions of the colony become causally disconnected due to the inflating perimeter of the expanding front. As a result, the effect of a selective advantage is amplified relative to genetic drift, increasing the survival probability of advantageous mutants. Inflation also modifies the underlying directed percolation transition, introducing novel scaling functions and modifications similar to a finite-size effect. Finally, we consider radial range expansions with deflating perimeters, as might arise from colonization initiated along the shores of an island.

  3. Ultrastructural model for size selectivity in glomerular filtration.

    Science.gov (United States)

    Edwards, A; Daniels, B S; Deen, W M

    1999-06-01

    A theoretical model was developed to relate the size selectivity of the glomerular barrier to the structural characteristics of the individual layers of the capillary wall. Thicknesses and other linear dimensions were evaluated, where possible, from previous electron microscopic studies. The glomerular basement membrane (GBM) was represented as a homogeneous material characterized by a Darcy permeability and by size-dependent hindrance coefficients for diffusion and convection, respectively; those coefficients were estimated from recent data obtained with isolated rat GBM. The filtration slit diaphragm was modeled as a single row of cylindrical fibers of equal radius but nonuniform spacing. The resistances of the remainder of the slit channel, and of the endothelial fenestrae, to macromolecule movement were calculated to be negligible. The slit diaphragm was found to be the most restrictive part of the barrier. Because of that, macromolecule concentrations in the GBM increased, rather than decreased, in the direction of flow. Thus the overall sieving coefficient (ratio of Bowman's space concentration to that in plasma) was predicted to be larger for the intact capillary wall than for a hypothetical structure with no GBM. In other words, because the slit diaphragm and GBM do not act independently, the overall sieving coefficient is not simply the product of those for GBM alone and the slit diaphragm alone. Whereas the calculated sieving coefficients were sensitive to the structural features of the slit diaphragm and to the GBM hindrance coefficients, variations in GBM thickness or filtration slit frequency were predicted to have little effect. The ability of the ultrastructural model to represent fractional clearance data in vivo was at least equal to that of conventional pore models with the same number of adjustable parameters. The main strength of the present approach, however, is that it provides a framework for relating structural findings to the size

  4. Developing a conceptual model for selecting and evaluating online markets

    Directory of Open Access Journals (Sweden)

    Sadegh Feizollahi

    2013-04-01

    There is much evidence emphasizing the benefits of using new information and communication technologies in international business, and many believe that e-commerce can help satisfy customers' explicit and implicit requirements. Internet shopping is a concept developed after the introduction of electronic commerce. Information technology (IT) and its applications, specifically in the realm of the internet and e-mail, promoted the development of e-commerce in terms of advertising, motivating and informing. However, with the development of new technologies, credit and financial exchange facilities were built into internet websites so as to facilitate e-commerce. The study sends a total of 200 questionnaires to the target group (teachers, students, professionals and managers of commercial web sites) and manages to collect 130 questionnaires for final evaluation. Cronbach's alpha is used to measure the reliability of the measurement instruments (questionnaires), and confirmatory factor analysis is employed to assure construct validity. In addition, path analysis is applied to the research questions to determine the market selection model. In the present study, after examining different aspects of e-commerce, we provide a conceptual model for selecting and evaluating online marketing in Iran. These findings provide a consistent, targeted and holistic framework for the development of the Internet market in the country.
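
    As a small illustration of the reliability step mentioned above (not the authors' analysis, and on simulated responses), the snippet below computes Cronbach's alpha for a respondents-by-items matrix; values around 0.7 or higher are conventionally read as acceptable internal consistency.

        # Cronbach's alpha for a respondents-by-items matrix (simulated data).
        import numpy as np

        def cronbach_alpha(items):
            """items: 2-D array, rows = respondents, columns = questionnaire items."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()     # sum of per-item variances
            total_var = items.sum(axis=1).var(ddof=1)       # variance of the total score
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(0)
        latent = rng.normal(size=(130, 1))                            # 130 respondents, as in the study
        responses = latent + rng.normal(scale=0.8, size=(130, 6))     # six correlated items
        print(round(cronbach_alpha(responses), 3))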

  5. Modeling selective elimination of quiescent cancer cells from bone marrow.

    Science.gov (United States)

    Cavnar, Stephen P; Rickelmann, Andrew D; Meguiar, Kaille F; Xiao, Annie; Dosch, Joseph; Leung, Brendan M; Cai Lesher-Perez, Sasha; Chitta, Shashank; Luker, Kathryn E; Takayama, Shuichi; Luker, Gary D

    2015-08-01

    Patients with many types of malignancy commonly harbor quiescent disseminated tumor cells in bone marrow. These cells frequently resist chemotherapy and may persist for years before proliferating as recurrent metastases. To test for compounds that eliminate quiescent cancer cells, we established a new 384-well 3D spheroid model in which small numbers of cancer cells reversibly arrest in G1/G0 phase of the cell cycle when cultured with bone marrow stromal cells. Using dual-color bioluminescence imaging to selectively quantify viability of cancer and stromal cells in the same spheroid, we identified single compounds and combination treatments that preferentially eliminated quiescent breast cancer cells but not stromal cells. A treatment combination effective against malignant cells in spheroids also eliminated breast cancer cells from bone marrow in a mouse xenograft model. This research establishes a novel screening platform for therapies that selectively target quiescent tumor cells, facilitating identification of new drugs to prevent recurrent cancer. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  6. A Network Analysis Model for Selecting Sustainable Technology

    Directory of Open Access Journals (Sweden)

    Sangsung Park

    2015-09-01

    Most companies develop technologies to improve their competitiveness in the marketplace. Typically, they then patent these technologies around the world in order to protect their intellectual property. Other companies may use patented technologies to develop new products, but must pay royalties to the patent holders or owners. Should they fail to do so, this can result in legal disputes in the form of patent infringement actions between companies. To avoid such situations, companies attempt to research and develop necessary technologies before their competitors do so. An important part of this process is analyzing existing patent documents in order to identify emerging technologies. In such analyses, extracting sustainable technology from patent data is important, because sustainable technology drives technological competition among companies and, thus, the development of new technologies. In addition, selecting sustainable technologies makes it possible to plan R&D (research and development) efficiently. In this study, we propose a network model that can be used to select sustainable technology from patent documents, based on the centrality and degree measures of social network analysis. To verify the performance of the proposed model, we carry out a case study using actual patent data from patent databases.
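
    The kind of degree/centrality computation such a network model rests on can be sketched with networkx; the nodes and edges below are made-up placeholders, not patent data from the study.

        # Degree and centrality of a toy patent network (placeholder nodes and edges).
        import networkx as nx

        edges = [("P1", "P2"), ("P1", "P3"), ("P2", "P3"), ("P4", "P1"),
                 ("P5", "P1"), ("P6", "P2"), ("P7", "P1")]
        G = nx.Graph(edges)

        degree = dict(G.degree())                          # raw degree of each node
        centrality = nx.degree_centrality(G)               # normalised degree centrality
        betweenness = nx.betweenness_centrality(G)         # how often a node bridges others

        # Rank candidate "sustainable" technologies by centrality.
        for node in sorted(G.nodes, key=centrality.get, reverse=True):
            print(node, degree[node], round(centrality[node], 2), round(betweenness[node], 2))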

  7. The curse of the pharaoh hypothesis.

    Science.gov (United States)

    Gandon, S

    1998-08-22

    The 'curse of the pharaoh' has been used as a metaphor for the hypothesis that higher parasite propagule survival selects for higher virulence. Indeed, the mysterious death of Lord Carnarvon after entering the tomb of the Egyptian pharaoh Tutankhamen could potentially be explained by an infection with a highly virulent and very long-lived pathogen. In this paper, I investigate whether parasite virulence increases with high propagule survival. In this respect, I derive an analytic expression of the evolutionarily stable level of parasite virulence as a function of propagule survival rate when the host-parasite system has reached a stable ecological equilibrium. This result shows that, if multiple infection occurs, higher propagule survival generally increases parasite virulence. This effect is enhanced when parasite dispersal coevolves with parasite virulence. In a more general perspective, the model shows the importance of taking into account the combination of direct and indirect effects (which I call inclusive effects) of higher transmission ability on the evolution of parasite virulence. The recognition of these effects has several practical implications for virulence management.

  8. A CONCEPTUAL MODEL FOR IMPROVED PROJECT SELECTION AND PRIORITISATION

    Directory of Open Access Journals (Sweden)

    P. J. Viljoen

    2012-01-01

    ENGLISH ABSTRACT: Project portfolio management processes are often designed and operated as a series of stages (or project phases) and gates. However, the flow of such a process is often slow, characterised by queues waiting for a gate decision and by repeated work from previous stages waiting for additional information or for re-processing. In this paper the authors propose a conceptual model that applies supply chain and constraint management principles to the project portfolio management process. An advantage of the proposed model is that it provides the ability to select and prioritise projects without undue changes to project schedules. This should result in faster flow through the system.

    AFRIKAANSE OPSOMMING: Processes for managing portfolios of projects are normally designed and operated as a series of phases and gates. The flow through such a process is often slow and is characterised by queues waiting for decisions at the gates, and by rework from earlier phases waiting for further information or for re-processing. In this article a conceptual model is proposed. The model rests on the principles of supply chains as well as constraint management, and offers the advantage that projects can be selected and prioritised without unnecessary changes to project schedules. This should lead to accelerated flow through the system.

  9. Bayesian Model Selection With Network Based Diffusion Analysis

    Directory of Open Access Journals (Sweden)

    Andrew eWhalen

    2016-04-01

    A number of recent studies have used Network Based Diffusion Analysis (NBDA) to detect the role of social transmission in the spread of a novel behavior through a population. In this paper we present a unified framework for performing NBDA in a Bayesian setting, and demonstrate how the Watanabe-Akaike Information Criterion (WAIC) can be used for model selection. We present a specific example of applying this method to Time to Acquisition Diffusion Analysis (TADA). To examine the robustness of this technique, we performed a large scale simulation study and found that NBDA using WAIC could recover the correct model of social transmission under a wide range of cases, including under the presence of random effects, individual level variables, and alternative models of social transmission. This work suggests that NBDA is an effective and widely applicable tool for uncovering whether social transmission underpins the spread of a novel behavior, and may still provide accurate results even when key model assumptions are relaxed.
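
    The WAIC used for model selection above can be computed directly from posterior samples of the pointwise log-likelihood; the sketch below applies the standard formula to synthetic draws and is not tied to any NBDA implementation.

        # WAIC from an (S posterior draws) x (n observations) matrix of log-likelihoods.
        import numpy as np
        from scipy.special import logsumexp

        def waic(log_lik):
            """log_lik[s, i] = log p(y_i | theta_s); lower WAIC = better expected fit."""
            S = log_lik.shape[0]
            lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))   # log pointwise predictive density
            p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))        # effective number of parameters
            return -2 * (lppd - p_waic)

        rng = np.random.default_rng(0)
        log_lik_social = rng.normal(-1.0, 0.1, size=(2000, 50))     # placeholder draws, model A
        log_lik_asocial = rng.normal(-1.3, 0.1, size=(2000, 50))    # placeholder draws, model B
        print("WAIC social:", round(waic(log_lik_social), 1))
        print("WAIC asocial:", round(waic(log_lik_asocial), 1))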

  10. Riemann hypothesis is not correct

    OpenAIRE

    2014-01-01

    This paper uses Nevanlinna's Second Main Theorem from value distribution theory to derive an important conclusion from the Riemann hypothesis. This conclusion contradicts Theorem 8.12 in Titchmarsh's book "The Theory of the Riemann Zeta-function"; therefore we conclude that the Riemann hypothesis is incorrect.

  11. Gaussian Mixture Models and Model Selection for [18F] Fluorodeoxyglucose Positron Emission Tomography Classification in Alzheimer's Disease.

    Directory of Open Access Journals (Sweden)

    Rui Li

    We present a method to discover discriminative brain metabolism patterns in [18F] fluorodeoxyglucose positron emission tomography (PET) scans, facilitating the clinical diagnosis of Alzheimer's disease. In this work, the term "pattern" stands for a certain brain region that characterizes a target group of patients and can be used for classification as well as interpretation purposes. Thus, it can be understood as a so-called "region of interest (ROI)". In the literature, an ROI is often defined by a given brain atlas that specifies a number of brain regions, which corresponds to an anatomical approach. The present work introduces a semi-data-driven approach that is based on learning the characteristics of the given data, given some prior anatomical knowledge. A Gaussian Mixture Model (GMM) and model selection are combined to return a clustering of voxels that may serve for the definition of ROIs. Experiments on both an in-house dataset and data of the Alzheimer's Disease Neuroimaging Initiative (ADNI) suggest that the proposed approach arrives at a better diagnosis than a merely anatomical approach or conventional statistical hypothesis testing.
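
    A compact analogue of the GMM-plus-model-selection step (not the paper's pipeline, and run on random feature vectors rather than PET voxels) can be written with scikit-learn, choosing the number of mixture components by BIC.

        # Select the number of Gaussian mixture components by BIC (illustrative features).
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (200, 3)),      # stand-ins for voxel feature vectors
                       rng.normal(4, 1, (200, 3)),
                       rng.normal(8, 1, (150, 3))])

        models = {k: GaussianMixture(n_components=k, covariance_type="full",
                                     random_state=0).fit(X)
                  for k in range(1, 8)}
        bic = {k: m.bic(X) for k, m in models.items()}
        best_k = min(bic, key=bic.get)                   # lowest BIC wins
        labels = models[best_k].predict(X)               # cluster labels ~ candidate ROIs
        print("selected number of components:", best_k)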

  12. Multicriteria decision group model for the selection of suppliers

    Directory of Open Access Journals (Sweden)

    Luciana Hazin Alencar

    2008-08-01

    Several authors have been studying group decision making over the years, which indicates how relevant it is. This paper presents a multicriteria group decision model based on the ELECTRE IV and VIP Analysis methods, for those cases where there is great divergence among the decision makers. This model includes two stages. In the first, the ELECTRE IV method is applied and a collective criteria ranking is obtained. In the second, using the criteria ranking, VIP Analysis is applied and the alternatives are selected. To illustrate the model, a numerical application in the context of the selection of suppliers in project management is used. The suppliers that form part of the project team have a crucial role in project management. They are involved in a network of connected activities that can jeopardize the success of the project if they are not undertaken in an appropriate way. The question tackled is how to select service suppliers for a project, on behalf of an enterprise, in a way that addresses the multiple objectives of the decision-makers.

  13. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with a high-dimensional dataset is the number of input features (variables) involved. Applying ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves knowledge of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicate which variables appear statistically less important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is an ML algorithm that performs FS as part of its
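
    For readers unfamiliar with the feature-selection routes compared above, the sketch below contrasts an information-gain-style filter (mutual information) with random-forest importances on synthetic presence/absence data; CFS has no single standard scikit-learn call, so it is left out of this toy comparison.

        # Mutual-information filter vs. random-forest importances on synthetic data.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import mutual_info_classif

        X, y = make_classification(n_samples=2000, n_features=20, n_informative=6,
                                   n_redundant=4, random_state=0)

        mi = mutual_info_classif(X, y, random_state=0)            # filter: information-gain analogue
        rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
        rf_imp = rf.feature_importances_                          # FS embedded in the ML model

        print("top features (mutual information):", np.argsort(mi)[::-1][:8])
        print("top features (random forest):     ", np.argsort(rf_imp)[::-1][:8])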

  14. The curse of the pharaoh hypothesis.

    OpenAIRE

    Gandon, S.

    1998-01-01

    The 'curse of the pharaoh' has been used as a metaphor for the hypothesis that higher parasite propagule survival selects for higher virulence. Indeed, the mysterious death of Lord Carnarvon after entering the tomb of the Egyptian pharaoh Tutankhamen could potentially be explained by an infection with a highly virulent and very long-lived pathogen. In this paper, I investigate whether parasite virulence increases with high propagule survival. In this respect, I derive an analytic expression of...

  15. Multiphysics modeling of selective laser sintering/melting

    Science.gov (United States)

    Ganeriwala, Rishi Kumar

    A significant percentage of total global employment is due to the manufacturing industry. However, manufacturing also accounts for nearly 20% of total energy usage in the United States according to the EIA. In fact, manufacturing accounted for 90% of industrial energy consumption and 84% of industry carbon dioxide emissions in 2002. Clearly, advances in manufacturing technology and efficiency are necessary to curb emissions and help society as a whole. Additive manufacturing (AM) refers to a relatively recent group of manufacturing technologies whereby one can 3D print parts, which has the potential to significantly reduce waste, reconfigure the supply chain, and generally disrupt the whole manufacturing industry. Selective laser sintering/melting (SLS/SLM) is one type of AM technology with the distinct advantage of being able to 3D print metals and rapidly produce net shape parts with complicated geometries. In SLS/SLM parts are built up layer-by-layer out of powder particles, which are selectively sintered/melted via a laser. However, in order to produce defect-free parts of sufficient strength, the process parameters (laser power, scan speed, layer thickness, powder size, etc.) must be carefully optimized. Obviously, these process parameters will vary depending on material, part geometry, and desired final part characteristics. Running experiments to optimize these parameters is costly, energy intensive, and extremely material specific. Thus a computational model of this process would be highly valuable. In this work a three dimensional, reduced order, coupled discrete element - finite difference model is presented for simulating the deposition and subsequent laser heating of a layer of powder particles sitting on top of a substrate. Validation is provided and parameter studies are conducted showing the ability of this model to help determine appropriate process parameters and an optimal powder size distribution for a given material. Next, thermal stresses upon

  16. Patch-based generative shape model and MDL model selection for statistical analysis of archipelagos

    DEFF Research Database (Denmark)

    Ganz, Melanie; Nielsen, Mads; Brandt, Sami

    2010-01-01

    We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radiographs. The generative model is constructed by (1) learning a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to model the neighbourhood correlations between the patches, and (3) automatic selection of the model complexity by the minimum description length principle. The generative shape model is proposed as a probability distribution of a binary image where the model is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation...

  17. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.

  18. Hyperopt: a Python library for model selection and hyperparameter optimization

    Science.gov (United States)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
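
    A minimal Hyperopt call, in the spirit of the tutorial described above but with a toy one-dimensional objective instead of a real classifier, looks like the following.

        # Minimise a toy objective over a one-dimensional search space with Hyperopt's TPE.
        from hyperopt import fmin, tpe, hp, Trials

        def objective(x):
            return (x - 3.0) ** 2           # stand-in for a cross-validation loss

        trials = Trials()
        best = fmin(fn=objective,
                    space=hp.uniform("x", -10, 10),
                    algo=tpe.suggest,
                    max_evals=100,
                    trials=trials)
        print(best)                          # e.g. {'x': 2.98...}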

  19. NVC Based Model for Selecting Effective Requirement Elicitation Technique

    Directory of Open Access Journals (Sweden)

    Md. Rizwan Beg

    2012-10-01

    The requirements engineering process starts with the gathering of requirements, i.e., requirements elicitation. Requirements elicitation (RE) is the base building block for a software project and has a very high impact on the subsequent design and build phases as well. Failing to capture system requirements accurately is a major factor in the failure of most software projects. Due to the criticality and impact of this phase, it is very important to perform requirements elicitation in no less than a perfect manner. One of the most difficult jobs for the elicitor is to select an appropriate technique for eliciting the requirements. Interviewing and interacting with stakeholders during the elicitation process is a communication-intensive activity involving verbal and non-verbal communication (NVC). The elicitor should give emphasis to non-verbal communication along with verbal communication so that requirements are recorded more efficiently and effectively. In this paper we propose a model in which stakeholders are classified by observing non-verbal communication, and this classification is used as a basis for elicitation technique selection. We also propose an efficient plan for requirements elicitation which intends to overcome the constraints faced by the elicitor.

  20. Scaling limits of a model for selection at two scales

    Science.gov (United States)

    Luo, Shishi; Mattingly, Jonathan C.

    2017-04-01

    The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval [0,1] with dependence on a single parameter, λ. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on λ and the behavior of the initial data around 1. The second scaling leads to a measure-valued Fleming–Viot process, an infinite dimensional stochastic process that is frequently associated with a population genetics.
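
    The flavour of such a two-scale process can be captured in a short simulation. The update rules below (a within-urn step favouring the fast type, and an urn-level copying step favouring urns rich in the slow type) are an assumed simplification for illustration, not the exact ball-and-urn dynamics analysed in the paper.

        # Toy two-scale selection: fast balls win within urns, slow-rich urns win between urns.
        import numpy as np

        rng = np.random.default_rng(2)
        n_urns, urn_size, steps = 300, 30, 20000
        s_within, s_between = 0.1, 0.1
        fast = rng.integers(0, urn_size + 1, n_urns)        # fast-ball count in each urn

        for _ in range(steps):
            # within-urn step: one birth-death event, fast type has advantage s_within
            u = rng.integers(n_urns)
            p = fast[u] / urn_size
            p_birth = p * (1 + s_within) / (p * (1 + s_within) + (1 - p))
            fast[u] += (rng.random() < p_birth) - (rng.random() < p)
            fast[u] = np.clip(fast[u], 0, urn_size)
            # between-urn step: a random urn is replaced by a copy of a (slow-favoured) urn
            w = 1 + s_between * (1 - fast / urn_size)
            src = rng.choice(n_urns, p=w / w.sum())
            fast[rng.integers(n_urns)] = fast[src]

        print("mean fraction of fast type:", round(fast.mean() / urn_size, 3))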

  1. A new hypothesis of drug refractory epilepsy: neural network hypothesis.

    Science.gov (United States)

    Fang, Min; Xi, Zhi-Qin; Wu, Yuan; Wang, Xue-Feng

    2011-06-01

    Drug refractoriness is an important clinical problem in epilepsy, affecting a substantial number of patients globally. The mechanisms underlying drug refractoriness need to be understood in order to develop rational therapies. The two current prevailing theories of drug refractory epilepsy (DRE) are the target hypothesis and the transporter hypothesis. However, these hypotheses are not adequate to explain the mechanisms of all DRE. Thus, we propose another possible mechanism of DRE, the neural network hypothesis. It is hypothesized that seizure-induced alterations of brain plasticity, including axonal sprouting, synaptic reorganization, neurogenesis and gliosis, could contribute to the formation of an abnormal neural network which not only avoids the inhibitory effect of the endogenous antiepileptic system but also prevents traditional antiepileptic drugs from reaching their targets, eventually leading to DRE. We illustrate this hypothesis at the molecular and structural level based on our recent studies and other related research.

  2. Robustness and epistasis in mutation-selection models

    Science.gov (United States)

    Wolff, Andrea; Krug, Joachim

    2009-09-01

    We investigate the fitness advantage associated with the robustness of a phenotype against deleterious mutations using deterministic mutation-selection models of a quasispecies type equipped with a mesa-shaped fitness landscape. We obtain analytic results for the robustness effect which become exact in the limit of infinite sequence length. Thereby, we are able to clarify a seeming contradiction between recent rigorous work and an earlier heuristic treatment based on mapping to a Schrödinger equation. We exploit the quantum mechanical analogy to calculate a correction term for finite sequence lengths and verify our analytic results by numerical studies. In addition, we investigate the occurrence of an error threshold for a general class of epistatic landscapes and show that diminishing epistasis is a necessary but not sufficient condition for error threshold behaviour.

  3. Model catalysis by size-selected cluster deposition

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Scott [Univ. of Utah, Salt Lake City, UT (United States)

    2015-11-20

    This report summarizes the accomplishments during the last four years of the subject grant. Results are presented for experiments in which size-selected model catalysts were studied under surface science and aqueous electrochemical conditions. Strong effects of cluster size were found, and by correlating the size effects with size-dependent physical properties of the samples measured by surface science methods, it was possible to deduce mechanistic insights, such as the factors that control the rate-limiting step in the reactions. Results are presented for CO oxidation, CO binding energetics and geometries, and electronic effects under surface science conditions, and for the electrochemical oxygen reduction reaction, ethanol oxidation reaction, and for oxidation of carbon by water.

  4. Agent-Based vs. Equation-based Epidemiological Models:A Model Selection Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Sukumar, Sreenivas R [ORNL; Nutaro, James J [ORNL

    2012-01-01

    This paper is motivated by the need to design model validation strategies for epidemiological disease-spread models. We consider both agent-based and equation-based models of pandemic disease spread and study the nuances and complexities one has to consider from the perspective of model validation. For this purpose, we instantiate an equation-based model and an agent-based model of the 1918 Spanish flu and we leverage data published in the literature for our case study. We present our observations from the perspective of each implementation and discuss the application of model-selection criteria to compare the risk in choosing one modeling paradigm over another. We conclude with a discussion of our experience and document future ideas for a model validation framework.
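
    To make the contrast concrete (with a generic SIR model rather than the authors' 1918 influenza implementations), the sketch below runs a deterministic equation-based SIR alongside a minimal stochastic agent-based counterpart using the same parameters.

        # Equation-based vs. agent-based SIR with matched parameters (generic sketch).
        import numpy as np
        from scipy.integrate import solve_ivp

        beta, gamma, N, days = 0.3, 0.1, 10_000, 160

        def sir(t, y):                                    # equation-based model: SIR ODEs
            s, i, r = y
            return [-beta * s * i / N, beta * s * i / N - gamma * i, gamma * i]

        ode = solve_ivp(sir, (0, days), [N - 10, 10, 0], t_eval=np.arange(days))

        rng = np.random.default_rng(0)                    # agent-based model: S=0, I=1, R=2
        state = np.zeros(N, dtype=int)
        state[:10] = 1
        peak_abm = 0
        for _ in range(days):
            n_inf = (state == 1).sum()
            p_inf = 1 - np.exp(-beta * n_inf / N)         # per-susceptible daily infection prob
            new_rec = (state == 1) & (rng.random(N) < gamma)
            new_inf = (state == 0) & (rng.random(N) < p_inf)
            state[new_rec] = 2
            state[new_inf] = 1
            peak_abm = max(peak_abm, (state == 1).sum())

        print("ODE peak infected:", int(ode.y[1].max()))
        print("ABM peak infected:", peak_abm)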

  5. Model selection by LASSO methods in a change-point model

    CERN Document Server

    Ciuperca, Gabriela

    2011-01-01

    The paper considers a linear regression model with multiple change-points occurring at unknown times. The LASSO technique is very interesting since it allows parameter estimation, including the change-points, and automatic variable selection to be carried out simultaneously. The asymptotic properties of the LASSO-type estimator (which has the LASSO estimator as a particular case) and of the adaptive LASSO estimator are studied. For this last estimator the oracle properties are proved. In both cases, a model selection criterion is proposed. Numerical examples are provided showing the performance of the adaptive LASSO estimator compared to the LS estimator.
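
    One common way to set up this kind of estimation is to regress the series on a basis of step functions, so that non-zero LASSO coefficients mark candidate change-points; the sketch below does this with scikit-learn on simulated data and illustrates the idea only, not the paper's LASSO-type or adaptive LASSO estimators.

        # LASSO over a step-function basis: non-zero coefficients ~ change-point locations.
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n = 300
        signal = np.concatenate([np.zeros(100), 2 * np.ones(100), -1 * np.ones(100)])
        y = signal + rng.normal(0, 0.3, n)

        X = np.tril(np.ones((n, n)))[:, 1:]        # column j: step switching on at time j+1
        lasso = Lasso(alpha=0.05, max_iter=10000).fit(X, y)
        change_points = np.flatnonzero(np.abs(lasso.coef_) > 0.1) + 1
        print("estimated change-points near:", change_points)   # expect values near 100 and 200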

  6. The fighting hypothesis in combat : How well does the fighting hypothesis explain human left-handed minorities?

    NARCIS (Netherlands)

    Groothuis, Ton G.G.; McManus, I.C.; Schaafsma, Sara M.; Geuze, Reint H.; McGrew, WC; Schiefenhovel, W; Marchant, LF

    2013-01-01

    The strong population bias in hand preference in favor of right-handedness seems to be a typical human trait. An elegant evolutionary hypothesis explaining this trait is the so-called fighting hypothesis that postulates that left-handedness is under frequency-dependent selection. The fighting hypoth

  7. Chain-Wise Generalization of Road Networks Using Model Selection

    Science.gov (United States)

    Bulatov, D.; Wenzel, S.; Häufel, G.; Meidow, J.

    2017-05-01

    Streets are essential entities of urban terrain and their automatized extraction from airborne sensor data is cumbersome because of a complex interplay of geometric, topological and semantic aspects. Given a binary image, representing the road class, centerlines of road segments are extracted by means of skeletonization. The focus of this paper lies in a well-reasoned representation of these segments by means of geometric primitives, such as straight line segments as well as circle and ellipse arcs. We propose the fusion of raw segments based on similarity criteria; the output of this process are the so-called chains which better match to the intuitive perception of what a street is. Further, we propose a two-step approach for chain-wise generalization. First, the chain is pre-segmented using circlePeucker and finally, model selection is used to decide whether two neighboring segments should be fused to a new geometric entity. Thereby, we consider both variance-covariance analysis of residuals and model complexity. The results on a complex data-set with many traffic roundabouts indicate the benefits of the proposed procedure.
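
    The per-chain model-selection step can be illustrated by choosing between a straight line and a circle for a set of 2-D points using BIC; the circle fit below is the simple algebraic (Kasa-style) least-squares version, and the whole snippet is a schematic stand-in for the variance-covariance analysis used in the paper.

        # Choose between a line and a circle for a 2-D point chain via BIC (schematic).
        import numpy as np

        def bic(rss, n, k):
            return n * np.log(rss / n) + k * np.log(n)

        def line_bic(x, y):
            A = np.column_stack([x, np.ones_like(x)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            return bic(np.sum((y - A @ coef) ** 2), len(x), k=2)

        def circle_bic(x, y):                              # algebraic (Kasa) circle fit
            A = np.column_stack([x, y, np.ones_like(x)])
            (a, c, d), *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
            cx, cy = a / 2, c / 2
            r = np.sqrt(d + cx ** 2 + cy ** 2)
            return bic(np.sum((np.hypot(x - cx, y - cy) - r) ** 2), len(x), k=3)

        rng = np.random.default_rng(0)
        t = np.linspace(0, np.pi / 2, 60)                  # quarter of a roundabout
        x = 20 * np.cos(t) + rng.normal(0, 0.2, 60)
        y = 20 * np.sin(t) + rng.normal(0, 0.2, 60)
        print("BIC line:  ", round(line_bic(x, y), 1))
        print("BIC circle:", round(circle_bic(x, y), 1))   # should be clearly lower here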

  8. A simple model of group selection that cannot be analyzed with inclusive fitness

    NARCIS (Netherlands)

    M. van Veelen; S. Luo; B. Simon

    2014-01-01

    A widespread claim in evolutionary theory is that every group selection model can be recast in terms of inclusive fitness. Although there are interesting classes of group selection models for which this is possible, we show that it is not true in general. With a simple set of group selection models,

  9. Sleep memory processing: the sequential hypothesis

    Directory of Open Access Journals (Sweden)

    Antonio eGiuditta

    2014-12-01

    According to the sequential hypothesis (SH), memories acquired during wakefulness are processed during sleep in two serial steps occurring, respectively, during slow wave sleep (SWS) and REM sleep. During SWS, memories to be retained are distinguished from irrelevant or competing traces that undergo downgrading or elimination. Processed memories are stored again during REM sleep, which integrates them with preexisting memories. The hypothesis received support from a wealth of EEG, behavioral, and biochemical analyses of trained rats. Further evidence was provided by independent studies of human subjects. SH basic premises, data, and interpretations have been compared with corresponding viewpoints of the synaptic homeostasis hypothesis (SHY). Their similarities and differences are presented and discussed within the framework of sleep processing operations. SHY's emphasis on synaptic renormalization during SWS is acknowledged to underline a key sleep effect, but this cannot marginalize sleep's main role in selecting memories to be retained from downgrading traces, and in their integration with preexisting memories. In addition, SHY's synaptic renormalization raises an unsolved dilemma that clashes with the accepted memory storage mechanism exclusively based on modifications of synaptic strength. This difficulty may be bypassed by the assumption that SWS-processed memories are stored again by REM sleep in brain subnuclear quantum particles. Storing of memories in quantum particles may also occur in other vigilance states. Hints are provided on ways to subject the quantum hypothesis to experimental tests.

  10. The absorber hypothesis of electrodynamics

    OpenAIRE

    De Luca, Jayme

    2008-01-01

    We test the absorber hypothesis of the action-at-a-distance electrodynamics for globally-bounded solutions of a finite-particle universe. We find that the absorber hypothesis forbids globally-bounded motions for a universe containing only two charged particles, otherwise the condition alone does not forbid globally-bounded motions. We discuss the implication of our results for the various forms of electrodynamics of point charges.

  11. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in the parameters, the model and the measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification

  12. Empirical evaluation of scoring functions for Bayesian network model selection.

    Science.gov (United States)

    Liu, Zhifa; Malone, Brandon; Yuan, Changhe

    2012-01-01

    In this work, we empirically evaluate the capability of various scoring functions of Bayesian networks for recovering true underlying structures. Similar investigations have been carried out before, but they typically relied on approximate learning algorithms to learn the network structures. The suboptimal structures found by the approximation methods have unknown quality and may affect the reliability of their conclusions. Our study uses an optimal algorithm to learn Bayesian network structures from datasets generated from a set of gold standard Bayesian networks. Because all optimal algorithms always learn equivalent networks, this ensures that only the choice of scoring function affects the learned networks. Another shortcoming of the previous studies stems from their use of random synthetic networks as test cases. There is no guarantee that these networks reflect real-world data. We use real-world data to generate our gold-standard structures, so our experimental design more closely approximates real-world situations. A major finding of our study suggests that, in contrast to results reported by several prior works, the Minimum Description Length (MDL) (or equivalently, Bayesian information criterion (BIC)) consistently outperforms other scoring functions such as Akaike's information criterion (AIC), Bayesian Dirichlet equivalence score (BDeu), and factorized normalized maximum likelihood (fNML) in recovering the underlying Bayesian network structures. We believe this finding is a result of using both datasets generated from real-world applications rather than from random processes used in previous studies and learning algorithms to select high-scoring structures rather than selecting random models. Other findings of our study support existing work, e.g., large sample sizes result in learning structures closer to the true underlying structure; the BDeu score is sensitive to the parameter settings; and the fNML performs pretty well on small datasets. We also

  13. Using nonlinear models in fMRI data analysis: model selection and activation detection.

    Science.gov (United States)

    Deneux, Thomas; Faugeras, Olivier

    2006-10-01

    There is an increasing interest in using physiologically plausible models in fMRI analysis. These models do raise new mathematical problems in terms of parameter estimation and interpretation of the measured data. In this paper, we show how to use physiological models to map and analyze brain activity from fMRI data. We describe a maximum likelihood parameter estimation algorithm and a statistical test that allow the following two actions: selecting the most statistically significant hemodynamic model for the measured data and deriving activation maps based on such model. Furthermore, as parameter estimation may leave considerable uncertainty about the exact values of parameters, model identifiability characterization is a particular focus of our work. We applied these methods to different variations of the Balloon Model (Buxton, R.B., Wong, E.C., and Frank, L.R. 1998. Dynamics of blood flow and oxygenation changes during brain activation: the balloon model. Magn. Reson. Med. 39: 855-864; Buxton, R.B., Uludağ, K., Dubowitz, D.J., and Liu, T.T. 2004. Modelling the hemodynamic response to brain activation. NeuroImage 23: 220-233; Friston, K. J., Mechelli, A., Turner, R., and Price, C. J. 2000. Nonlinear responses in fMRI: the balloon model, volterra kernels, and other hemodynamics. NeuroImage 12: 466-477) in a visual perception checkerboard experiment. Our model selection proved that hemodynamic models better explain the BOLD response than linear convolution, in particular because they are able to capture some features like poststimulus undershoot or nonlinear effects. On the other hand, nonlinear and linear models are comparable when signals get noisier, which explains that activation maps obtained in both frameworks are comparable. The tools we have developed prove that statistical inference methods used in the framework of the General Linear Model might be generalized to nonlinear models.

  14. Effects of Parceling on Model Selection: Parcel-Allocation Variability in Model Ranking.

    Science.gov (United States)

    Sterba, Sonya K; Rights, Jason D

    2016-01-25

    Research interest often lies in comparing structural model specifications implying different relationships among latent factors. In this context parceling is commonly accepted, assuming the item-level measurement structure is well known and, conservatively, assuming items are unidimensional in the population. Under these assumptions, researchers compare competing structural models, each specified using the same parcel-level measurement model. However, little is known about consequences of parceling for model selection in this context-including whether and when model ranking could vary across alternative item-to-parcel allocations within-sample. This article first provides a theoretical framework that predicts the occurrence of parcel-allocation variability (PAV) in model selection index values and its consequences for PAV in ranking of competing structural models. These predictions are then investigated via simulation. We show that conditions known to manifest PAV in absolute fit of a single model may or may not manifest PAV in model ranking. Thus, one cannot assume that low PAV in absolute fit implies a lack of PAV in ranking, and vice versa. PAV in ranking is shown to occur under a variety of conditions, including large samples. To provide an empirically supported strategy for selecting a model when PAV in ranking exists, we draw on relationships between structural model rankings in parcel- versus item-solutions. This strategy employs the across-allocation modal ranking. We developed software tools for implementing this strategy in practice, and illustrate them with an example. Even if a researcher has substantive reason to prefer one particular allocation, investigating PAV in ranking within-sample still provides an informative sensitivity analysis.

  15. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence.

    Science.gov (United States)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
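
    The brute-force reference mentioned above amounts to averaging the likelihood over draws from the prior. The toy sketch below does this for a conjugate Gaussian-mean problem and compares it with the BIC-based approximation; it only illustrates the computation, the hydrological models in the study being far more expensive to evaluate.

        # Brute-force Monte Carlo Bayesian model evidence vs. a BIC approximation (toy example).
        import numpy as np
        from scipy import stats
        from scipy.special import logsumexp

        rng = np.random.default_rng(0)
        y = rng.normal(1.0, 1.0, size=30)                  # data with known noise sd = 1

        def log_evidence_mc(y, prior_sd=2.0, n_draws=100_000):
            theta = rng.normal(0.0, prior_sd, n_draws)      # draws from the prior on the mean
            log_lik = stats.norm.logpdf(y[:, None], loc=theta, scale=1.0).sum(axis=0)
            return logsumexp(log_lik) - np.log(n_draws)     # log of the prior-averaged likelihood

        def log_evidence_bic(y):
            log_lik = stats.norm.logpdf(y, loc=y.mean(), scale=1.0).sum()   # at the MLE
            return -0.5 * (-2 * log_lik + 1 * np.log(len(y)))               # -BIC/2, k = 1 parameter

        print("MC  log evidence:", round(log_evidence_mc(y), 2))
        print("BIC log evidence:", round(log_evidence_bic(y), 2))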

  17. Brain Regions Engaged by Part- and Whole-Task Performance in a Video Game: A Model-Based Test of the Decomposition Hypothesis

    Science.gov (United States)

    Anderson, John R.; Bothell, Daniel; Fincham, Jon M.; Anderson, Abraham R.; Poole, Ben; Qin, Yulin

    2011-01-01

    Part- and whole-task conditions were created by manipulating the presence of certain components of the Space Fortress video game. A cognitive model was created for two-part games that could be combined into a model that performed the whole game. The model generated predictions both for behavioral patterns and activation patterns in various brain…

  18. Can Methicillin-resistant Staphylococcus aureus Silently Travel From the Gut to the Wound and Cause Postoperative Infection? Modeling the "Trojan Horse Hypothesis".

    Science.gov (United States)

    Krezalek, Monika A; Hyoju, Sanjiv; Zaborin, Alexander; Okafor, Emeka; Chandrasekar, Laxmi; Bindokas, Vitas; Guyton, Kristina; Montgomery, Christopher P; Daum, Robert S; Zaborina, Olga; Boyle-Vavra, Susan; Alverdy, John C

    2017-02-09

    To determine whether intestinal colonization with methicillin-resistant Staphylococcus aureus (MRSA) can be the source of surgical site infections (SSIs). We hypothesized that gut-derived MRSA may cause SSIs via mechanisms in which circulating immune cells scavenge MRSA from the gut, home to surgical wounds, and cause infection (Trojan Horse Hypothesis). MRSA gut colonization was achieved by disrupting the microbiota with antibiotics, imposing a period of starvation and introducing MRSA via gavage. Next, mice were subjected to a surgical injury (30% hepatectomy) and rectus muscle injury and ischemia before skin closure. All wounds were cultured before skin closure. To control for postoperative wound contamination, reiterative experiments were performed in mice in which the closed wound was painted with live MRSA for 2 consecutive postoperative days. To rule out extracellular bacteremia as a cause of wound infection, MRSA was injected intravenously in mice subjected to rectus muscle ischemia and injury. All wound cultures were negative before skin closure, ruling out intraoperative contamination. Out of 40 mice, 4 (10%) developed visible abscesses. Nine mice (22.5%) had MRSA positive cultures of the rectus muscle without visible abscesses. No SSIs were observed in mice injected intravenously with MRSA. Wounds painted with MRSA after closure did not develop infections. Circulating neutrophils from mice captured by flow cytometry demonstrated MRSA in their cytoplasm. Immune cells as Trojan horses carrying gut-derived MRSA may be a plausible mechanism of SSIs in the absence of direct contamination.

  19. Hypothesis-driven physical examination curriculum.

    Science.gov (United States)

    Allen, Sharon; Olson, Andrew; Menk, Jeremiah; Nixon, James

    2016-12-09

    Medical students traditionally learn physical examination skills as a rote list of manoeuvres. Alternatives like hypothesis-driven physical examination (HDPE) may promote students' understanding of the contribution of physical examination to diagnostic reasoning. We sought to determine whether first-year medical students can effectively learn to perform a physical examination using an HDPE approach, and then tailor the examination to specific clinical scenarios. First-year medical students at the University of Minnesota were taught both traditional and HDPE approaches during a required 17-week clinical skills course in their first semester. The end-of-course evaluation assessed HDPE skills: students were assigned one of two cardiopulmonary cases. Each case included two diagnostic hypotheses. During an interaction with a standardised patient, students were asked to select physical examination manoeuvres in order to make a final diagnosis. Items were weighted and selection order was recorded. First-year students with minimal pathophysiology performed well. All students selected the correct diagnosis. Importantly, students varied the order when selecting examination manoeuvres depending on the diagnoses under consideration, demonstrating early clinical decision-making skills. An early introduction to HDPE may reinforce physical examination skills for hypothesis generation and testing, and can foster early clinical decision-making skills. This has important implications for further research in physical examination instruction. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  20. The Lande-Kirkpatrick mechanism is the null model of evolution by intersexual selection: implications for meaning, honesty, and design in intersexual signals.

    Science.gov (United States)

    Prum, Richard O

    2010-11-01

    The Fisher-inspired, arbitrary intersexual selection models of Lande (1981) and Kirkpatrick (1982), including both stable and unstable equilibrium conditions, provide the appropriate null model for the evolution of traits and preferences by intersexual selection. Like the Hardy–Weinberg equilibrium, the Lande–Kirkpatrick (LK) mechanism arises as an intrinsic consequence of genetic variation in trait and preference in the absence of other evolutionary forces. The LK mechanism is equivalent to other intersexual selection mechanisms in the absence of additional selection on preference and with additional trait-viability and preference-viability correlations equal to zero. The LK null model predicts the evolution of arbitrary display traits that are neither honest nor dishonest, indicate nothing other than mating availability, and lack any meaning or design other than their potential to correspond to mating preferences. The current standard for demonstrating an arbitrary trait is impossible to meet because it requires proof of the null hypothesis. The LK null model makes distinct predictions about the evolvability of traits and preferences. Examples of recent intersexual selection research document the confirmationist pitfalls of lacking a null model. Incorporation of the LK null into intersexual selection will contribute to serious examination of the extent to which natural selection on preferences shapes signals.

  1. Roger Sperry and his chemoaffinity hypothesis.

    Science.gov (United States)

    Meyer, R L

    1998-10-01

    In the early 1940s, Roger Sperry performed a series of insightful experiments on the visual system of lower vertebrates that led him to draw two important conclusions: When optic fibers were severed, the regenerating fibers grew back to their original loci in the midbrain tectum to re-establish a topographical set of connections; and the re-establishment of these orderly connections underlay the orderly behavior of the animal. From these conclusions, he inferred that each optic fiber and each tectal neuron possessed cytochemical labels that uniquely denoted their neuronal type and position and that optic fibers could utilize these labels to selectively navigate to their matching target cell. This inference was subsequently formulated into a general explanation of how neurons form ordered interconnections during development and became known as the chemoaffinity hypothesis. The origins of this hypothesis, the controversies that surrounded it for several decades and its eventual acceptance, are discussed in this article.

  2. Continuous time limits of the Utterance Selection Model

    CERN Document Server

    Michaud, Jérôme

    2016-01-01

    In this paper, we derive new continuous time limits of the Utterance Selection Model (USM) for language change (Baxter et al., Phys. Rev. E 73, 046118, 2006). This is motivated by the fact that the Fokker-Planck continuous time limit derived in the original version of the USM is only valid for a small range of parameters. We investigate the consequences of relaxing these constraints on parameters. Using the normal approximation of the multinomial distribution, we derive a new continuous time limit of the USM in the form of a weak-noise stochastic differential equation. We argue that this weak noise, not captured by the Kramers-Moyal expansion, cannot be neglected. We then propose a coarse-graining procedure, which takes the form of a stochastic version of the heterogeneous mean field approximation. This approximation groups the behaviour of nodes of the same degree, reducing the complexity of the problem. With the help of this approximation, we study in detail two simple families of networks:...

  3. Estimating seabed scattering mechanisms via Bayesian model selection.

    Science.gov (United States)

    Steininger, Gavin; Dosso, Stan E; Holland, Charles W; Dettmer, Jan

    2014-10-01

    A quantitative inversion procedure is developed and applied to determine the dominant scattering mechanism (surface roughness and/or volume scattering) from seabed scattering-strength data. The classification system is based on trans-dimensional Bayesian inversion with the deviance information criterion used to select the dominant scattering mechanism. Scattering is modeled using first-order perturbation theory as due to one of three mechanisms: Interface scattering from a rough seafloor, volume scattering from a heterogeneous sediment layer, or mixed scattering combining both interface and volume scattering. The classification system is applied to six simulated test cases where it correctly identifies the true dominant scattering mechanism as having greater support from the data in five cases; the remaining case is indecisive. The approach is also applied to measured backscatter-strength data where volume scattering is determined as the dominant scattering mechanism. Comparison of inversion results with core data indicates the method yields both a reasonable volume heterogeneity size distribution and a good estimate of the sub-bottom depths at which scatterers occur.
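
    The deviance information criterion (DIC) used above for selecting the dominant scattering mechanism can be computed directly from MCMC output as DIC = D_bar + p_D, where D_bar is the posterior mean deviance and p_D = D_bar - D(theta_bar) is the effective number of parameters. The sketch below is a generic illustration of that computation, not the authors' code; the log-likelihood function and posterior sample arrays are hypothetical placeholders, and the candidate model with the lowest DIC would be read as the better-supported mechanism.

```python
import numpy as np

def deviance_information_criterion(log_likelihood, posterior_samples):
    """DIC = D_bar + p_D, computed from posterior draws.

    log_likelihood    : callable mapping a parameter vector to the data log-likelihood.
    posterior_samples : (n_draws, n_params) array of MCMC samples.
    """
    samples = np.asarray(posterior_samples, dtype=float)
    deviances = np.array([-2.0 * log_likelihood(theta) for theta in samples])
    d_bar = deviances.mean()                                  # posterior mean deviance
    d_at_mean = -2.0 * log_likelihood(samples.mean(axis=0))   # deviance at posterior mean
    p_d = d_bar - d_at_mean                                   # effective number of parameters
    return d_bar + p_d

# Hypothetical usage: one DIC per candidate scattering model; lower is preferred.
# dic_roughness = deviance_information_criterion(loglik_roughness, draws_roughness)
# dic_volume    = deviance_information_criterion(loglik_volume, draws_volume)
# dic_mixed     = deviance_information_criterion(loglik_mixed, draws_mixed)
```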

  4. Binocular rivalry waves in a directionally selective neural field model

    Science.gov (United States)

    Carroll, Samuel R.; Bressloff, Paul C.

    2014-10-01

    We extend a neural field model of binocular rivalry waves in the visual cortex to incorporate direction selectivity of moving stimuli. For each eye, we consider a one-dimensional network of neurons that respond maximally to a fixed orientation and speed of a grating stimulus. Recurrent connections within each one-dimensional network are taken to be excitatory and asymmetric, where the asymmetry captures the direction and speed of the moving stimuli. Connections between the two networks are taken to be inhibitory (cross-inhibition). As per previous studies, we incorporate slow adaptation as a symmetry-breaking mechanism that allows waves to propagate. We derive an analytical expression for traveling wave solutions of the neural field equations, as well as an implicit equation for the wave speed as a function of neurophysiological parameters, and analyze their stability. Most importantly, we show that propagation of traveling waves is faster in the direction of stimulus motion than against it, which is in agreement with previous experimental and computational studies.
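
    A generic two-population neural field of the kind sketched above, with asymmetric recurrent excitation, cross-inhibition between the eyes and slow adaptation, can be written as follows. This is an illustrative textbook form under assumed notation (u_L, u_R are the left- and right-eye activity fields, q_L, q_R the adaptation variables), not the paper's exact equations.

```latex
\begin{aligned}
\tau \,\partial_t u_L(x,t) &= -u_L + \int w_e(x-x')\,f\!\big(u_L(x',t)\big)\,dx'
                              - \int w_i(x-x')\,f\!\big(u_R(x',t)\big)\,dx' - q_L(x,t),\\
\tau_q \,\partial_t q_L(x,t) &= -q_L + \beta\, f\!\big(u_L(x,t)\big),
\end{aligned}
```

    with the right-eye equations obtained by exchanging the labels L and R, f a sigmoidal (or Heaviside) firing-rate function, and the excitatory kernel w_e made asymmetric, for example a shifted exponential w_e(x) proportional to exp(-|x - xi|/sigma), so that the shift xi encodes the preferred direction and speed of the grating; this asymmetry is what allows the wave to travel faster with the stimulus motion than against it.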

  5. Modeling neuron selectivity over simple midlevel features for image classification.

    Science.gov (United States)

    Shu Kong; Zhuolin Jiang; Qiang Yang

    2015-08-01

    We now know that good mid-level features can greatly enhance the performance of image classification, but how to efficiently learn such image features is still an open question. In this paper, we present an efficient unsupervised midlevel feature learning approach (MidFea), which only involves simple operations, such as k-means clustering, convolution, pooling, vector quantization, and random projection. We show that these simple features can also achieve good performance in traditional classification tasks. To further boost the performance, we model the neuron selectivity (NS) principle by building an additional layer over the midlevel features prior to the classifier. The NS-layer learns category-specific neurons in a supervised manner with both bottom-up inference and top-down analysis, and thus supports fast inference for a query image. Through extensive experiments, we demonstrate that this higher-level NS-layer notably improves the classification accuracy with our simple MidFea, achieving comparable performance for face recognition, gender classification, age estimation, and object categorization. In particular, our approach runs faster in inference by an order of magnitude than sparse-coding-based feature learning methods. As a conclusion, we argue that not only do carefully learned features (MidFea) bring improved performance, but a sophisticated mechanism (NS-layer) at a higher level also boosts the performance further.
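
    As a rough illustration of the pipeline the abstract enumerates (k-means dictionary learning, convolution-like encoding, pooling and random projection), the sketch below strings these operations together for grayscale images. It is one plausible arrangement with made-up parameter values, not the authors' MidFea implementation, and it omits the supervised NS-layer.

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_patches(images, patch=6, n=20000, seed=0):
    """Randomly sample flattened patches from a (N, H, W) stack of grayscale images."""
    rng = np.random.default_rng(seed)
    H, W = images.shape[1:]
    idx = rng.integers(len(images), size=n)
    ys = rng.integers(H - patch + 1, size=n)
    xs = rng.integers(W - patch + 1, size=n)
    return np.stack([images[i, y:y + patch, x:x + patch].ravel()
                     for i, y, x in zip(idx, ys, xs)])

def midlevel_features(images, n_atoms=64, patch=6, out_dim=256, seed=0):
    """k-means dictionary -> sliding-window encoding -> sum pooling -> random projection."""
    rng = np.random.default_rng(seed)
    atoms = KMeans(n_clusters=n_atoms, n_init=4, random_state=seed).fit(
        sample_patches(images, patch, seed=seed)).cluster_centers_
    feats = []
    for img in images:
        H, W = img.shape
        pooled = np.zeros(n_atoms)
        for y in range(H - patch + 1):            # dense "convolutional" encoding
            for x in range(W - patch + 1):
                pooled += atoms @ img[y:y + patch, x:x + patch].ravel()
        feats.append(pooled)                      # global sum pooling over positions
    feats = np.asarray(feats)
    proj = rng.standard_normal((n_atoms, out_dim)) / np.sqrt(out_dim)
    return feats @ proj                           # random projection to the final descriptor
```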

  6. 5-HTP hypothesis of schizophrenia.

    Science.gov (United States)

    Fukuda, K

    2014-01-01

    To pose a new hypothesis of schizophrenia that affirms and unifies conventional hypotheses. Outside the brain, there are 5-HTP-containing argyrophil cells that have tryptophan hydroxylase 1 without l-aromatic amino acid decarboxylase. Monoamine oxidase in the liver and lung metabolizes 5-HT, rather than 5-HTP, and 5-HTP freely crosses the blood-brain barrier, converting to 5-HT in the brain. Therefore, I postulate that hyperfunction of 5-HTP-containing argyrophil cells may be a cause of schizophrenia. I investigate the consistency of this hypothesis with other hypotheses using a deductive method. Overactive 5-HTP-containing argyrophil cells produce excess amounts of 5-HTP. Abundant 5-HTP increases 5-HT within the brain (linking to the 5-HT hypothesis), and leads to negative feedback of 5-HT synthesis at the rate-limiting step catalysed by tryptophan hydroxylase 2. Owing to this negative feedback, brain tryptophan is further metabolized via the kynurenine pathway. Increased kynurenic acid contributes to deficiencies of glutamate function and dopamine activity, known causes of schizophrenia. The 5-HTP hypothesis affirms conventional hypotheses, as the metabolic condition caused by acceleration of tryptophan hydroxylase 1 and suppression of tryptophan hydroxylase 2 activates both 5-HT and kynurenic acid. In order to empirically test the theory, it will be useful to monitor serum 5-HTP and match it to different phases of schizophrenia. This hypothesis may signal a new era in which schizophrenia is treated as a brain-gut interaction. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Variable Selection for Generalized Varying Coefficient Partially Linear Models with Diverging Number of Parameters

    Institute of Scientific and Technical Information of China (English)

    Zheng-yan Lin; Yu-ze Yuan

    2012-01-01

    Semiparametric models with diverging numbers of predictors arise in many contemporary scientific areas. Variable selection for these models consists of two components: model selection for the nonparametric components and selection of significant variables for the parametric portion. In this paper, we consider a variable selection procedure that combines basis function approximation with the SCAD penalty. The proposed procedure simultaneously selects significant variables in the parametric components and the nonparametric components. With appropriate selection of the tuning parameters, we establish the consistency and sparseness of this procedure.
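
    For reference, the SCAD penalty of Fan and Li (2001) that such procedures attach to the coefficients is defined through its derivative, and the penalized objective takes the schematic form below. This is a generic statement under assumed notation, not the paper's exact formulation; in the paper the penalty is applied both to the parametric coefficients and to the groups of basis-function coefficients representing the nonparametric components.

```latex
p_{\lambda}'(\theta) \;=\; \lambda\left\{\, I(\theta \le \lambda)
      \;+\; \frac{(a\lambda-\theta)_{+}}{(a-1)\lambda}\, I(\theta > \lambda) \right\},
      \qquad \theta > 0,\ a > 2 \ (\text{commonly } a = 3.7),
\qquad
Q(\boldsymbol\beta) \;=\; \sum_{i=1}^{n} \ell\big(y_i,\; \eta_i(\boldsymbol\beta)\big)
      \;+\; n \sum_{j=1}^{p_n} p_{\lambda}\big(|\beta_j|\big),
```

    where l is the loss (negative quasi-likelihood) and eta_i is the linear predictor after the varying coefficients have been approximated by basis expansions.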

  8. Estimation and Model Selection for Model-Based Clustering with the Conditional Classification Likelihood

    CERN Document Server

    Baudry, Jean-Patrick

    2012-01-01

    The Integrated Completed Likelihood (ICL) criterion was proposed by Biernacki et al. (2000) in the model-based clustering framework to select a relevant number of classes and has been used by statisticians in various application areas. A theoretical study of this criterion is proposed. A contrast related to the clustering objective is introduced: the conditional classification likelihood. This yields an estimator and a class of model selection criteria. The properties of these new procedures are studied and ICL is proved to be an approximation of one of these criteria. We contrast these results with the currently prevailing view of ICL, namely that it is not consistent. Moreover, these results give insight into the class notion underlying ICL and feed a reflection on the class notion in clustering. General results on penalized minimum contrast criteria and on mixture models are derived, which are interesting in their own right.

  9. A hidden Markov model to identify and adjust for selection bias: an example involving mixed migration strategies.

    Science.gov (United States)

    Fieberg, John R; Conn, Paul B

    2014-05-01

    An important assumption in observational studies is that sampled individuals are representative of some larger study population. Yet, this assumption is often unrealistic. Notable examples include online public-opinion polls, publication biases associated with statistically significant results, and in ecology, telemetry studies with significant habitat-induced probabilities of missed locations. This problem can be overcome by modeling selection probabilities simultaneously with other predictor-response relationships or by weighting observations by inverse selection probabilities. We illustrate the problem and a solution when modeling mixed migration strategies of northern white-tailed deer (Odocoileus virginianus). Captures occur on winter yards where deer migrate in response to changing environmental conditions. Yet, not all deer migrate in all years, and captures during mild years are more likely to target deer that migrate every year (i.e., obligate migrators). Characterizing deer as conditional or obligate migrators is also challenging unless deer are observed for many years and under a variety of winter conditions. We developed a hidden Markov model where the probability of capture depends on each individual's migration strategy (conditional versus obligate migrator), a partially latent variable that depends on winter severity in the year of capture. In a 15-year study, involving 168 white-tailed deer, the estimated probability of migrating for conditional migrators increased nonlinearly with an index of winter severity. We estimated a higher proportion of obligates in the study cohort than in the population, except during a span of 3 years surrounding back-to-back severe winters. These results support the hypothesis that selection biases occur as a result of capturing deer on winter yards, with the magnitude of bias depending on the severity of winter weather. Hidden Markov models offer an attractive framework for addressing selection biases due to their

  10. Generalized Hypergeometric Ensembles: Statistical Hypothesis Testing in Complex Networks

    CERN Document Server

    Casiraghi, Giona; Scholtes, Ingo; Schweitzer, Frank

    2016-01-01

    Statistical ensembles define probability spaces of all networks consistent with given aggregate statistics and have become instrumental in the analysis of relational data on networked systems. Their numerical and analytical study provides the foundation for the inference of topological patterns, the definition of network-analytic measures, as well as for model selection and statistical hypothesis testing. Contributing to the foundation of these important data science techniques, in this article we introduce generalized hypergeometric ensembles, a framework of analytically tractable statistical ensembles of finite, directed and weighted networks. This framework can be interpreted as a generalization of the classical configuration model, which is commonly used to randomly generate networks with a given degree sequence or distribution. Our generalization rests on the introduction of dyadic link propensities, which capture the degree-corrected tendencies of pairs of nodes to form edges between each other. Studyin...

  11. Model selection and assessment for multi-species occupancy models

    Science.gov (United States)

    Broms, Kristin M.; Hooten, Mevin B.; Fitzpatrick, Ryan M.

    2016-01-01

    While multi-species occupancy models (MSOMs) are emerging as a popular method for analyzing biodiversity data, formal checking and validation approaches for this class of models have lagged behind. Concurrent with the rise in application of MSOMs among ecologists, a quiet regime shift is occurring in Bayesian statistics where predictive model comparison approaches are experiencing a resurgence. Unlike single-species occupancy models that use integrated likelihoods, MSOMs are usually couched in a Bayesian framework and contain multiple levels. Standard model checking and selection methods are often unreliable in this setting and there is only limited guidance in the ecological literature for this class of models. We examined several different contemporary Bayesian hierarchical approaches for checking and validating MSOMs and applied these methods to a freshwater aquatic study system in Colorado, USA, to better understand the diversity and distributions of plains fishes. Our findings indicated distinct differences among model selection approaches, with cross-validation techniques performing the best in terms of prediction.

  12. Fuzzy Programming Models for Vendor Selection Problem in a Supply Chain

    Institute of Scientific and Technical Information of China (English)

    WANG Junyan; ZHAO Ruiqing; TANG Wansheng

    2008-01-01

    This paper characterizes quality, budget, and demand as fuzzy variables in a fuzzy vendor selection expected value model and a fuzzy vendor selection chance-constrained programming model, to maximize the total quality level. The two models have distinct advantages over existing methods for selecting vendors in fuzzy environments. A genetic algorithm based on fuzzy simulations is designed to solve these two models. Numerical examples show the effectiveness of the algorithm.

  13. Performance Measurement Model for the Supplier Selection Based on AHP

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2015-10-01

    The performance of the supplier is a crucial factor for the success or failure of any company. Rational and effective decision making in terms of the supplier selection process can help the organization to optimize cost and quality functions. The nature of supplier selection processes is generally complex, especially when the company has a large variety of products and vendors. Over the years, several solutions and methods have emerged for addressing the supplier selection problem (SSP). Experience and studies have shown that there is no best way for evaluating and selecting a specific supplier process, but that it varies from one organization to another. The aim of this research is to demonstrate how a multiple attribute decision making approach can be effectively applied for the supplier selection process.
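
    To make the AHP computation concrete, the sketch below derives priority weights from a reciprocal pairwise-comparison matrix via its principal eigenvector and reports Saaty's consistency ratio. The three-supplier matrix is a made-up example, not data from the paper.

```python
import numpy as np

# Saaty's random consistency indices for matrices of size 3..9.
RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """Priority weights and consistency ratio for a reciprocal pairwise-comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = int(np.argmax(vals.real))
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()                                  # principal eigenvector, normalized
    n = len(A)
    ci = (vals.real[k] - n) / (n - 1)                # consistency index
    cr = ci / RANDOM_INDEX.get(n, 1.49)              # CR < 0.1 is conventionally acceptable
    return w, cr

# Hypothetical example: three suppliers compared on a single criterion.
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
weights, consistency_ratio = ahp_weights(A)
```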

  14. Continuous time limits of the utterance selection model

    Science.gov (United States)

    Michaud, Jérôme

    2017-02-01

    In this paper we derive alternative continuous time limits of the utterance selection model (USM) for language change [G. J. Baxter et al., Phys. Rev. E 73, 046118 (2006), 10.1103/PhysRevE.73.046118]. This is motivated by the fact that the Fokker-Planck continuous time limit derived in the original version of the USM is only valid for a small range of parameters. We investigate the consequences of relaxing these constraints on parameters. Using the normal approximation of the multinomial distribution, we derive a continuous time limit of the USM in the form of a weak-noise stochastic differential equation. We argue that this weak noise, not captured by the Kramers-Moyal expansion, cannot be neglected. We then propose a coarse-graining procedure, which takes the form of a stochastic version of the heterogeneous mean field approximation. This approximation groups the behavior of nodes of the same degree, reducing the complexity of the problem. With the help of this approximation, we study in detail two simple families of networks: the regular networks and the star-shaped networks. The analysis reveals and quantifies a finite-size effect of the dynamics. If we increase the size of the network by keeping all the other parameters constant, we transition from a state where conventions emerge to a state where no convention emerges. Furthermore, we show that the degree of a node acts as a time scale. For heterogeneous networks such as star-shaped networks, the time scale difference can become very large, leading to a noisier behavior of highly connected nodes.

  15. Bioenergetic modeling reveals that Chinese green tree vipers select postprandial temperatures in laboratory thermal gradients that maximize net energy intake.

    Science.gov (United States)

    Tsai, Tein-Shun; Lee, How-Jing; Tu, Ming-Chung

    2009-11-01

    With bioenergetic modeling, we tested the hypothesis that reptiles maximize net energy gain by postprandial thermal selection. Previous studies have shown that Chinese green tree vipers (Trimeresurus s. stejnegeri) display postprandial thermophily (mean preferred temperature Tp for males = 27.8 degrees C) in a linear thigmothermal gradient when seclusion sites and water are available. With published empirical models of digestion-associated factors for this snake, we calculated the average rate (E(net)) and efficiency (K(net)) of net energy gain from possible combinations of meal size, activity level, and feeding frequency at each temperature. The simulations consistently revealed that E(net) is maximized at the Tp of these snakes. Although K(net) peaks at a lower temperature than E(net), the value of K(net) remains high (≥0.85 relative to the maximum) at the peak temperature of E(net). This suggests that the demands of both E(net) and K(net) can be met by postprandial thermal selection in this snake. In conclusion, the data support our prediction that postprandial thermal selection may maximize net energy gain.

  16. Sequential hypothesis testing with spatially correlated presence-absence data.

    Science.gov (United States)

    DePalma, Elijah; Jeske, Daniel R; Lara, Jesus R; Hoddle, Mark

    2012-06-01

    A pest management decision to initiate a control treatment depends upon an accurate estimate of mean pest density. Presence-absence sampling plans significantly reduce sampling efforts to make treatment decisions by using the proportion of infested leaves to estimate mean pest density in lieu of counting individual pests. The use of sequential hypothesis testing procedures can significantly reduce the number of samples required to make a treatment decision. Here we construct a mean-proportion relationship for Oligonychus perseae Tuttle, Baker, and Abatiello, a mite pest of avocados, from empirical data, and develop a sequential presence-absence sampling plan using Bartlett's sequential test procedure. Bartlett's test can accommodate pest population models that contain nuisance parameters that are not of primary interest. However, it requires that population measurements be independent, which may not be realistic because of spatial correlation of pest densities across trees within an orchard. We propose to mitigate the effect of spatial correlation in a sequential sampling procedure by using a tree-selection rule (i.e., maximin) that sequentially selects each newly sampled tree to be maximally spaced from all other previously sampled trees. Our proposed presence-absence sampling methodology applies Bartlett's test to a hypothesis test developed using an empirical mean-proportion relationship coupled with a spatial, statistical model of pest populations, with spatial correlation mitigated via the aforementioned tree-selection rule. We demonstrate the effectiveness of our proposed methodology over a range of parameter estimates appropriate for densities of O. perseae that would be observed in avocado orchards in California.
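
    The maximin tree-selection rule described above is straightforward to state in code: at each step, pick the not-yet-sampled tree whose minimum distance to all previously sampled trees is largest. The sketch below is a minimal, generic version with hypothetical variable names; Bartlett's sequential test itself is not implemented here.

```python
import numpy as np

def next_tree_maximin(coords, sampled_idx):
    """Index of the unsampled tree maximizing the minimum distance to sampled trees.

    coords      : (n_trees, 2) array of tree positions.
    sampled_idx : list of indices of trees already sampled.
    """
    coords = np.asarray(coords, dtype=float)
    if not sampled_idx:
        return 0  # start anywhere, e.g. the first tree in the list
    remaining = [i for i in range(len(coords)) if i not in set(sampled_idx)]
    dists = np.linalg.norm(coords[remaining][:, None, :] - coords[sampled_idx][None, :, :], axis=2)
    return remaining[int(np.argmax(dists.min(axis=1)))]

# Sketch of the sequential loop: keep adding trees, updating the proportion of
# infested leaves, until the sequential test reaches an accept/reject decision.
# sampled = [next_tree_maximin(tree_coords, [])]
# while not decision_reached:
#     sampled.append(next_tree_maximin(tree_coords, sampled))
```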

  17. Domestic Distortions and the Deindustrialization Hypothesis

    OpenAIRE

    Paul Krugman

    1996-01-01

    It is widely believed that U.S. trade deficits have displaced workers from highly paid manufacturing jobs into less well-paid service employment, contributing to declining incomes for the nation as a whole. Although proponents of this view do not usually think of it this way, this analysis falls squarely into the 'domestic distortions' framework pioneered by Jagdish Bhagwati. This paper models the deindustrialization hypothesis explicitly as a domestic distortions issue, and shows that while ...

  18. Counselor Hypothesis-Testing Strategies.

    Science.gov (United States)

    Strohmer, Douglas C.; Newman, Lisa J.

    1983-01-01

    Reports two experiments relevant to the questioning strategies counselors use in testing their hypotheses about clients. Results supported the idea that counselors are able to take a tentative hypothesis about a client and test its accuracy against additional independent, unbiased observations of the client. (LLL)

  19. Fourier power, subjective distance and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    Directory of Open Access Journals (Sweden)

    Mark Daniel Lescroart

    2015-11-01

    Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1,386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue.
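
    The voxel-wise modeling step amounts to fitting, for each candidate feature space, a linear encoding model per voxel and scoring it on withheld images. The sketch below is a generic illustration, not the authors' code; ridge regularization is used here for numerical stability although the abstract specifies linear regression, and the matrix names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge

def encoding_model_r2(X_train, Y_train, X_test, Y_test, alpha=1.0):
    """Per-voxel variance explained on withheld data by a linear encoding model.

    X_* : (n_images, n_features) feature matrices (e.g. Fourier power, distance,
          or object-category features for each scene).
    Y_* : (n_images, n_voxels) BOLD responses.
    """
    model = Ridge(alpha=alpha).fit(X_train, Y_train)
    pred = model.predict(X_test)
    ss_res = ((Y_test - pred) ** 2).sum(axis=0)
    ss_tot = ((Y_test - Y_test.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot          # one R^2 value per voxel

# Comparing the three hypotheses then reduces to comparing withheld-data R^2 maps:
# r2_fourier  = encoding_model_r2(F_train, Y_train, F_test, Y_test)
# r2_distance = encoding_model_r2(D_train, Y_train, D_test, Y_test)
# r2_category = encoding_model_r2(C_train, Y_train, C_test, Y_test)
```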

  20. Selecting representative climate models for climate change impact studies : An advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change

  4. Bond selection in the photoisomerization reaction of anionic green fluorescent protein and kindling fluorescent protein chromophore models.

    Science.gov (United States)

    Olsen, Seth; Smith, Sean C

    2008-07-09

    The chromophores of the most widely known fluorescent proteins (FPs) are derivatives of a core p-hydroxybenzylidene-imidazolinon-5-one (HBI) motif, which usually occurs as a phenolate anion. Double bond photoisomerization of the exocyclic bridge of HBI is widely held to be an important internal conversion mechanism for FP chromophores. Herein we describe the ground and excited-state electronic structures and potential energy surfaces of two model chromophores: the 4-p-hydroxybenzylidene-1,2-dimethyl-imidazolin-5-one anion (HBDI), representing green FPs (GFPs), and the 2-acetyl-4-hydroxybenzylidene-1-methyl-imidazolin-5-one anion (AHBMI), representing kindling FPs (KFPs). These chromophores differ by a single substitution, but we observe qualitative differences in the potential energy surfaces which indicate inversion of bond selection in the photoisomerization reaction. Bond selection is also modulated by whether the reaction proceeds from a Z or an E conformation. These configurations correspond to fluorescent and nonfluorescent states of structurally characterized FPs, including some which can be reversibly switched by specific illumination regimes. We explain the difference in bond selectivity via substituent stabilization effects on a common set of charge-localized chemical structures. Different combinations of these structures give rise to both optically active (planar) and twisted intramolecular charge-transfer (TICT) states of the molecules. We offer a prediction of the gas-phase absorption of AHBMI, which has not yet been measured. We offer a hypothesis to explain the unusual fluorescence of AHBMI in DMF solution, as well as an experimental proposal to test our hypothesis.

  5. Modelling transport of chokka squid (Loligo reynaudii) paralarvae off South Africa: reviewing, testing and extending the ‘Westward Transport Hypothesis'

    CSIR Research Space (South Africa)

    Martins, RS

    2013-08-01

    ... to variability in transport of newly hatched paralarvae from spawning grounds to the ‘cold ridge’ nursery region some 100–200 km to the west, where oceanographic conditions sustain high productivity. We used an individual-based model (IBM) coupled with a 3-D...

  6. Forecasting macroeconomic variables using neural network models and three automated model selection techniques

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    2016-01-01

    When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. To alleviate the problem, White (2006) presented a solution (QuickNet) that converts the specification and nonlinear estimation problem into a linear model selection and estimation problem. We shall compare its performance to that of two other procedures building on the linearization idea: the Marginal Bridge Estimator and Autometrics. Second, one must decide whether forecasting...

  7. Integrating Platform Selection Rules in the Model-Driven Architecture Approach

    NARCIS (Netherlands)

    Tekinerdogan, B.; Bilir, S.; Abatlevi, C.; Assmann, U.; Aksit, M.; Rensink, A.

    2005-01-01

    A key issue in the MDA approach is the transformation of platform independent models to platform specific models. Before transforming to a platform specific model, however, it is necessary to select the appropriate platform. Various platforms exist with different properties and the selection of the

  8. "News Broadcasting" English Listening Improvement Model Based on the Input-Output Hypothesis

    Institute of Scientific and Technical Information of China (English)

    温纯

    2015-01-01

    Restricted by limited vocabulary size, poor pronunciation and intonation, weak language sense, unfamiliarity with topics and other factors, second-year English major students in China often encounter great difficulty in improving their English listening. This paper introduces a listening improvement model, "News Broadcasting", based on Krashen's Input Hypothesis and Swain's Output Hypothesis. In this model, students work intensively on news materials through four steps, listening, transcribing, checking and reading, so as to break through the main bottlenecks in listening comprehension and quickly raise their overall language competence.

  9. Selected Constitutive Models for Simulating the Hygromechanical Response of Wood

    DEFF Research Database (Denmark)

    Frandsen, Henrik Lund

    -phase transport model. In this paper a so-called multi-Fickian model is revised with respect to the incorporated essential sorption rate model. Based on existing experimental results the sorption rate model is studied. A desorption rate model analogous to the adsorption rate model is proposed. Furthermore......, the boundary conditions are discussed based on discrepancies found for similar research on moisture transport in paper stacks. Paper III: A new sorption hysteresis model suitable for implementation into a numerical method is developed. The prevailing so-called scanning curves are modeled by closed...... in paper III is applied to two different wood species and to bleach-kraft paperboard. Paper V: The sorption hysteresis model is implemented into the multi-Fickian model allowing simultaneous simulation of non-Fickian effects and hysteresis. A key point for this implementation is definition of the condition...

  10. INVESTIGATING THE "COMPLEMENTARITY HYPOTHESIS" IN GREEK AGRICULTURE: AN EMPIRICAL ANALYSIS

    OpenAIRE

    Katrakilidis, Constantinos P.; Tabakis, Nikolaos M.

    2001-01-01

    This study investigates determinants of private capital formation in Greek agriculture and tests the "complementarity" against the "crowding out" hypothesis using multivariate cointegration techniques and ECVAR modeling in conjunction with variance decomposition and impulse response analysis. The results provide evidence of a significant positive causal effect of government spending on private capital formation, thus supporting the "complementarity" hypothesis for Greek agriculture.

  11. Dose-Response Modeling Under Simple Order Restrictions Using Bayesian Variable Selection Methods

    OpenAIRE

    Otava, Martin; Shkedy, Ziv; Lin, Dan; Goehlmann, Hinrich W. H.; Bijnens, Luc; Talloen, Willem; Kasim, Adetayo

    2014-01-01

    Bayesian modeling of dose–response data offers the possibility to establish the relationship between a clinical or a genomic response and increasing doses of a therapeutic compound and to determine the nature of the relationship wherever it exists. In this article, we focus on an order-restricted one-way ANOVA model which can be used to test the null hypothesis of no dose effect against an ordered alternative. Within the framework of the dose–response modeling, a model uncertainty can be addr...

  12. Natural selection at work: an accelerated evolutionary computing approach to predictive model selection

    Directory of Open Access Journals (Sweden)

    Olcay Akman

    2010-07-01

    We implement genetic-algorithm-based predictive model building as an alternative to traditional stepwise regression. We then employ the Information Complexity Measure (ICOMP) as a measure of model fitness instead of the commonly used R-square. Furthermore, we propose some modifications to the genetic algorithm to increase the overall efficiency.

  13. [Selection of biomass estimation models for Chinese fir plantation].

    Science.gov (United States)

    Li, Yan; Zhang, Jian-guo; Duan, Ai-guo; Xiang, Cong-wei

    2010-12-01

    A total of 11 kinds of biomass models were adopted to estimate the biomass of single trees and their organs in young (7-year-old), middle-aged (16-year-old), mature (28-year-old), and mixed-age Chinese fir plantations; in total, 308 biomass models were fitted. Among the 11 kinds of biomass models, power function models fitted best, followed by exponential models and then polynomial models. Twenty-one optimal biomass models for individual organs and single trees were chosen, including 18 models for individual organs and 3 models for single trees. There were 7 optimal biomass models for the single tree in the mixed-age plantation, containing 6 for individual organs and 1 for the whole tree, all in the form of power functions. The optimal biomass models for single trees in plantations of a given age had poor generality, but the models for the mixed-age plantation had a certain generality with high accuracy and could be used for estimating single-tree biomass in plantations of different ages. The optimal biomass models for single Chinese fir trees in Shaowu, Fujian Province were used to predict single-tree biomass in a mature (28-year-old) Chinese fir plantation in Jiangxi Province, and it was found that the models based on a large sample of forest biomass had relatively high accuracy and could be applied over large areas, whereas the regional models based on small samples were limited to small areas.
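
    Since the best-fitting forms were power functions, the core computation is an allometric fit of the type W = a * x^b. The sketch below fits it by ordinary least squares on the log-log scale, one common approach, under hypothetical variable names; the study may instead have used nonlinear least squares on the original scale.

```python
import numpy as np

def fit_power_model(x, w):
    """Fit the allometric model W = a * x**b on the log-log scale.

    x : predictor, e.g. diameter at breast height (DBH) or DBH**2 * height.
    w : observed biomass of the tree or organ.
    Returns (a, b, r2) where r2 is the coefficient of determination of the log-log fit.
    """
    lx, lw = np.log(x), np.log(w)
    b, log_a = np.polyfit(lx, lw, 1)            # slope and intercept of log W on log x
    resid = lw - (log_a + b * lx)
    r2 = 1.0 - (resid ** 2).sum() / ((lw - lw.mean()) ** 2).sum()
    return np.exp(log_a), b, r2

# Hypothetical usage, one fit per organ and per stand age class:
# a_stem, b_stem, r2_stem = fit_power_model(dbh, stem_biomass)
```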

  14. Lipofuscin hypothesis of Alzheimer's disease.

    Science.gov (United States)

    Giaccone, Giorgio; Orsi, Laura; Cupidi, Chiara; Tagliavini, Fabrizio

    2011-01-01

    The primary culprit responsible for Alzheimer's disease (AD) remains unknown. Aβ protein has been identified as the main component of amyloid of senile plaques, the hallmark lesion of AD, but it is not definitively established whether the formation of extracellular Aβ deposits is the absolute harbinger of the series of pathological events that hit the brain in the course of sporadic AD. The aim of this paper is to draw attention to a relatively overlooked age-related product, lipofuscin, and advance the hypothesis that its release into the extracellular space following the death of neurons may substantially contribute to the formation of senile plaques. The presence of intraneuronal Aβ, similarities between AD and age-related macular degeneration, and the possible explanation of some of the unknown issues in AD suggest that this hypothesis should not be discarded out of hand.

  15. Testing competing forms of the Milankovitch hypothesis

    DEFF Research Database (Denmark)

    Kaufmann, Robert K.; Juselius, Katarina

    2016-01-01

    We test competing forms of the Milankovitch hypothesis by estimating the coefficients and diagnostic statistics for a cointegrated vector autoregressive model that includes 10 climate variables and four exogenous variables for solar insolation. The estimates are consistent with the physical mechanisms postulated to drive glacial cycles. They show that the climate variables are driven partly by solar insolation, determining the timing and magnitude of glaciations and terminations, and partly by internal feedback dynamics, pushing the climate variables away from equilibrium. We argue that the latter is consistent with a weak form of the Milankovitch hypothesis and that it should be restated as follows: Internal climate dynamics impose perturbations on glacial cycles that are driven by solar insolation. Our results show that these perturbations are likely caused by slow adjustment between land...

  16. Exploring heterogeneous market hypothesis using realized volatility

    Science.gov (United States)

    Chin, Wen Cheong; Isa, Zaidi; Mohd Nor, Abu Hassan Shaari

    2013-04-01

    This study investigates the heterogeneous market hypothesis using high-frequency data. The cascaded heterogeneous trading activities with different time durations are modelled by the heterogeneous autoregressive framework. The empirical study indicated the presence of long memory behaviour and predictability elements in the financial time series, which supported the heterogeneous market hypothesis. Besides the common sum-of-squares intraday realized volatility, we also advocated two power-variation realized volatilities for forecast evaluation and risk measurement, in order to overcome the possible abrupt jumps during the credit crisis. Finally, the empirical results are used in determining market risk using the value-at-risk approach. The findings of this study have implications for informational market efficiency analysis, portfolio strategies and risk management.
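
    The heterogeneous autoregressive framework referred to above is usually written as the HAR-RV regression of Corsi, in which next-day realized volatility is regressed on the daily value and the weekly (5-day) and monthly (22-day) averages of past realized volatility. The sketch below estimates that baseline specification by OLS under assumed variable names; the power-variation realized volatilities advocated in the paper are not included.

```python
import numpy as np

def har_design(rv):
    """Daily, weekly (5-day) and monthly (22-day) HAR regressors from a realized-volatility series."""
    rv = np.asarray(rv, dtype=float)
    t0 = 22                                       # need a full month of history
    daily = rv[t0 - 1:-1]
    weekly = np.array([rv[t - 5:t].mean() for t in range(t0, len(rv))])
    monthly = np.array([rv[t - 22:t].mean() for t in range(t0, len(rv))])
    X = np.column_stack([np.ones_like(daily), daily, weekly, monthly])
    y = rv[t0:]
    return X, y

def fit_har(rv):
    """OLS estimates of RV_t = b0 + b_d RV_{t-1} + b_w RV_{t-1}^{(5)} + b_m RV_{t-1}^{(22)} + e_t."""
    X, y = har_design(rv)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta                                    # [b0, b_daily, b_weekly, b_monthly]
```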

  17. A Molecular–Structure Hypothesis

    OpenAIRE

    Jan C. A. Boeyens

    2010-01-01

    The self-similar symmetry that occurs between atomic nuclei, biological growth structures, the solar system, globular clusters and spiral galaxies suggests that a similar pattern should characterize atomic and molecular structures. This possibility is explored in terms of the current molecular structure-hypothesis and its extension into four-dimensional space-time. It is concluded that a quantum molecule only has structure in four dimensions and that classical (Newtonian) structure, which occ...

  18. Cortical sensorimotor integration: a hypothesis.

    Science.gov (United States)

    Batuev, A S

    1989-01-01

    A hypothesis is proposed that the neocortex is constructed from structural neuronal modules (columns and rings). Each module is considered a unit of cortical sensorimotor integration. Complex functional relationships between modules can be arranged with the participation of intracortical inhibition. Highly pronounced neocortical plasticity ensures the continuous formation of various dominating operative constellations comprising stable neuronal modules, whose composition and distribution are determined by the dominant motivation and the central motor program.

  19. Model-independent plot of dynamic PET data facilitates data interpretation and model selection.

    Science.gov (United States)

    Munk, Ole Lajord

    2012-02-21

    When testing new PET radiotracers or new applications of existing tracers, the blood-tissue exchange and the metabolism need to be examined. However, conventional plots of measured time-activity curves from dynamic PET do not reveal the inherent kinetic information. A novel model-independent volume-influx plot (vi-plot) was developed and validated. The new vi-plot shows the time course of the instantaneous distribution volume and the instantaneous influx rate. The vi-plot visualises physiological information that facilitates model selection, and it reveals when a quasi-steady state is reached, which is a prerequisite for the use of the graphical analyses of Logan and Gjedde-Patlak. Both axes of the vi-plot have a direct physiological interpretation, and the plot yields kinetic parameters in close agreement with estimates obtained by non-linear kinetic modelling. The vi-plot is equally useful for analyses of PET data based on a plasma input function or a reference region input function. The vi-plot is a model-independent and informative plot for data exploration that facilitates the selection of an appropriate method for data analysis.

  20. Hypothesis Formation, Paradigms, and Openness

    Directory of Open Access Journals (Sweden)

    Conrad P. Pritscher

    2008-01-01

    A part of hypothesis formation, while necessary for scientific investigation, is beyond direct observation. Powerful hypothesis formation is more than logical and is facilitated by mind-opening. As Percy Bridgman, Nobel laureate, said, science is: "Nothing more than doing one's damnedest with one's mind, no holds barred." This paper suggests that more open schooling helps generate more open hypothesizing, which helps one do one's damnedest with one's mind. It is hypothesized that a more open process of hypothesis formation may help schools and society forge new ways of living and learning so that more people more often can do their damnedest with their minds. This writing does not offer a new paradigm but rather attempts to elaborate on the notion that new paradigms are difficult to form without openness to what was previously quasi-unthinkable. More on these topics and issues is included in the author's Reopening Einstein's Thought: About What Can't Be Learned From Textbooks, to be published by Sense Publishers in June 2008.