WorldWideScience

Sample records for model selection hypothesis

  1. Hierarchical models in ecology: confidence intervals, hypothesis testing, and model selection using data cloning.

    Science.gov (United States)

    Ponciano, José Miguel; Taper, Mark L; Dennis, Brian; Lele, Subhash R

    2009-02-01

    Hierarchical statistical models are increasingly being used to describe complex ecological processes. The data cloning (DC) method is a new general technique that uses Markov chain Monte Carlo (MCMC) algorithms to compute maximum likelihood (ML) estimates along with their asymptotic variance estimates for hierarchical models. Despite its generality, the method has two inferential limitations. First, it only provides Wald-type confidence intervals, known to be inaccurate in small samples. Second, it only yields ML parameter estimates, but not the maximized likelihood values used for profile likelihood intervals, likelihood ratio hypothesis tests, and information-theoretic model selection. Here we describe how to overcome these inferential limitations with a computationally efficient method for calculating likelihood ratios via data cloning. The ability to calculate likelihood ratios allows one to do hypothesis tests, construct accurate confidence intervals and undertake information-based model selection with hierarchical models in a frequentist context. To demonstrate the use of these tools with complex ecological models, we reanalyze part of Gause's classic Paramecium data with state-space population models containing both environmental noise and sampling error. The analysis results include improved confidence intervals for parameters, a hypothesis test of laboratory replication, and a comparison of the Beverton-Holt and the Ricker growth forms based on a model selection index.
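
    The data-cloning recipe itself is compact: replicate the data K times, sample the resulting "cloned" posterior with MCMC, and read the ML estimate off the posterior mean and the asymptotic variance off K times the posterior variance; as K grows the cloned posterior collapses onto the MLE. The sketch below illustrates this for a toy normal-mean model with a random-walk Metropolis sampler; the model, flat prior and tuning constants are illustrative assumptions, not the authors' hierarchical state-space setup.

        import numpy as np

        rng = np.random.default_rng(0)
        y = rng.normal(loc=2.0, scale=1.0, size=25)   # toy data, sigma = 1 known
        K = 50                                        # number of clones

        def log_post_cloned(mu):
            # flat prior, so the cloned log-posterior is K * log L(mu | y)
            return K * (-0.5 * np.sum((y - mu) ** 2))

        # random-walk Metropolis on the cloned posterior
        draws, mu = [], 0.0
        for _ in range(20000):
            prop = mu + 0.05 * rng.normal()
            if np.log(rng.uniform()) < log_post_cloned(prop) - log_post_cloned(mu):
                mu = prop
            draws.append(mu)
        draws = np.array(draws[5000:])

        mle = draws.mean()           # converges to the ML estimate (here: mean of y)
        asy_var = K * draws.var()    # converges to the ML asymptotic variance sigma^2/n
        print(mle, y.mean(), asy_var, 1 / len(y))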

  2. Congruence analysis of geodetic networks - hypothesis tests versus model selection by information criteria

    Science.gov (United States)

    Lehmann, Rüdiger; Lösler, Michael

    2017-12-01

    Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred, and it is opposed to a number of alternative models that stipulate different deformation patterns. A common way to select the right model is the use of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution for the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that both approaches work but behave differently, sometimes even producing different results. Hypothesis tests are well established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as AIC.
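
    The contrast between the two decision rules is easy to reproduce on a miniature example: simulate repeated height observations of a few points in two epochs, then compare the null model (stable heights) against the alternative (per-epoch heights) once with an F-test and once with AIC. Everything below (network size, noise level, the 20 mm displacement of point 2) is an invented toy, not the paper's levelling network or the Delft data set.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        m, sigma = 4, 0.005                       # 4 repeated height obs per point/epoch
        h1 = np.array([10.0, 12.0, 8.0])          # epoch-1 heights
        h2 = h1 + np.array([0.0, 0.02, 0.0])      # point 2 rises by 20 mm in epoch 2

        obs1 = h1[:, None] + sigma * rng.normal(size=(3, m))
        obs2 = h2[:, None] + sigma * rng.normal(size=(3, m))
        n = obs1.size + obs2.size                 # 24 observations

        both = np.concatenate([obs1, obs2], axis=1)
        rss0 = ((both - both.mean(axis=1, keepdims=True)) ** 2).sum()   # H0: stable, k0 = 3
        rss1 = (((obs1 - obs1.mean(1, keepdims=True)) ** 2).sum()
                + ((obs2 - obs2.mean(1, keepdims=True)) ** 2).sum())    # HA: moved, k1 = 6

        F = ((rss0 - rss1) / 3) / (rss1 / (n - 6))
        p = 1 - stats.f.cdf(F, 3, n - 6)

        aic = lambda rss, k: n * np.log(rss / n) + 2 * k   # Gaussian AIC, constants dropped
        print(f"F={F:.1f}, p={p:.4f}; AIC0={aic(rss0, 3):.1f}, AIC1={aic(rss1, 6):.1f}")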

  3. Model Selection and Hypothesis Testing for Large-Scale Network Models with Overlapping Groups

    Directory of Open Access Journals (Sweden)

    Tiago P. Peixoto

    2015-03-01

    The effort to understand network systems in increasing detail has resulted in a diversity of methods designed to extract their large-scale structure from data. Unfortunately, many of these methods yield diverging descriptions of the same network, making both the comparison and understanding of their results a difficult challenge. A possible solution to this outstanding issue is to shift the focus away from ad hoc methods and move towards more principled approaches based on statistical inference of generative models. As a result, we face instead the more well-defined task of selecting between competing generative processes, which can be done under a unified probabilistic framework. Here, we consider the comparison between a variety of generative models, including features such as degree correction, where nodes with arbitrary degrees can belong to the same group, and community overlap, where nodes are allowed to belong to more than one group. Because such model variants possess an increasing number of parameters, they become prone to overfitting. In this work, we present a method of model selection based on the minimum description length criterion and posterior odds ratios that is capable of fully accounting for the increased degrees of freedom of the larger models and selects the best one according to the statistical evidence available in the data. In applying this method to many empirical unweighted networks from different fields, we observe that community overlap is very often not supported by statistical evidence and is selected as a better model only for a minority of them. On the other hand, we find that degree correction tends to be almost universally favored by the available data, implying that intrinsic node properties (as opposed to group properties) are often an essential ingredient of network formation.
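
    The minimum-description-length logic can be demonstrated with a deliberately crude two-level code: the bits needed to transmit the graph under a model plus the bits needed to transmit the model's parameters (and, for the block model, the node partition). The sketch below compares an Erdős–Rényi code against a two-block code on a planted network; it is a cartoon of the principle, not the nested, degree-corrected description length used in the paper.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(2)
        n, B = 40, 2
        groups = np.repeat([0, 1], n // 2)
        P = np.array([[0.30, 0.03], [0.03, 0.30]])        # assortative planted structure
        A = np.zeros((n, n), int)
        for i, j in combinations(range(n), 2):
            A[i, j] = A[j, i] = rng.uniform() < P[groups[i], groups[j]]

        def bits_bernoulli(edges, pairs):
            # codelength (bits) of the edge indicators under the MLE rate,
            # plus 0.5*log2(pairs) bits to transmit the rate itself
            p = edges / pairs
            ll = 0.0 if p in (0, 1) else edges * np.log2(p) + (pairs - edges) * np.log2(1 - p)
            return -ll + 0.5 * np.log2(pairs)

        pairs = n * (n - 1) / 2
        dl_er = bits_bernoulli(A[np.triu_indices(n, 1)].sum(), pairs)

        # two-block model: per-block-pair rates, plus n*log2(B) bits for the partition
        dl_sbm = n * np.log2(B)
        for r in range(B):
            for s in range(r, B):
                mr, ms = groups == r, groups == s
                sub = A[np.ix_(mr, ms)]
                if r == s:
                    e, pr = np.triu(sub, 1).sum(), mr.sum() * (mr.sum() - 1) / 2
                else:
                    e, pr = sub.sum(), mr.sum() * ms.sum()
                dl_sbm += bits_bernoulli(e, pr)

        print(f"DL(ER) = {dl_er:.0f} bits, DL(2-block) = {dl_sbm:.0f} bits")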

  4. Socioeconomic inequality in health in the British household panel: Tests of the social causation, health selection and the indirect selection hypothesis using dynamic fixed effects panel models.

    Science.gov (United States)

    Foverskov, Else; Holm, Anders

    2016-02-01

    Despite social inequality in health being well documented, it is still debated which causal mechanism best explains the negative association between socioeconomic position (SEP) and health. This paper is concerned with testing the explanatory power of three widely proposed causal explanations for social inequality in health in adulthood: the social causation hypothesis (SEP determines health), the health selection hypothesis (health determines SEP) and the indirect selection hypothesis (no causal relationship). We employ dynamic data on respondents aged 30 to 60 from the last nine waves of the British Household Panel Survey. Household income and location on the Cambridge Scale are included as measures of different dimensions of SEP, and health is measured as a latent factor score. The causal hypotheses are tested using a time-based Granger approach by estimating dynamic fixed effects panel regression models following the method suggested by Anderson and Hsiao. We propose using this method to estimate the associations over time since it allows one to control for all unobserved time-invariant factors and hence lower the chances of biased estimates due to unobserved heterogeneity. The results showed no support for the social causation hypothesis over a one to five year period, and limited support for the health selection hypothesis was seen only for men in relation to household income. These findings were robust across multiple sensitivity analyses. We conclude that the indirect selection hypothesis may be the most important in explaining social inequality in health in adulthood, indicating that the well-known cross-sectional correlations between health and SEP in adulthood seem not to be driven by a causal relationship, but instead by dynamics and influences in place before the respondents turn 30 years old that affect both their health and SEP onwards. The conclusion is limited in that we do not consider the effect of specific diseases and causal relationships in adulthood may be
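
    The core of the Anderson-Hsiao estimator is first-differencing away the fixed effect and then instrumenting the lagged difference with a deeper lag in levels. A minimal just-identified version for a toy dynamic panel is sketched below; the data-generating process and dimensions are invented, and the paper's latent health factor and Granger-style exclusion tests are not reproduced.

        import numpy as np

        rng = np.random.default_rng(3)
        N, T, rho, beta = 500, 9, 0.5, 0.8
        alpha = rng.normal(size=N)                     # unobserved fixed effects
        x = rng.normal(size=(N, T))
        y = np.zeros((N, T)); y[:, 0] = alpha + rng.normal(size=N)
        for t in range(1, T):
            y[:, t] = rho * y[:, t-1] + beta * x[:, t] + alpha + rng.normal(size=N)

        # first differences (kill alpha_i); instrument dy_{t-1} with the level y_{t-2}
        dy   = (y[:, 2:] - y[:, 1:-1]).ravel()         # dy_t, t = 2..T-1
        dy_l = (y[:, 1:-1] - y[:, :-2]).ravel()        # dy_{t-1}
        dx   = (x[:, 2:] - x[:, 1:-1]).ravel()         # dx_t
        z    = y[:, :-2].ravel()                       # instrument y_{t-2}

        X = np.column_stack([dy_l, dx])
        Z = np.column_stack([z, dx])                   # dx instruments itself
        theta = np.linalg.solve(Z.T @ X, Z.T @ dy)     # just-identified IV estimator
        print("rho_hat, beta_hat =", theta)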

  5. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses.

    Science.gov (United States)

    Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene

    2015-05-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.
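
    For the Bayesian evaluation of an order-restricted hypothesis, the encompassing-prior idea reduces the Bayes factor against the unconstrained model to a ratio of two proportions: how often the constraint holds under the posterior versus under the prior. The toy below assumes three normal groups with known unit variance and flat priors on the means for the posterior draws; any exchangeable prior gives a prior mass of 1/6 for a full ordering by symmetry. It is a cartoon of the approach, not the simulation design of the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        groups = [rng.normal(m, 1.0, 20) for m in (0.0, 0.3, 0.6)]   # true order m0<m1<m2

        # unconstrained posterior of each mean (flat prior, sigma = 1 known)
        post = np.array([rng.normal(g.mean(), 1 / np.sqrt(g.size), 100_000)
                         for g in groups])
        # exchangeable prior draws; the ordering has prior probability 1/6
        prior = rng.normal(0.0, 10.0, size=(3, 100_000))

        c_post = np.mean((post[0] < post[1]) & (post[1] < post[2]))
        c_prior = np.mean((prior[0] < prior[1]) & (prior[1] < prior[2]))

        print("BF(order-restricted vs unconstrained) =", c_post / c_prior)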

  6. An Extension to the Constructivist Coding Hypothesis as a Learning Model for Selective Feedback when the Base Rate Is High

    Science.gov (United States)

    Ghaffarzadegan, Navid; Stewart, Thomas R.

    2011-01-01

    Elwin, Juslin, Olsson, and Enkvist (2007) and Henriksson, Elwin, and Juslin (2010) offered the constructivist coding hypothesis to describe how people code the outcomes of their decisions when availability of feedback is conditional on the decision. They provided empirical evidence only for the 0.5 base rate condition. This commentary argues that…

  7. Confluence Model or Resource Dilution Hypothesis?

    DEFF Research Database (Denmark)

    Jæger, Mads

    Studies on family background often explain the negative effect of sibship size on educational attainment by one of two theories: the Confluence Model (CM) or the Resource Dilution Hypothesis (RDH). However, as both theories – for substantively different reasons – predict that sibship size should...... to identify a unique RDH effect on educational attainment. Using sibling data from the Wisconsin Longitudinal Study (WLS) and a random effect Instrumental Variable model, I find that in addition to having a negative effect on cognitive ability, sibship size also has a strong negative effect on educational...

  8. Multi-hypothesis modelling of snowmelt

    Science.gov (United States)

    Essery, R.

    2017-12-01

    Modules to predict the melt of snow on the ground are essential components of hydrological and climatological models. Energy to melt snow can come from shortwave or longwave radiation fluxes, turbulent heat fluxes from the atmosphere, conducted heat fluxes from the ground or advected heat in rain falling on snow. Multiple competing hypotheses (parametrizations) for these fluxes and how they are connected to model state variables are in current use. The multiple sources of energy and limited data to constrain them lead to a great deal of equifinality and difficulties in understanding model behaviour. This presentation will discuss how a multi-hypothesis snow model can be used to understand the complementary, competing and confounding influences of model structural choices, parameter uncertainty and input data errors on simulations of snowmelt.
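
    Structurally, a multi-hypothesis model is a factorial combination of interchangeable process parametrizations behind a fixed interface. The sketch below wires two invented alternatives for the turbulent flux and two for albedo into a toy melt-energy calculation and enumerates all four structures; the functional forms and constants are placeholders, not the presenter's actual parametrizations.

        import itertools
        import math

        # two competing hypotheses per process (functional forms are placeholders)
        turbulent = {
            "stability_corrected": lambda dT, wind: 5.0 * wind * dT * 0.8,
            "neutral":             lambda dT, wind: 5.0 * wind * dT,
        }
        albedo = {
            "aging_decay": lambda age: 0.85 - 0.10 * (1 - math.exp(-age / 10)),
            "constant":    lambda age: 0.75,
        }

        def melt_energy(dT, wind, sw_down, age, turb, alb):
            # toy energy balance: turbulent flux plus absorbed shortwave (W m^-2)
            return max(0.0, turb(dT, wind) + (1 - alb(age)) * sw_down)

        # enumerate every structural combination, i.e. every model hypothesis
        for (tn, tf), (an, af) in itertools.product(turbulent.items(), albedo.items()):
            e = melt_energy(dT=2.0, wind=3.0, sw_down=400.0, age=5.0, turb=tf, alb=af)
            print(f"{tn:>19} + {an:<11} -> {e:6.1f} W/m^2")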

  9. Is it better to select or to receive? Learning via active and passive hypothesis testing.

    Science.gov (United States)

    Markant, Douglas B; Gureckis, Todd M

    2014-02-01

    People can test hypotheses through either selection or reception. In a selection task, the learner actively chooses observations to test his or her beliefs, whereas in reception tasks data are passively encountered. People routinely use both forms of testing in everyday life, but the critical psychological differences between selection and reception learning remain poorly understood. One hypothesis is that selection learning improves learning performance by enhancing generic cognitive processes related to motivation, attention, and engagement. Alternatively, we suggest that differences between these 2 learning modes derive from a hypothesis-dependent sampling bias that is introduced when a person collects data to test his or her own individual hypothesis. Drawing on influential models of sequential hypothesis-testing behavior, we show that such a bias (a) can lead to the collection of data that facilitates learning compared with reception learning and (b) can be more effective than observing the selections of another person. We then report a novel experiment based on a popular category learning paradigm that compares reception and selection learning. We additionally compare selection learners to a set of "yoked" participants who viewed the exact same sequence of observations under reception conditions. The results revealed systematic differences in performance that depended on the learner's role in collecting information and the abstract structure of the problem.

  10. A hypothesis on the selective advantage for sleep

    OpenAIRE

    Tannenbaum, Emmanuel

    2005-01-01

    In this note, we present a hypothesis for the emergence of the phenomenon of sleep in organisms with sufficiently developed central nervous systems. We argue that sleep emerges because individual neurons must periodically enter a resting state and perform various "garbage collection" activities. Because the proper functioning of the central nervous system is dependent on the interconnections amongst a large collection of individual neurons, it becomes optimal, from the standpoint of the org...

  11. An efficient coding hypothesis links sparsity and selectivity of neural responses.

    Directory of Open Access Journals (Sweden)

    Florian Blättler

    To what extent are sensory responses in the brain compatible with first-order principles? The efficient coding hypothesis posits that neurons use as few spikes as possible to faithfully represent natural stimuli. However, many sparsely firing neurons in higher brain areas seem to violate this hypothesis in that they respond more to familiar stimuli than to nonfamiliar stimuli. We reconcile this discrepancy by showing that efficient sensory responses give rise to stimulus selectivity that depends on the stimulus-independent firing threshold and the balance between excitatory and inhibitory inputs. We construct a cost function that enforces minimal firing rates in model neurons by linearly punishing suprathreshold synaptic currents. By contrast, subthreshold currents are punished quadratically, which allows us to optimally reconstruct sensory inputs from elicited responses. We train synaptic currents on many renditions of a particular bird's own song (BOS) and few renditions of conspecific birds' songs (CONs). During training, model neurons develop a response selectivity with complex dependence on the firing threshold. At low thresholds, they fire densely and prefer CON and the reverse BOS (REV) over BOS. However, at high thresholds or when hyperpolarized, they fire sparsely and prefer BOS over REV and over CON. Based on this selectivity reversal, our model suggests that preference for a highly familiar stimulus corresponds to a high-threshold or strong-inhibition regime of an efficient coding strategy. Our findings apply to songbird mirror neurons, and in general, they suggest that the brain may be endowed with simple mechanisms to rapidly change selectivity of neural responses to focus sensory processing on either familiar or nonfamiliar stimuli. In summary, we find support for the efficient coding hypothesis and provide new insights into the interplay between the sparsity and selectivity of neural responses.

  12. Testing the cultural group selection hypothesis in Northern Ghana and Oaxaca.

    Science.gov (United States)

    Acedo-Carmona, Cristina; Gomila, Antoni

    2016-01-01

    We examine the cultural group selection (CGS) hypothesis in light of our fieldwork in Northern Ghana and Oaxaca, highly multi-ethnic regions. Our evidence fails to corroborate two central predictions of the hypothesis: that the cultural group is the unit of evolution, and that cultural homogenization is to be expected as the outcome of a selective process.

  13. On the causes of selection for recombination underlying the red queen hypothesis.

    Science.gov (United States)

    Salathé, Marcel; Kouyos, Roger D; Bonhoeffer, Sebastian

    2009-07-01

    The vast majority of plant and animal species reproduce sexually despite the costs associated with sexual reproduction. Genetic recombination might outweigh these costs if it helps the species escape parasite pressure by creating rare or novel genotypes, an idea known as the Red Queen hypothesis. Selection for recombination can be driven by short- and long-term effects, but the relative importance of these effects and their dependency on the parameters of an antagonistic species interaction remain unclear. We use computer simulations of a mathematical model of host-parasite coevolution to measure those effects under a wide range of parameters. We find that the real driving force underlying the Red Queen hypothesis is neither the immediate, next-generation, short-term effect nor the long-term effect but in fact a delayed short-term effect. Our results highlight the importance of differentiating clearly between immediate and delayed short-term effects when attempting to elucidate the mechanism underlying selection for recombination in the Red Queen hypothesis.

  14. Sparse Representation Based Binary Hypothesis Model for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Yidong Tang

    2016-01-01

    The sparse representation based classifier (SRC) and its kernel version (KSRC) have been employed for hyperspectral image (HSI) classification. However, the state-of-the-art SRC often aims at extended surface objects with linear mixture in smooth scenes and assumes that the number of classes is given. Considering small targets with complex backgrounds, a sparse representation based binary hypothesis (SRBBH) model is established in this paper. In this model, a query pixel is represented in two ways, namely by the background dictionary and by the union dictionary. The background dictionary is composed of samples selected from the local dual concentric window centered at the query pixel. Thus, for each pixel the classification issue becomes an adaptive multiclass classification problem, where only the number of desired classes is required. Furthermore, the kernel method is employed to improve the interclass separability. In kernel space, the coding vector is obtained by using the kernel-based orthogonal matching pursuit (KOMP) algorithm. Then the query pixel can be labeled by the characteristics of the coding vectors. Instead of directly using the reconstruction residuals, the different impacts the background dictionary and union dictionary have on reconstruction are used for validation and classification. This enhances the discrimination and hence improves the performance.
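
    The residual-comparison baseline that the SRBBH model refines can be written in a few lines: code the pixel over the background dictionary alone and over the union dictionary, then compare how well each explains it. Below is a plain (non-kernel) orthogonal matching pursuit on synthetic spectra; dictionary sizes and the sparsity level are invented for illustration.

        import numpy as np

        def omp(D, x, k):
            # plain orthogonal matching pursuit: greedy k-sparse code of x over D
            residual, idx = x.copy(), []
            for _ in range(k):
                idx.append(int(np.argmax(np.abs(D.T @ residual))))
                coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
                residual = x - D[:, idx] @ coef
            return residual

        rng = np.random.default_rng(10)
        bands = 50
        bg = rng.normal(size=(bands, 30))     # background atoms (local window samples)
        tgt = rng.normal(size=(bands, 5))     # target atoms
        bg /= np.linalg.norm(bg, axis=0)
        tgt /= np.linalg.norm(tgt, axis=0)
        union = np.hstack([bg, tgt])

        pixel = 0.8 * tgt[:, 0] + 0.2 * rng.normal(size=bands)   # target-bearing pixel

        r_bg = np.linalg.norm(omp(bg, pixel, k=3))
        r_un = np.linalg.norm(omp(union, pixel, k=3))
        # binary hypothesis: flag a target when the union dictionary explains the
        # pixel much better than the background dictionary alone
        print(f"background residual {r_bg:.3f} vs union residual {r_un:.3f}")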

  15. Interactive comparison of hypothesis tests for statistical model checking

    NARCIS (Netherlands)

    de Boer, Pieter-Tjerk; Reijsbergen, D.P.; Scheinhardt, Willem R.W.

    2015-01-01

    We present a web-based interactive comparison of hypothesis tests as are used in statistical model checking, providing users and tool developers with more insight into their characteristics. Parameters can be modified easily and their influence is visualized in real time; an integrated simulation

  16. The linear model and hypothesis a general unifying theory

    CERN Document Server

    Seber, George

    2015-01-01

    This book provides a concise and integrated overview of hypothesis testing in four important subject areas, namely linear and nonlinear models, multivariate analysis, and large sample theory. The approach used is a geometrical one based on the concept of projections and their associated idempotent matrices, thus largely avoiding the need to involve matrix ranks. It is shown that all the hypotheses encountered are either linear or asymptotically linear, and that all the underlying models used are either exactly or asymptotically linear normal models. This equivalence can be used, for example, to extend the concept of orthogonality in the analysis of variance to other models, and to show that the asymptotic equivalence of the likelihood ratio, Wald, and Score (Lagrange Multiplier) hypothesis tests generally applies.

  17. Simultaneity modeling analysis of the environmental Kuznets curve hypothesis

    International Nuclear Information System (INIS)

    Ben Youssef, Adel; Hammoudeh, Shawkat; Omri, Anis

    2016-01-01

    The environmental Kuznets curve (EKC) hypothesis has been recognized in the environmental economics literature since the 1990s. Various statistical tests have been used on time series, cross-section and panel data related to single countries and groups of countries to validate this hypothesis. In the literature, the validation has always been conducted by using a single equation. However, since both the environment and income variables are endogenous, the estimation of a single-equation model when simultaneity exists produces inconsistent and biased estimates. Therefore, we formulate simultaneous two-equation models to investigate the EKC hypothesis for fifty-six countries, using annual panel data from 1990 to 2012, with the end year determined by data availability for the panel. To make the panel data analysis more homogeneous, we investigate this issue for three income-based panels (namely, high-, middle-, and low-income panels) given several explanatory variables. Our results indicate that there exists a bidirectional causality between economic growth and pollution emissions in the overall panels. We also find that the relationship is nonlinear and has an inverted U-shape for all the considered panels. Policy implications are provided. - Highlights: • We give a new look at the validity of the EKC hypothesis. • We formulate simultaneous two-equation models to validate this hypothesis for fifty-six countries. • We find a bidirectional causality between economic growth and pollution emissions. • We also discover an inverted U-shape between environmental degradation and economic growth. • This relationship varies at different stages of economic development.
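
    The estimation problem the authors raise is the textbook simultaneity bias: when emissions and income determine each other, OLS on either equation is inconsistent, while an instrumental-variables estimator that uses the other equation's exogenous shifter is not. The linear toy below (invented coefficients, and without the quadratic income term that produces the inverted U) makes the contrast visible.

        import numpy as np

        rng = np.random.default_rng(9)
        n, a, b = 5000, 0.8, 0.3                  # invented structural coefficients
        z1, z2 = rng.normal(size=n), rng.normal(size=n)
        u1, u2 = rng.normal(size=n), rng.normal(size=n)

        # reduced form of the system  e = a*y + z1 + u1,  y = b*e + z2 + u2
        y = (b * z1 + z2 + b * u1 + u2) / (1 - a * b)
        e = a * y + z1 + u1

        def fit(lhs, rhs, instr):
            X = np.column_stack([rhs, z1]); Z = np.column_stack([instr, z1])
            return np.linalg.solve(Z.T @ X, Z.T @ lhs)[0]

        print("OLS a-hat :", fit(e, y, y))        # biased: y is endogenous
        print("2SLS a-hat:", fit(e, y, z2))       # z2 shifts y but not e directly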

  18. Human skin-color sexual dimorphism: a test of the sexual selection hypothesis.

    Science.gov (United States)

    Madrigal, Lorena; Kelly, William

    2007-03-01

    Applied to skin color, the sexual selection hypothesis proposes that male preference for light-skinned females explains the presence of light skin in areas of low solar radiation. According to this proposal, in areas of high solar radiation, natural selection for dark skin overrides the universal preference of males for light females. But in areas in which natural selection ceases to act, sexual selection becomes more important, and causes human populations to become light-skinned, and females to be lighter than males. The sexual selection hypothesis proposes that human sexual dimorphism of skin color should be positively correlated with distance from the equator. We tested the prediction that sexual dimorphism should increase with increasing latitude, using adult-only data sets derived from measurements with standard reflectance spectrophotometric devices. Our analysis failed to support the prediction of a positive correlation between increasing distance from the equator and increased sexual dimorphism. We found no evidence in support of the sexual selection hypothesis. (c) 2006 Wiley-Liss, Inc.

  19. Habitat heterogeneity hypothesis and edge effects in model metacommunities.

    Science.gov (United States)

    Hamm, Michaela; Drossel, Barbara

    2017-08-07

    Spatial heterogeneity is an inherent property of any living environment and is expected to favour biodiversity due to a broader niche space. Furthermore, edges between different habitats can provide additional possibilities for species coexistence. Using computer simulations, this study examines metacommunities consisting of several trophic levels in heterogeneous environments in order to explore the above hypotheses on a community level. We model heterogeneous landscapes by using two different sized resource pools and evaluate the combined effect of dispersal and heterogeneity on local and regional species diversity. This diversity is obtained by running population dynamics and evaluating the robustness (i.e., the fraction of surviving species). The main results for regional robustness are in agreement with the habitat heterogeneity hypothesis, as the largest robustness is found in heterogeneous systems with intermediate dispersal rates. This robustness is larger than in homogeneous systems with the same total amount of resources. We study the edge effect by arranging the two types of resources in two homogeneous blocks. Different edge responses in diversity are observed, depending on dispersal strength. Local robustness is highest for edge habitats that contain the smaller amount of resource in combination with intermediate dispersal. The results show that dispersal is relevant to correctly identify edge responses on community level. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. The evolution of plumage polymorphism in birds of prey and owls: the apostatic selection hypothesis revisited.

    Science.gov (United States)

    Fowlie, M K; Krüger, O

    2003-07-01

    Co-evolution between phenotypic variation and other traits is of paramount importance for our understanding of the origin and maintenance of polymorphism in natural populations. We tested whether the evolution of plumage polymorphism in birds of prey and owls was supported by the apostatic selection hypothesis using ecological and life-history variables in birds of prey and owls and performing both cross taxa and independent contrast analyses. For both bird groups, we did not find any support for the apostatic selection hypothesis being the maintaining factor for the polymorphism: plumage polymorphism was not more common in taxa hunting avian or mammalian prey, nor in migratory species. In contrast, we found that polymorphism was related to variables such as sexual plumage dimorphism, population size and range size, as well as breeding altitude and breeding latitude. These results imply that the most likely evolutionary correlate of polymorphism in both bird groups is population size, different plumage morphs might simply arise in larger populations most likely because of a higher probability of mutations and then be maintained by sexual selection.

  1. Sexual orientation in men and avuncularity in Japan: implications for the kin selection hypothesis.

    Science.gov (United States)

    Vasey, Paul L; VanderLaan, Doug P

    2012-02-01

    The kin selection hypothesis for male androphilia posits that genes for male androphilia can be maintained in the population if the fitness costs of not reproducing directly are offset by enhancing inclusive fitness. In theory, androphilic males can increase their inclusive fitness by directing altruistic behavior toward kin, which, in turn, allows kin to increase their reproductive success. Previous research conducted in Western countries (U.S., UK) has failed to find any support for this hypothesis. In contrast, research conducted in Samoa has provided repeated support for it. In light of these cross-cultural differences, we hypothesized that the development of elevated avuncular (i.e., altruistic uncle-like) tendencies in androphilic males may be contingent on a relatively collectivistic cultural context. To test this hypothesis, we compared data on the avuncular tendencies and altruistic tendencies toward non-kin children of childless androphilic and gynephilic men in Japan, a culture that is known to be relatively collectivistic. The results of this study furnished no evidence that androphilic Japanese men exhibited elevated avuncular tendencies compared to their gynephilic counterparts. Moreover, there was no evidence that androphilic men's avuncular tendencies were more optimally designed (i.e., were more dissociated from their altruistic tendencies toward non-kin children) compared to gynephilic men. If an adaptively designed avuncular male androphilic phenotype exists and its development is contingent on a particular social environment, then the research presented here suggests that a collectivistic cultural context is insufficient, in and of itself, for the expression of such a phenotype.

  2. Reproductive Contributions of Cardinals Are Consistent with a Hypothesis of Relaxed Selection in Urban Landscapes

    Directory of Open Access Journals (Sweden)

    Amanda D. Rodewald

    2017-07-01

    Human activities are leading to rapid environmental change globally and may affect the eco-evolutionary dynamics of species inhabiting human-dominated landscapes. Theory suggests that increases in environmental heterogeneity should promote variation in reproductive performance among individuals. At the same time, we know that novel environments, such as our urbanizing study system, may represent more benign or predictable environments due to resource subsidies and ecological changes. We tested the hypothesis that reduced environmental heterogeneity and enhanced resource availability in cities relax selective pressures on birds by testing if urban females vary less than rural females in their demographic contributions to local populations. From 2004 to 2014, we monitored local population densities and annual reproductive output of 470 female Northern Cardinals (Cardinalis cardinalis) breeding at 14 forested sites distributed across a rural-to-urban landscape gradient in Ohio, USA. Reproductive contribution was measured as the difference between individual and site-averaged annual reproductive output across all nesting attempts, divided by the annual density at each site. We show that among-individual variation in reproductive contribution to the next year's population declined with increasing urbanization, despite similar variability in body condition across the rural-urban gradient. Thus, female cardinals that bred in urban habitats within our study area were more similar in their contribution to the next generation than rural breeders, where a pattern of winners and losers was more evident. Within-individual variation in annual reproductive contribution also declined with increasing urbanization, indicating that performance of females was also more consistent among years in urban than rural landscapes. These findings are consistent with the hypothesis that urbanized environments offer more homogeneous or predictable conditions that may buffer

  3. Ethnic variability in adiposity and cardiovascular risk: the variable disease selection hypothesis.

    Science.gov (United States)

    Wells, Jonathan C K

    2009-02-01

    Evidence increasingly suggests that ethnic differences in cardiovascular risk are partly mediated by adipose tissue biology, which refers to the regional distribution of adipose tissue and its differential metabolic activity. This paper proposes a novel evolutionary hypothesis for ethnic genetic variability in adipose tissue biology. Whereas medical interest focuses on the harmful effect of excess fat, the value of adipose tissue is greatest during chronic energy insufficiency. Following Neel's influential paper on the thrifty genotype, proposed to have been favoured by exposure to cycles of feast and famine, much effort has been devoted to searching for genetic markers of 'thrifty metabolism'. However, whether famine-induced starvation was the primary selective pressure on adipose tissue biology has been questioned, while the notion that fat primarily represents a buffer against starvation appears inconsistent with historical records of mortality during famines. This paper reviews evidence for the role played by adipose tissue in immune function and proposes that adipose tissue biology responds to selective pressures acting through infectious disease. Different diseases activate the immune system in different ways and induce different metabolic costs. It is hypothesized that exposure to different infectious disease burdens has favoured ethnic genetic variability in the anatomical location of, and metabolic profile of, adipose tissue depots.

  4. Red Cell Genetic Markers in Malarial Susceptibility and Selective Advantage Hypothesis

    Directory of Open Access Journals (Sweden)

    RS Balgir

    2014-02-01

    Malaria is still a serious public health challenge in many parts of the world, including India. Human genetic susceptibility to malaria varies from individual to individual depending upon the genetic constitution, and from region to region based on geo-ecological and climatic conditions. In the present study, 334 random intravenous blood samples from unrelated adult individuals belonging to Mongoloid ethnic stock were taken, after informed consent, from the endemic localities of Arunachal Pradesh, Assam and Nagaland to examine the relationship between abnormal hemoglobin, G6PD enzyme deficiency, and susceptibility to malaria. Abnormal hemoglobin E and G6PD enzyme deficiency seem to interact with the malarial parasite in such a way that they probably provide decreased susceptibility, an inhibitory effect, or increased resistance. Genetic alterations in the human genome are maintained in specific populations by natural selection to protect the host against malarial infection. These findings are consistent with studies that support the notion of a selective genetic advantage hypothesis against malaria infection.

  5. Conceptualizing the Autism Spectrum in Terms of Natural Selection and Behavioral Ecology: The Solitary Forager Hypothesis

    Directory of Open Access Journals (Sweden)

    Jared Edward Reser

    2011-04-01

    This article reviews etiological and comparative evidence supporting the hypothesis that some genes associated with the autism spectrum were naturally selected and represent the adaptive benefits of being cognitively suited for solitary foraging. People on the autism spectrum are conceptualized here as ecologically competent individuals who could have been adept at learning and implementing hunting and gathering skills in the ancestral environment. Upon independence from their mothers, individuals on the autism spectrum may have been psychologically predisposed toward a different life-history strategy, common among mammals and even some primates, to hunt and gather primarily on their own. Many of the behavioral and cognitive tendencies that autistic individuals exhibit are viewed here as adaptations that would have complemented a solitary lifestyle. For example, the obsessive, repetitive and systemizing tendencies in autism, which can be mistakenly applied toward activities such as block stacking today, may have been focused by hunger and thirst toward successful food procurement in the ancestral past. Both solitary mammals and autistic individuals are low on measures of gregariousness, socialization, direct gazing, eye contact, facial expression, facial recognition, emotional engagement, affiliative need and other social behaviors. The evolution of the neurological tendencies in solitary species that predispose them toward being introverted and reclusive may hold important clues for the evolution of the autism spectrum and the natural selection of autism genes. Solitary animals are thought to eschew unnecessary social contact as part of a foraging strategy often due to scarcity and wide dispersal of food in their native environments. It is thought that the human ancestral environment was often nutritionally sparse as well, and this may have driven human parties to periodically disband. Inconsistencies in group size must have led to

  6. A Dual-Process Discrete-Time Survival Analysis Model: Application to the Gateway Drug Hypothesis

    Science.gov (United States)

    Malone, Patrick S.; Lamis, Dorian A.; Masyn, Katherine E.; Northrup, Thomas F.

    2010-01-01

    The gateway drug model is a popular conceptualization of a progression most substance users are hypothesized to follow as they try different legal and illegal drugs. Most forms of the gateway hypothesis are that "softer" drugs lead to "harder," illicit drugs. However, the gateway hypothesis has been notably difficult to…

  7. Analysis of Positive Selection at Single Nucleotide Polymorphisms Associated with Body Mass Index Does Not Support the "Thrifty Gene" Hypothesis.

    Science.gov (United States)

    Wang, Guanlin; Speakman, John R

    2016-10-11

    The "thrifty gene hypothesis" suggests genetic susceptibility to obesity arises because of positive selection for alleles that favored fat deposition and survival during famines. We used public domain data to locate signatures of positive selection based on derived allele frequency, genetic diversity, long haplotypes, and differences between populations at SNPs identified in genome-wide association studies (GWASs) for BMI. We used SNPs near the lactase (LCT), SLC24A5, and SLC45A2 genes as positive controls and 120 randomly selected SNPs as negative controls. We found evidence for positive selection (p positive selection for the protective allele (i.e., for leanness). The widespread absence of signatures of positive selection, combined with selection favoring leanness at some alleles, does not support the suggestion that obesity provided a selective advantage to survive famines, or any other selective advantage. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. A Hypothesis-based Approach to Hydrological Model Development: The Case for Flexible Model Structures

    Science.gov (United States)

    Clark, M. P.; Kavetski, D.; Fenicia, F.

    2010-12-01

    Ambiguities in the appropriate representation of environmental processes have manifested themselves in a plethora of hydrological models, differing in almost every aspect of their conceptualization and implementation. This current overabundance of models is symptomatic of insufficient scientific understanding of environmental dynamics at the catchment scale, which can be attributed to difficulties in quantifying the impact of sub-catchment heterogeneities on the catchment’s hydrological response. In this presentation we advocate the use of flexible modeling frameworks during the development and subsequent refinement of catchment-scale hydrological models. We argue that the ability of flexible modeling frameworks to decompose a model into its constituent hypotheses, necessarily combined with incisive diagnostics to scrutinize these individual hypotheses against observed data, provides hydrologists with a very powerful and systematic approach for improving process representation in models. Flexible models also support a broader coverage of the model hypothesis space and hence facilitate a more comprehensive quantification of the predictive uncertainty associated with system and component non-identifiabilities that plague many model analyses. As part of our discussion of the advantages and limitations of flexible model frameworks, we critically review major contemporary challenges in hydrological hypothesis-testing, including exploiting data to investigate the fidelity of alternative process representations, accounting for model structure ambiguities arising from uncertainty in environmental data, and the challenge of understanding regional differences in dominant hydrological processes. We assess recent progress in these research directions, and how such progress can be exploited within flexible model applications to advance the community’s quest for more scientifically defensible catchment-scale hydrological models.

  9. A mathematical model of weight loss under total starvation: evidence against the thrifty-gene hypothesis

    Directory of Open Access Journals (Sweden)

    John R. Speakman

    2013-01-01

    The thrifty-gene hypothesis (TGH) posits that the modern genetic predisposition to obesity stems from a historical past where famine selected for genes that promote efficient fat deposition. It has been previously argued that such a scenario is unfeasible because under such strong selection any gene favouring fat deposition would rapidly move to fixation. Hence, we should all be predisposed to obesity: which we are not. The genetic architecture of obesity that has been revealed by genome-wide association studies (GWAS), however, calls into question such an argument. Obesity is caused by mutations in many hundreds (maybe thousands) of genes, each with a very minor, independent and additive impact. Selection on such genes would probably be very weak because the individual advantages they would confer would be very small. Hence, the genetic architecture of the epidemic may indeed be compatible with, and hence support, the TGH. To evaluate whether this is correct, it is necessary to know the likely effects of the identified GWAS alleles on survival during starvation. This would allow definition of their advantage in famine conditions, and hence the likely selection pressure for such alleles to have spread over the time course of human evolution. We constructed a mathematical model of weight loss under total starvation using the established principles of energy balance. Using the model, we found that fatter individuals would indeed survive longer and, at a given body weight, females would survive longer than males, when totally starved. An allele causing deposition of an extra 80 g of fat would result in an extension of life under total starvation by about 1.1–1.6% in an individual with 10 kg of fat and by 0.25–0.27% in an individual carrying 32 kg of fat. A mutation causing a per allele effect of 0.25% would become completely fixed in a population with an effective size of 5 million individuals in 6000 selection events. Because there have probably been about 24
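
    The logic of the energy-balance argument can be checked with a few lines of integration: expenditure scales with body mass, fat and (a little) lean tissue are drawn down to pay it, and survival is the time until the fat store is gone. All constants below are round-number assumptions for illustration, not the paper's fitted model.

        # crude energy-balance integration: expenditure scales with mass
        # (Kleiber-like exponent); fat supplies 39.5 kJ/g, hydrated lean 7.7 kJ/g
        def days_survived(fat_g, lean_g=55_000.0, dt=0.05):
            t = 0.0
            while fat_g > 0:
                mass_kg = (fat_g + lean_g) / 1000
                expend_kj = 400.0 * mass_kg ** 0.75 * dt   # expenditure over the step
                fat_g -= 0.9 * expend_kj / 39.5            # 90% of the cost from fat
                lean_g -= 0.1 * expend_kj / 7.7            # 10% from lean tissue
                t += dt
            return t

        base = days_survived(10_000.0)        # 10 kg of fat
        plus = days_survived(10_080.0)        # +80 g of fat ("one thrifty allele")
        print(f"{base:.1f} vs {plus:.1f} days "
              f"({100 * (plus - base) / base:.2f}% longer)")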

  10. A quantitative test of the size efficiency hypothesis by means of a physiologically structured model

    NARCIS (Netherlands)

    Hülsmann, S.; Rinke, K.; Mooij, W.M.

    2005-01-01

    According to the size-efficiency hypothesis (SEH) larger bodied cladocerans are better competitors for food than small bodied species. In environments with fish, however, the higher losses of the large bodied species due to size-selective predation may shift the balance in favor of the small bodied

  11. INTENSITY OF USE HYPOTHESIS: ANALYSIS OF SELECTED ASIAN COUNTRIES WITH STRUCTURAL DIFFERENCES

    Directory of Open Access Journals (Sweden)

    Ismail Oladimeji Soile

    2013-01-01

    Several efforts have been made to estimate the relationship between intensity of metal use and per capita income at different levels, with results supporting the hypothesis that metal consumption per unit of GDP initially increases, peaks, and later declines with rising income per head. This paper estimates the intensity of copper use curves for three Asian countries with different economic structures to show that the I-U hypothesis significantly underplays the influence of economic structure and other technological innovations through its exclusive emphasis on per capita income. The results are in general conformity with the notion that the intensity of material use (I-U) is higher for industrial and very low for service-based economies. Though the finding is mixed in the agrarian country considered, the paper suggests the need for further research to corroborate this outcome.

  12. A pilot study for the analysis of dream reports using Maslow's need categories: an extension to the emotional selection hypothesis.

    Science.gov (United States)

    Coutts, Richard

    2010-10-01

    The emotional selection hypothesis describes a cyclical process that uses dreams to modify and test select mental schemas. An extension is proposed that further characterizes these schemas as facilitators of human need satisfaction. A pilot study was conducted in which this hypothesis was tested by assigning 100 dream reports (10 randomly selected from 10 dream logs at an online web site) to one or more categories within Maslow's hierarchy of needs. A "match" was declared when at least two of three judges agreed both for category and for whether the identified need was satisfied or thwarted in the dream narrative. The interjudge reliability of the judged needs was good (92% of the reports contained at least one match). The number of needs judged as thwarted did not differ significantly from the number judged as satisfied (48 vs. 52%, respectively). The six "higher" needs (belongingness, esteem, cognitive, aesthetic, self-actualization, and transcendence) were scored significantly more frequently (81%) than were the two lowest or "basic" needs (physiological and safety, 19%). Basic needs were also more likely to be judged as thwarted, while higher needs were more likely to be judged as satisfied. These findings are discussed in the context of Maslow's hierarchy of needs as a framework for investigating theories of dream function, including the emotional selection hypothesis and other contemporary dream theories.

  13. Testing the Granger noncausality hypothesis in stationary nonlinear models of unknown functional form

    DEFF Research Database (Denmark)

    Péguin-Feissolle, Anne; Strikholm, Birgit; Teräsvirta, Timo

    In this paper we propose a general method for testing the Granger noncausality hypothesis in stationary nonlinear models of unknown functional form. These tests are based on a Taylor expansion of the nonlinear model around a given point in the sample space. We study the performance of our tests...

  14. Robust Means Modeling: An Alternative for Hypothesis Testing of Independent Means under Variance Heterogeneity and Nonnormality

    Science.gov (United States)

    Fan, Weihua; Hancock, Gregory R.

    2012-01-01

    This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…

  15. An Efficient Implementation of Track-Oriented Multiple Hypothesis Tracker Using Graphical Model Approaches

    Directory of Open Access Journals (Sweden)

    Jinping Sun

    2017-01-01

    The multiple hypothesis tracker (MHT) is currently the preferred method for addressing the data association problem in multitarget tracking (MTT) applications. MHT seeks the most likely global hypothesis by enumerating all possible associations over time, which is equal to calculating the maximum a posteriori (MAP) estimate over the report data. Despite being a well-studied method, MHT remains challenging, mostly because of the computational complexity of data association. In this paper, we describe an efficient method for solving the data association problem using graphical model approaches. The proposed method uses the graph representation to model the global hypothesis formation and subsequently applies an efficient message passing algorithm to obtain the MAP solution. Specifically, the graph representation of the data association problem is formulated as a maximum weight independent set problem (MWISP), which translates the best global hypothesis formation into finding the maximum weight independent set on the graph. Then, a max-product belief propagation (MPBP) inference algorithm is applied to seek the most likely global hypotheses with the purpose of avoiding a brute-force hypothesis enumeration procedure. The simulation results show that the proposed MPBP-MHT method can achieve better tracking performance than other algorithms in challenging tracking situations.
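
    The MWISP reformulation is easy to state in miniature: each node is a track hypothesis with a weight, an edge joins two hypotheses that share a report, and the best global hypothesis is the heaviest set of mutually compatible nodes. The brute-force search below works for a handful of hypotheses and marks where a MWISP solver or the paper's max-product belief propagation would take over; the weights and report sets are invented.

        from itertools import combinations

        # track hypotheses: (weight, set of report ids used); two hypotheses
        # conflict when they share a report (toy numbers, not a real scenario)
        hyps = [(4.1, {1, 2}), (3.0, {2, 3}), (2.5, {3}), (1.7, {1}), (3.6, {4, 5})]

        def compatible(sel):
            used = set()
            for _, reports in sel:
                if used & reports:
                    return False
                used |= reports
            return True

        # brute-force maximum-weight independent set; real MHT implementations
        # replace this enumeration with MWISP solvers or message passing
        best = max(
            (s for r in range(len(hyps) + 1)
               for s in combinations(hyps, r) if compatible(s)),
            key=lambda s: sum(w for w, _ in s),
        )
        print("global hypothesis:", [r for _, r in best],
              "weight:", sum(w for w, _ in best))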

  16. Sexual selection on land snail shell ornamentation: a hypothesis that may explain shell diversity

    NARCIS (Netherlands)

    Schilthuizen, M.

    2003-01-01

    Background: Many groups of land snails show great interspecific diversity in shell ornamentation, which may include spines on the shell and flanges on the aperture. Such structures have been explained as camouflage or defence, but the possibility that they might be under sexual selection has not

  17. Cost of reproduction in the Queensland fruit fly: Y-model versus lethal protein hypothesis.

    Science.gov (United States)

    Fanson, Benjamin G; Fanson, Kerry V; Taylor, Phillip W

    2012-12-22

    The trade-off between lifespan and reproduction is commonly explained by differential allocation of limited resources. Recent research has shown that the ratio of protein to carbohydrate (P : C) of a fly's diet mediates the lifespan-reproduction trade-off, with higher P : C diets increasing egg production but decreasing lifespan. To test whether this P : C effect is because of changing allocation strategies (Y-model hypothesis) or detrimental effects of protein ingestion on lifespan (lethal protein hypothesis), we measured lifespan and egg production in Queensland fruit flies varying in reproductive status (mated, virgin and sterilized females, virgin males) that were fed one of 18 diets varying in protein and carbohydrate amounts. The Y-model predicts that for sterilized females and for males, which require little protein for reproduction, there will be no effect of P : C ratio on lifespan; the lethal protein hypothesis predicts that the effect of P : C ratio should be similar in all groups. In support of the lethal protein hypothesis, and counter to the Y-model, the P : C ratio of the ingested diets had similar effects for all groups. We conclude that the trade-off between lifespan and reproduction is mediated by the detrimental side-effects of protein ingestion on lifespan.

  18. Model selection and fitting

    International Nuclear Information System (INIS)

    Martin Llorente, F.

    1990-01-01

    Models of atmospheric pollutant dispersion are based on mathematical algorithms that describe the transport, diffusion, elimination and chemical reactions of atmospheric contaminants. These models operate on contaminant emission data and produce an estimate of air quality in the area. Such models can be applied to several aspects of atmospheric contamination

  19. Modeling Natural Selection

    Science.gov (United States)

    Bogiages, Christopher A.; Lotter, Christine

    2011-01-01

    In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…

  20. Selected System Models

    Science.gov (United States)

    Schmidt-Eisenlohr, F.; Puñal, O.; Klagges, K.; Kirsche, M.

    Apart from the general issue of modeling the channel, the PHY and the MAC of wireless networks, there are specific modeling assumptions that are considered for different systems. In this chapter we consider three specific wireless standards and highlight modeling options for them. These are IEEE 802.11 (as an example of wireless local area networks), IEEE 802.16 (as an example of wireless metropolitan networks) and IEEE 802.15 (as an example of body area networks). Each section on these three systems also closes with a set of model implementations that are available today.

  1. On the geometric modeling approach to empirical null distribution estimation for empirical Bayes modeling of multiple hypothesis testing.

    Science.gov (United States)

    Wu, Baolin

    2013-04-01

    We study the geometric modeling approach to estimating the null distribution for the empirical Bayes modeling of multiple hypothesis testing. The commonly used method is a nonparametric approach based on Poisson regression, which, however, can be unduly affected by the dependence among test statistics and perform very poorly under strong dependence. In this paper, we explore a finite mixture model based geometric modeling approach to empirical null distribution estimation and multiple hypothesis testing. Through simulations and applications to two public microarray data sets, we illustrate its competitive performance. Copyright © 2012 Elsevier Ltd. All rights reserved.
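
    The finite-mixture route to an empirical null can be sketched with an off-the-shelf Gaussian mixture: fit two components to the z-scores, read the heavier component as the null N(mu0, sigma0^2), and form local false discovery rates from the fitted densities. The simulated z-scores and the two-component choice below are illustrative assumptions, not the estimator studied in the paper.

        import numpy as np
        from scipy import stats
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(5)
        z = np.concatenate([rng.normal(0.2, 1.1, 9000),    # null, slightly off N(0,1)
                            rng.normal(3.0, 1.0, 1000)])   # non-null block

        gm = GaussianMixture(n_components=2, random_state=0).fit(z.reshape(-1, 1))
        k0 = int(np.argmax(gm.weights_))                   # heavier component ~ null
        mu0 = gm.means_[k0, 0]
        sd0 = np.sqrt(gm.covariances_[k0, 0, 0])
        p0 = gm.weights_[k0]

        # local fdr(z) = p0 * f0(z) / f(z), with f the fitted mixture density
        f = np.exp(gm.score_samples(z.reshape(-1, 1)))
        fdr = p0 * stats.norm.pdf(z, mu0, sd0) / f
        print(f"empirical null N({mu0:.2f}, {sd0:.2f}^2); "
              f"{np.mean(fdr < 0.2):.1%} flagged as non-null")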

  2. Cost of reproduction in the Queensland fruit fly: Y-model versus lethal protein hypothesis

    OpenAIRE

    Fanson, Benjamin G.; Fanson, Kerry V.; Taylor, Phillip W.

    2012-01-01

    The trade-off between lifespan and reproduction is commonly explained by differential allocation of limited resources. Recent research has shown that the ratio of protein to carbohydrate (P : C) of a fly's diet mediates the lifespan–reproduction trade-off, with higher P : C diets increasing egg production but decreasing lifespan. To test whether this P : C effect is because of changing allocation strategies (Y-model hypothesis) or detrimental effects of protein ingestion on lifespan (lethal p...

  3. The null hypothesis of GSEA, and a novel statistical model for competitive gene set analysis

    DEFF Research Database (Denmark)

    Debrabant, Birgit

    2017-01-01

    MOTIVATION: Competitive gene set analysis intends to assess whether a specific set of genes is more associated with a trait than the remaining genes. However, the statistical models assumed to date to underly these methods do not enable a clear cut formulation of the competitive null hypothesis. This is a major handicap to the interpretation of results obtained from a gene set analysis. RESULTS: This work presents a hierarchical statistical model based on the notion of dependence measures, which overcomes this problem. The two levels of the model naturally reflect the modular structure of many gene set...

  4. Habitat fragmentation, vole population fluctuations, and the ROMPA hypothesis: An experimental test using model landscapes.

    Science.gov (United States)

    Batzli, George O

    2016-11-01

    Increased habitat fragmentation leads to smaller size of habitat patches and to greater distance between patches. The ROMPA hypothesis (ratio of optimal to marginal patch area) uniquely links vole population fluctuations to the composition of the landscape. It states that as ROMPA decreases (fragmentation increases), vole population fluctuations will increase (including the tendency to display multi-annual cycles in abundance) because decreased proportions of optimal habitat result in greater population declines and longer recovery time after a harsh season. To date, only comparative observations in the field have supported the hypothesis. This paper reports the results of the first experimental test. I used prairie voles, Microtus ochrogaster, and mowed grassland to create model landscapes with 3 levels of ROMPA (high with 25% mowed, medium with 50% mowed and low with 75% mowed). As ROMPA decreased, distances between patches of favorable habitat (high cover) increased owing to a greater proportion of unfavorable (mowed) habitat. Results from the first year with intensive live trapping indicated that the preconditions for operation of the hypothesis existed (inversely density dependent emigration and, as ROMPA decreased, increased per capita mortality and decreased per capita movement between optimal patches). Nevertheless, contrary to the prediction of the hypothesis that populations in landscapes with high ROMPA should have the lowest variability, 5 years of trapping indicated that variability was lowest with medium ROMPA. The design of field experiments may never be perfect, but these results indicate that the ROMPA hypothesis needs further rigorous testing. © 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  5. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with random 0-1 weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
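
    The kind of risk comparison described above can be reproduced in miniature. The following hedged sketch simulates a one-regressor linear model, selects between the reduced (zero-slope) and full model by AIC, and contrasts the risk of the post-model-selection estimator with that of the always-full OLS estimator; the signal strength beta is an arbitrary choice that determines which estimator wins.

        import numpy as np

        rng = np.random.default_rng(1)
        beta, n, reps = 0.3, 50, 2000        # weak-signal regime (arbitrary choices)
        x = rng.normal(size=n)
        risk_full, risk_pms = [], []
        for _ in range(reps):
            y = beta * x + rng.normal(size=n)
            b_full = (x @ y) / (x @ x)                     # OLS slope (no intercept)
            rss_full = ((y - b_full * x) ** 2).sum()
            rss_red = (y ** 2).sum()                       # reduced model: slope = 0
            aic_full = n * np.log(rss_full / n) + 2 * 2    # slope + error variance
            aic_red = n * np.log(rss_red / n) + 2 * 1      # error variance only
            b_pms = b_full if aic_full < aic_red else 0.0  # post-model-selection estimate
            risk_full.append((b_full - beta) ** 2)
            risk_pms.append((b_pms - beta) ** 2)
        print("risk of full-model OLS:          ", np.mean(risk_full))
        print("risk of post-model-selection est:", np.mean(risk_pms))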

  6. A Heckman Selection-t Model

    KAUST Repository

    Marchenko, Yulia V.

    2012-03-01

    Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. This allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.
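
    A two-step Heckman-style fit conveys the mechanics the abstract builds on (the paper itself develops a full-ML selection-t model, not this normal-theory sketch): a probit selection equation yields an inverse Mills ratio that augments the outcome regression. All data-generating choices below are illustrative.

        import numpy as np
        import statsmodels.api as sm
        from scipy.stats import norm

        rng = np.random.default_rng(2)
        n = 5000
        x = rng.normal(size=n)                       # regressor in the wage equation
        z = rng.normal(size=n)                       # exclusion variable (selection only)
        u, e = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], n).T
        selected = (0.5 + 0.7 * x + z + u) > 0       # selection equation
        wage = 1.0 + 0.8 * x + e                     # outcome, observed only if selected

        # Step 1: probit of selection status, then the inverse Mills ratio.
        Xs = sm.add_constant(np.column_stack([x, z]))
        probit = sm.Probit(selected.astype(float), Xs).fit(disp=0)
        xb = Xs @ probit.params
        imr = norm.pdf(xb) / norm.cdf(xb)

        # Step 2: outcome regression on the selected sample, augmented with the IMR;
        # a significant IMR coefficient signals sample selection bias.
        Xo = sm.add_constant(np.column_stack([x[selected], imr[selected]]))
        print(sm.OLS(wage[selected], Xo).fit().params)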

  7. Bayesian Model Selection in Geophysics: The evidence

    Science.gov (United States)

    Vrugt, J. A.

    2016-12-01

    Bayesian inference has found widespread application and use in science and engineering to reconcile Earth system models with data, including prediction in space (interpolation), prediction in time (forecasting), assimilation of observations and deterministic/stochastic model output, and inference of the model parameters. Per Bayes' theorem, the posterior probability, P(H|D), of a hypothesis, H, given the data D, is equivalent to the product of its prior probability, P(H), and likelihood, L(H|D), divided by a normalization constant, P(D). In geophysics, the hypothesis, H, often constitutes a description (parameterization) of the subsurface for some entity of interest (e.g. porosity, moisture content). The normalization constant, P(D), is not required for inference of the subsurface structure, yet is of great value for model selection. Unfortunately, it is not particularly easy to estimate P(D) in practice. Here, I will introduce the various building blocks of a general-purpose method which provides robust and unbiased estimates of the evidence, P(D). This method uses multi-dimensional numerical integration of the posterior (parameter) distribution. I will then illustrate this new estimator by application to three competing subsurface models (hypotheses) using GPR travel time data from the South Oyster Bacterial Transport Site in Virginia, USA. The three subsurface models differ in their treatment of the porosity distribution and use (a) horizontal layering with fixed layer thicknesses, (b) vertical layering with fixed layer thicknesses, and (c) a multi-Gaussian field. The results of the new estimator are compared against the brute-force Monte Carlo method and the Laplace-Metropolis method.
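
    The brute-force Monte Carlo baseline mentioned at the end is easy to sketch: draw parameters from the prior and average the likelihood. The toy Gaussian model below stands in for a real subsurface parameterization and forward model.

        import numpy as np

        rng = np.random.default_rng(3)
        data = rng.normal(0.5, 1.0, size=20)          # synthetic observations

        def log_lik(theta):
            # Gaussian likelihood with known unit variance (toy stand-in for a
            # forward model of, e.g., GPR travel times).
            sq = (data[None, :] - theta[:, None]) ** 2
            return -0.5 * sq.sum(axis=1) - 0.5 * data.size * np.log(2 * np.pi)

        # Brute-force Monte Carlo: P(D) = E_prior[ L(H|D) ], averaged in log space.
        theta = rng.normal(0.0, 2.0, size=200_000)    # prior N(0, 2^2)
        ll = log_lik(theta)
        log_evidence = np.logaddexp.reduce(ll) - np.log(theta.size)
        print("log P(D) ~", log_evidence)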

  8. Domain-general biases in spatial localization: Evidence against a distorted body model hypothesis.

    Science.gov (United States)

    Medina, Jared; Duckett, Caitlin

    2017-07-01

    A number of studies have proposed the existence of a distorted body model of the hand. Supporting this hypothesis, judgments of the location of hand landmarks without vision are characterized by consistent distortions: wider knuckles and shorter finger lengths. We examined an alternative hypothesis in which these biases are caused by domain-general mechanisms, whereby participants overestimate the distance between consecutive localization judgments that are spatially close. To do so, we examined performance on a landmark localization task with the hand (Experiments 1-3) using a lag-1 analysis. We replicated the widened knuckle judgments of previous studies. Using the lag-1 analysis, we found evidence for a constant overestimation bias along the mediolateral hand axis, such that consecutive stimuli were perceived as farther apart when they were closer (e.g., index-middle knuckle) versus farther (index-pinky) in space. Controlling for this bias, we found no evidence for a distorted body model along the mediolateral hand axis. To examine whether similar widening biases could be found with noncorporeal stimuli, we asked participants to localize remembered dots on a hand-like array (Experiments 4-5). Mean localization judgments were wider than the actual positions along the primary array axis, similar to previous work with hands. As with proprioceptively defined stimuli, we found that this widening was primarily due to a constant overestimation bias. These results provide substantial evidence against a distorted body model hypothesis and support a domain-general model in which responses are biased away from the uncertainty distribution of the previous trial, leading to a constant overestimation bias. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Statistical power analysis a simple and general model for traditional and modern hypothesis tests

    CERN Document Server

    Murphy, Kevin R; Wolach, Allen

    2014-01-01

    Noted for its accessible approach, this text applies the latest approaches of power analysis to both null hypothesis and minimum-effect testing using the same basic unified model. Through the use of a few simple procedures and examples, the authors show readers with little expertise in statistical analysis how to obtain the values needed to carry out the power analysis for their research. Illustrations of how these analyses work and how they can be used to choose the appropriate criterion for defining statistically significant outcomes are sprinkled throughout. The book presents a simple and g

  10. Growth and renewable energy in Europe: A random effect model with evidence for neutrality hypothesis

    International Nuclear Information System (INIS)

    Menegaki, Angeliki N.

    2011-01-01

    This is an empirical study on the causal relationship between economic growth and renewable energy for 27 European countries in a multivariate panel framework over the period 1997-2007, using a random effect model and including final energy consumption, greenhouse gas emissions and employment as additional independent variables. Empirical results do not confirm causality between renewable energy consumption and GDP, although panel causality tests unfold short-run relationships between renewable energy and greenhouse gas emissions and employment. The estimated cointegration factor deviates from unity, indicating only a weak, if any, relationship between economic growth and renewable energy consumption in Europe. This suggests evidence for the neutrality hypothesis, which can partly be explained by the uneven and insufficient exploitation of renewable energy sources across Europe.

  11. The Variability Hypothesis: The History of a Biological Model of Sex Differences in Intelligence.

    Science.gov (United States)

    Shields, Stephanie A.

    1982-01-01

    Describes the origin and development of the variability hypothesis as applied to the study of social and psychological sex differences. Explores changes in the hypothesis over time, social and scientific factors that fostered its acceptance, and possible parallels between the variability hypothesis and contemporary theories of sex differences.…

  12. Modeling Water Flux through Crops based on the Optimum Water Use Hypothesis

    Science.gov (United States)

    Hosseini, Atefeh; Gayler, Sebastian; Konrad, Wilfried; Streck, Thilo

    2014-05-01

    Vegetation models can be used to predict plants' response to altering climate conditions. Stomatal conductance (gs) controls the diffusion of CO2 from the atmosphere into the leaf and water loss through transpiration, allowing plants to adjust to fluctuating environmental conditions. The hypothesis that stomata adapt optimally to their environment, maximizing assimilation (A) for a given amount of water loss through transpiration (E), was introduced by Cowan and Farquhar (1977). This theory provides a framework for modeling the interactions between vegetation dynamics and soil moisture that does not rely on empirical calibration, as long as photosynthetic canopy properties and the total amount of water available for transpiration are known. The current study introduces a new approach to implementing the optimization theory of stomatal conductance in a canopy gas exchange model. The adequacy of the new approach was tested in a real case study by comparing predicted diurnal cycles of assimilation and transpiration rates, as well as the variability of soil moisture, with observations at a winter wheat (Triticum aestivum cv. Cubus) field in southwest Germany. To analyze the impact of soil texture on stomatal regulation, three soil types were compared in a drying-soil simulation scenario. The soil water balance was calculated from measured precipitation and simulated transpiration using a single bucket model, where the soil within the root zone was assumed to be homogeneous. Since the model focuses on fully developed vegetation canopies, soil evaporation is considered negligible. The marginal water use efficiency can be expressed as the partial derivative of assimilation with respect to transpiration (∂A/∂E = Λ). Daily values of Λ were determined using the formalism of Lagrangian multipliers. Potential evapotranspiration (Penman-Monteith) and an effective reduction factor of root water uptake under unfavorable soil moisture conditions were used to estimate amounts of plant available water per
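
    Under stated toy assumptions (a saturating A(gs) curve and transpiration proportional to vapour pressure deficit), the optimality condition ∂A/∂E = Λ can be located numerically, for example as follows; every curve shape and parameter value here is an illustrative assumption, not the paper's parameterization.

        import numpy as np
        from scipy.optimize import minimize_scalar

        A_max, k = 20.0, 0.15        # saturating assimilation curve A(gs), toy values
        vpd, lam = 1.5, 800.0        # vapour pressure deficit (kPa) and Lambda

        def assimilation(gs):        # umol CO2 m-2 s-1
            return A_max * gs / (gs + k)

        def transpiration(gs):       # mol H2O m-2 s-1, proportional to VPD
            return 1.6 * gs * vpd / 101.3

        # Optimal gs maximizes A - Lambda*E, i.e. satisfies dA/dgs = Lambda*dE/dgs.
        res = minimize_scalar(lambda gs: -(assimilation(gs) - lam * transpiration(gs)),
                              bounds=(1e-4, 1.0), method="bounded")
        print(f"optimal gs = {res.x:.3f} mol m-2 s-1, A = {assimilation(res.x):.1f}")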

  13. Market disruption, cascading effects, and economic recovery:a life-cycle hypothesis model.

    Energy Technology Data Exchange (ETDEWEB)

    Sprigg, James A.

    2004-11-01

    This paper builds upon previous work [Sprigg and Ehlen, 2004] by introducing a bond market into a model of production and employment. The previous paper described an economy in which households choose whether to enter the labor and product markets based on wages and prices. Firms experiment with prices and employment levels to maximize their profits. We developed agent-based simulations using Aspen, a powerful economic modeling tool developed at Sandia, to demonstrate that multiple-firm economies converge toward the competitive equilibria typified by lower prices and higher output and employment, but also suffer from market noise stemming from consumer churn. In this paper we introduce a bond market as a mechanism for household savings. We simulate an economy of continuous overlapping generations in which each household grows older in the course of the simulation and continually revises its target level of savings according to a life-cycle hypothesis. Households can seek employment, earn income, purchase goods, and contribute to savings until they reach the mandatory retirement age; upon retirement households must draw from savings in order to purchase goods. This paper demonstrates the simultaneous convergence of product, labor, and savings markets to their calculated equilibria, and simulates how a disruption to a productive sector will create cascading effects in all markets. Subsequent work will use similar models to simulate how disruptions, such as terrorist attacks, would interplay with consumer confidence to affect financial markets and the broader economy.
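
    A stripped-down version of the life-cycle saving rule the households follow might look as below; the horizon lengths and wage are placeholders, and the real model of course embeds such a rule in an agent-based market simulation rather than a deterministic loop.

        # Minimal sketch: consumption is smoothed so that lifetime savings are
        # exhausted exactly at death (all parameters illustrative).
        working_years, retired_years, wage = 40, 20, 1.0
        consumption = wage * working_years / (working_years + retired_years)

        savings = 0.0
        for year in range(working_years + retired_years):
            if year < working_years:
                savings += wage - consumption     # accumulate while employed
            else:
                savings -= consumption            # draw down after retirement
        print("terminal savings ~", round(savings, 6))   # ~0 by construction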

  14. Low-energy effective models for two-flavor quantum chromodynamics and the universality hypothesis

    International Nuclear Information System (INIS)

    Grahl, Mara

    2014-01-01

    This thesis centers on the question of the order of the chiral phase transition in two-flavor QCD. First we outline several general aspects of phase transitions which are of central importance for understanding the RG approach to them. Our focus lies on reviewing the universality hypothesis, a crucial ingredient in the construction of effective theories for order parameters, whose credibility often depends heavily on universality arguments. We finish the chapter with an attempt to formulate the latter more precisely than is usually done. The next chapter discusses the chiral phase transition from a general point of view. We supplement well-known facts with a detailed discussion of the so-called O(4) conjecture. Thereafter we introduce the nonperturbative method we use, the FRG method. Furthermore, we discuss the relation between effective models for QCD and the underlying fundamental theory from the FRG perspective. The next chapter is concerned with a mathematical subject indispensable for our approach to the study of phase transitions, namely the systematic construction of polynomial invariants characterizing a given symmetry. With this thesis we point out its relevance in the context of high-energy physics. We present a simple, but novel, brute-force algorithm to effectively construct invariants of a given polynomial order. The next chapter is devoted to RG studies of several dimensionally reduced theories, which are capable of either predicting or ruling out the possible existence of a second-order phase transition. Of main interest to us is the linear sigma model, particularly in the presence of the axial anomaly. It turns out that the fixed-point structure of the latter is rather complicated, requiring a deeper understanding of the underlying method and its preconditions. This leads us to a careful analysis of the fixed-point structure of several models, which is of great benefit for our review of the

  15. Voter models with heterozygosity selection

    Czech Academy of Sciences Publication Activity Database

    Sturm, A.; Swart, Jan M.

    2008-01-01

    Roč. 18, č. 1 (2008), s. 59-99 ISSN 1050-5164 R&D Projects: GA ČR GA201/06/1323; GA ČR GA201/07/0237 Institutional research plan: CEZ:AV0Z10750506 Keywords : Heterozygosity selection * rebellious voter model * branching * annihilation * survival * coexistence Subject RIV: BA - General Mathematics Impact factor: 1.285, year: 2008

  16. Modeling evolution of the mind and cultures: emotional Sapir-Whorf hypothesis

    Science.gov (United States)

    Perlovsky, Leonid I.

    2009-05-01

    Evolution of cultures is ultimately determined by mechanisms of the human mind. The paper discusses the mechanisms of the evolution of language from primordial undifferentiated animal cries to contemporary conceptual contents. In parallel with the differentiation of conceptual contents, the conceptual contents were differentiated from the emotional contents of languages. The paper suggests the neural brain mechanisms involved in these processes. Experimental evidence and theoretical arguments are discussed, including mathematical approaches to cognition and language: modeling fields theory, the knowledge instinct, and the dual model connecting language and cognition. Mathematical results are related to cognitive science, linguistics, and psychology. The paper gives an initial mathematical formulation and mean-field equations for the hierarchical dynamics of both the human mind and culture. In the mind heterarchy, operation of the knowledge instinct manifests through mechanisms of differentiation and synthesis. The emotional contents of language are related to language grammar. The conclusion is an emotional version of the Sapir-Whorf hypothesis. The cultural advantages of "conceptual" pragmatic cultures, in which emotionality of language is diminished and differentiation overtakes synthesis, resulting in fast evolution at the price of self-doubts and internal crises, are compared to those of traditional cultures, where differentiation lags behind synthesis, resulting in cultural stability at the price of stagnation. A multi-language, multi-ethnic society might combine the benefits of stability and fast differentiation. Unsolved problems and future theoretical and experimental directions are discussed.

  17. Modeling N Cycling during Succession after Forest Disturbance: an Analysis of N Mining and Retention Hypothesis

    Science.gov (United States)

    Zhou, Z.; Ollinger, S. V.; Ouimette, A.; Lovett, G. M.; Fuss, C. B.; Goodale, C. L.

    2017-12-01

    Dissolved inorganic nitrogen losses at the Hubbard Brook Experimental Forest (HBEF), New Hampshire, USA, have declined in recent decades, a pattern that counters expectations based on prevailing theory. An unbalanced ecosystem nitrogen (N) budget implies a missing N sink component. Hypotheses to explain this discrepancy include increasing rates of denitrification and accumulation of N in mineral soil pools following N mining by plants. Here, we conducted a modeling analysis fused with field measurements of N cycling, specifically examining the hypothesis of N mining and retention in mineral soils. We included simplified representations of both mechanisms, N mining and retention, in a revised ecosystem process model, PnET-SOM, to evaluate the dynamics of N cycling during succession after forest disturbance at the HBEF. The predicted N mining during early succession was regulated by a metric representing the potential demand for extra soil N for large wood growth. The accumulation of nitrate in mineral soil pools was a function of the net aboveground biomass accumulation and soil N availability, and was parameterized based on field 15N tracer incubation data. The predicted patterns of forest N dynamics were consistent with observations. The addition of the new algorithms also improved the predicted DIN export in stream water, with an R squared of 0.35 (P [...]) during succession, and soil retention of about 35% at the current forest stage at the HBEF.

  18. Cohesion, Adaptability and Communication: A Test of an Olson Circumplex Model Hypothesis.

    Science.gov (United States)

    Anderson, Stephen A.

    1986-01-01

    Examined the hypothesis that families balanced on the dimensions of cohesion and adaptability would evidence more positive communication skills. In general, the hypothesis was supported. However, the results were more consistent for the cohesion dimension than for adaptability. Also, sex differences were found. (Author/BL)

  19. Elevated Heat Pump hypothesis validation by using satellite data and CMIP5 climate model simulations

    Science.gov (United States)

    Biondi, R.; Cagnazzo, C.; Cairo, F.; Fierli, F.

    2016-12-01

    Air pollution plays an important role in the health of the populations of south Asian countries owing to increasing emissions of atmospheric pollutants connected to population growth and industrial development. At the same time, monsoon rainfall trends and patterns have changed, causing serious economic and societal impacts. In this study we have analyzed the link between aerosols and the monsoon system, focusing on a specific mechanism: the Elevated Heat Pump (EHP) hypothesis. According to the EHP, the load of dust, organic carbon and black carbon in the pre-monsoon season over the Indo-Gangetic Plain and the foothills of the Himalayas induces enhanced warming in the middle and upper troposphere and changes the convection patterns. As a consequence, rainfall over northern India in late spring and early summer increases and rainfall over all of India in late summer decreases. However, conclusions are still debated and the proposed mechanism carries large uncertainties, with ambiguity arising from the lack of real observations and from inconsistency among measurements. By using Historical Natural runs of 3 different Coupled Model Intercomparison Project Phase 5 (CMIP5) models with interactive aerosol loading, we have analysed the variation of precipitation and atmospheric temperature corresponding to high and low aerosol load years over a time range of 160 years. To deepen the study and validate the model results, we have also included in our analyses the International Satellite Cloud Climatology Project (ISCCP) Deep Convective Tracking Database and GPS Radio Occultation (RO) measurements. Our preliminary results with the models and the two satellite data sets do not show significant evidence of the EHP in terms of convection patterns, while the middle and upper troposphere thermal structure is consistent with previous findings.

  20. Application of Multilevel Models to Morphometric Data. Part 1. Linear Models and Hypothesis Testing

    Directory of Open Access Journals (Sweden)

    O. Tsybrovskyy

    2003-01-01

    Morphometric data usually have a hierarchical structure (i.e., cells are nested within patients), which should be taken into consideration in the analysis. In recent years, special methods of handling hierarchical data, called multilevel models (MM), as well as corresponding software, have received considerable development. However, there has been no application of these methods to morphometric data yet. In this paper we report our first experience of analyzing karyometric data by means of MLwiN – a dedicated program for multilevel modeling. Our data were obtained from 34 follicular adenomas and 44 follicular carcinomas of the thyroid. We show examples of fitting and interpreting MM of different complexity, and draw a number of interesting conclusions about the differences in nuclear morphology between follicular thyroid adenomas and carcinomas. We also demonstrate substantial advantages of multilevel models over conventional, single‐level statistics, which have previously been adopted to analyze karyometric data. In addition, some theoretical issues related to MM as well as major statistical software for MM are briefly reviewed.
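
    For readers outside the MLwiN ecosystem, an equivalent random-intercept analysis can be sketched with statsmodels' MixedLM; the karyometric numbers below are simulated, not the authors' data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        n_pat, n_cells = 30, 50
        patient = np.repeat(np.arange(n_pat), n_cells)
        carcinoma = np.repeat(rng.integers(0, 2, n_pat), n_cells)  # patient-level dx
        pat_effect = np.repeat(rng.normal(0, 2, n_pat), n_cells)   # level-2 variance
        area = 40 + 5 * carcinoma + pat_effect + rng.normal(0, 4, patient.size)
        df = pd.DataFrame({"area": area, "carcinoma": carcinoma, "patient": patient})

        # Random-intercept model: cells (level 1) nested within patients (level 2).
        fit = smf.mixedlm("area ~ carcinoma", df, groups=df["patient"]).fit()
        print(fit.summary())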

  1. General hypothesis and shell model for the synthesis of semiconductor nanotubes, including carbon nanotubes

    Science.gov (United States)

    Mohammad, S. Noor

    2010-09-01

    Semiconductor nanotubes, including carbon nanotubes, have vast potential for new technology development. The fundamental physics and growth kinetics of these nanotubes are still obscured. Various models developed to elucidate the growth suffer from limited applicability. An in-depth investigation of the fundamentals of nanotube growth has, therefore, been carried out. For this investigation, various features of nanotube growth, and the role of the foreign element catalytic agent (FECA) in this growth, have been considered. Observed growth anomalies have been analyzed. Based on this analysis, a new shell model and a general hypothesis have been proposed for the growth. The essential element of the shell model is the seed generated from segregation during growth. The seed structure has been defined, and the formation of droplet from this seed has been described. A modified definition of the droplet exhibiting adhesive properties has also been presented. Various characteristics of the droplet, required for alignment and organization of atoms into tubular forms, have been discussed. Employing the shell model, plausible scenarios for the formation of carbon nanotubes, and the variation in the characteristics of these carbon nanotubes have been articulated. The experimental evidences, for example, for the formation of shell around a core, dipole characteristics of the seed, and the existence of nanopores in the seed, have been presented. They appear to justify the validity of the proposed model. The diversities of nanotube characteristics, fundamentals underlying the creation of bamboo-shaped carbon nanotubes, and the impurity generation on the surface of carbon nanotubes have been elucidated. The catalytic action of FECA on growth has been quantified. The applicability of the proposed model to the nanotube growth by a variety of mechanisms has been elaborated. These mechanisms include the vapor-liquid-solid mechanism, the oxide-assisted growth mechanism, the self

  2. Mutation-selection models of codon substitution and their use to estimate selective strengths on codon usage

    DEFF Research Database (Denmark)

    Yang, Ziheng; Nielsen, Rasmus

    2008-01-01

    Current models of codon substitution are formulated at the level of nucleotide substitution and do not explicitly consider the separate effects of mutation and selection. They are thus incapable of inferring whether mutation or selection is responsible for evolution at silent sites. Here we implement a few population genetics models of codon substitution that explicitly consider mutation bias and natural selection at the DNA level. Selection on codon usage is modeled by introducing codon-fitness parameters, which together with mutation-bias parameters, predict optimal codon frequencies... to examine the null hypothesis that codon usage is due to mutation bias alone, not influenced by natural selection. Application of the test to the mammalian data led to rejection of the null hypothesis in most genes, suggesting that natural selection may be a driving force in the evolution of synonymous...
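
    The equilibrium codon frequencies that such mutation-selection (FMutSel-style) models predict follow from the mutation bias and the codon-fitness parameters. A toy calculation for one fourfold-degenerate site, with invented base frequencies and fitnesses:

        import numpy as np

        rng = np.random.default_rng(5)
        # Mutation bias: unequal base frequencies at third codon positions.
        base_freq = {"A": 0.20, "C": 0.30, "G": 0.35, "T": 0.15}
        codons = ["GCA", "GCC", "GCG", "GCT"]          # fourfold-degenerate site
        fitness = rng.normal(0, 0.5, len(codons))      # codon-fitness parameters F_j

        # Equilibrium frequency: pi_j proportional to (mutation term) * exp(F_j).
        mut = np.array([base_freq[c[2]] for c in codons])
        pi = mut * np.exp(fitness)
        pi /= pi.sum()
        for codon, p in zip(codons, pi):
            print(codon, round(float(p), 3))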

  3. Molecular phylogeny of selected species of the order Dinophysiales (Dinophyceae) - testing the hypothesis of a Dinophysioid radiation

    DEFF Research Database (Denmark)

    Jensen, Maria Hastrup; Daugbjerg, Niels

    2009-01-01

    ...additional information on morphology and ecology to these evolutionary lineages. We have for the first time combined morphological information with molecular phylogenies to test the dinophysioid radiation hypothesis in a modern context. Nuclear-encoded LSU rDNA sequences including domains D1-D6 from 27... The phylogenetic trees furthermore revealed convergent evolution of several morphological characters in the dinophysioids. According to the molecular data, the dinophysioids appeared to have evolved quite differently from the radiation schemes previously hypothesized. Four dinophysioid species had identical LSU r...

  4. Model selection for univariable fractional polynomials.

    Science.gov (United States)

    Royston, Patrick

    2017-07-01

    Since Royston and Altman's 1994 publication ( Journal of the Royal Statistical Society, Series C 43: 429-467), fractional polynomials have steadily gained popularity as a tool for flexible parametric modeling of regression relationships. In this article, I present fp_select, a postestimation tool for fp that allows the user to select a parsimonious fractional polynomial model according to a closed test procedure called the fractional polynomial selection procedure or function selection procedure. I also give a brief introduction to fractional polynomial models and provide examples of using fp and fp_select to select such models with real data.

  5. Consistent Positive Co-Variation between Fluctuating Asymmetry and Sexual Trait Size: A Challenge to the Developmental Instability-Sexual Selection Hypothesis

    Directory of Open Access Journals (Sweden)

    Michal Polak

    2015-06-01

    The developmental instability (DI)-sexual selection hypothesis proposes that large size and symmetry in secondary sexual traits are favored by sexual selection because they reveal genetic quality. A critical prediction of this hypothesis is that there should exist negative correlations between trait fluctuating asymmetry (FA) and the size of condition-dependent sexual traits; condition-dependent traits should reveal an organism's overall health and vigor, and be influenced by a multitude of genetic loci. Here, we tested for the predicted negative FA-size correlations in the male sex comb of Drosophila bipectinata. Among field-caught males from five widely separated geographic localities, FA-size correlations were consistently positive, despite evidence that sex comb size is condition dependent. After controlling for trait size, FA was significantly negatively correlated with body size within several populations, indicating that developmental instability in the comb may reveal individual genetic quality. We suggest the possibility that condition-dependent traits in some cases tap into independent units of the genome (a restricted set of genes), rather than signaling overall genetic properties of the organism. There were pronounced among-population differences in both comb FA and size, and these traits were positively correlated across populations, recapitulating the within-population patterns. We conclude that the results are inconsistent with the DI-sexual selection hypothesis, and discuss potential reasons for positive FA-size co-variation in sexual traits.

  6. A Hypothesis and Review of the Relationship between Selection for Improved Production Efficiency, Coping Behavior, and Domestication

    Directory of Open Access Journals (Sweden)

    Wendy M. Rauw

    2017-09-01

    Coping styles in response to stressors have been described both in humans and in other animal species. Because coping styles are directly related to individual fitness, they are part of the life-history strategy. Behavioral styles trade off with other life-history traits through the acquisition and allocation of resources. Domestication and subsequent artificial selection for production traits specifically focused on selection of individuals with energy-sparing mechanisms for non-production traits. Domestication resulted in animals with low levels of aggression and activity, and a low hypothalamic–pituitary–adrenal (HPA) axis reactivity. In the present work, we propose that, vice versa, selection for improved production efficiency may to some extent continue to favor docile domesticated phenotypes. It is hypothesized that both domestication and selection for improved production efficiency may result in the selection of reactive-style animals. Both domesticated and reactive-style animals are characterized by low levels of aggression and activity, and increased serotonin neurotransmitter levels. However, whereas domestication quite consistently results in a decrease in the functional state of the HPA axis, the reactive coping style is often found to be dominated by a high HPA response. This may suggest that fearfulness and coping behavior are two independent underlying dimensions of the coping response. Although it is generally proposed that animal welfare improves with selection for calmer animals that are less fearful and reactive to novelty, animals bred to be less sensitive with fewer desires may be undesirable from an ethical point of view.

  7. Neurospora and the dead-end hypothesis: genomic consequences of selfing in the model genus.

    Science.gov (United States)

    Gioti, Anastasia; Stajich, Jason E; Johannesson, Hanna

    2013-12-01

    It is becoming increasingly evident that adoption of different reproductive strategies, such as sexual selfing and asexuality, greatly impacts genome evolution. In this study, we test theoretical predictions on genomic maladaptation of selfing lineages using empirical data from the model fungus Neurospora. We sequenced the genomes of four species representing distinct transitions to selfing within the history of the genus, as well as the transcriptome of one of these, and compared with available data from three outcrossing species. Our results provide evidence for a relaxation of purifying selection in protein-coding genes and for a reduced efficiency of transposable element silencing by Repeat Induced Point mutation. A reduction in adaptive evolution was also identified in the form of reduced codon usage bias in highly expressed genes of selfing Neurospora, but this result may be confounded by mutational bias. Potentially counteracting these negative effects, the nucleotide substitution rate and the spread of transposons is reduced in selfing species. We suggest that differences in substitution rate relate to the absence, in selfing Neurospora, of the asexual pathway producing conidia. Our results support the dead-end theory and show that Neurospora genomes bear signatures of both sexual and asexual reproductive mode. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  8. The Role of Selection Effects in the Contact Hypothesis: Results from a U.S. National Survey on Sexual Prejudice.

    Science.gov (United States)

    Loehr, Annalise; Doan, Long; Miller, Lisa R

    2015-11-01

    Empirical research has documented that contact with lesbians and gays is associated with more positive feelings toward and greater support for legal rights for them, but we know less about whether these effects extend to informal aspects of same-sex relationships, such as reactions to public displays of affection. Furthermore, many studies have assumed that contact influences levels of sexual prejudice; however, the possibility of selection effects, in which less sexually prejudiced people have contact, and more sexually prejudiced people do not, raises some doubts about this assumption. We used original data from a nationally representative sample of heterosexuals to determine whether those reporting contact with a lesbian, gay, bisexual, or transgender friend or relative exhibited less sexual prejudice toward lesbian and gay couples than those without contact. This study examined the effect of contact on attitudes toward formal rights and a relatively unexplored dimension, informal privileges. We estimated the effect of having contact using traditional (ordinary least squares regression) methods before accounting for selection effects using propensity score matching. After accounting for selection effects, we found no significant differences between the attitudes of those who had contact and those who did not, for either formal or informal measures. Thus, selection effects appeared to play a pivotal role in confounding the link between contact and sexual prejudice, and future studies should exercise caution in interpreting results that do not account for such selection effects.
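
    The propensity-score matching step can be sketched as follows: a logistic model of contact on covariates supplies scores, and each treated unit is matched to its nearest control. The covariates, effect sizes, and the zero true contact effect below are all fabricated, chosen to show how a naive gap can be pure selection.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(6)
        n = 4000
        covs = rng.normal(size=(n, 3))               # e.g. age, education, ideology
        p_contact = 1 / (1 + np.exp(-(covs @ np.array([0.8, 0.5, -0.6]))))
        contact = rng.random(n) < p_contact          # selection into contact
        # True contact effect is zero: any raw gap is selection, not contact.
        prejudice = 2.0 + covs @ np.array([-0.5, -0.3, 0.4]) + rng.normal(0, 1, n)

        ps = LogisticRegression().fit(covs, contact).predict_proba(covs)[:, 1]
        treated, control = np.where(contact)[0], np.where(~contact)[0]
        nn = NearestNeighbors(n_neighbors=1).fit(ps[control, None])
        matched = control[nn.kneighbors(ps[treated, None])[1].ravel()]

        print("naive difference:  ", prejudice[treated].mean() - prejudice[control].mean())
        print("matched difference:", (prejudice[treated] - prejudice[matched]).mean())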

  9. Hypothesis testing of matrix graph model with application to brain connectivity analysis.

    Science.gov (United States)

    Xia, Yin; Li, Lexin

    2017-09-01

    Brain connectivity analysis is now at the foreground of neuroscience research. A connectivity network is characterized by a graph, where nodes represent neural elements such as neurons and brain regions, and links represent statistical dependence that is often encoded in terms of partial correlation. Such a graph is inferred from the matrix-valued neuroimaging data such as electroencephalography and functional magnetic resonance imaging. There have been a good number of successful proposals for sparse precision matrix estimation under normal or matrix normal distribution; however, this family of solutions does not offer a direct statistical significance quantification for the estimated links. In this article, we adopt a matrix normal distribution framework and formulate the brain connectivity analysis as a precision matrix hypothesis testing problem. Based on the separable spatial-temporal dependence structure, we develop oracle and data-driven procedures to test both the global hypothesis that all spatial locations are conditionally independent, and simultaneous tests for identifying conditional dependent spatial locations with false discovery rate control. Our theoretical results show that the data-driven procedures perform asymptotically as well as the oracle procedures and enjoy certain optimality properties. The empirical finite-sample performance of the proposed tests is studied via intensive simulations, and the new tests are applied on a real electroencephalography data analysis. © 2016, The International Biometric Society.
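
    On the estimation side only (the paper's contribution is the testing procedure, which is not reproduced here), partial correlations follow from a precision-matrix estimate by the usual rescaling. A sketch with simulated chain-structured "regions":

        import numpy as np
        from sklearn.covariance import GraphicalLassoCV

        rng = np.random.default_rng(7)
        n, p = 500, 6                       # toy data: 6 "regions", chain-dependent
        X = np.zeros((n, p))
        X[:, 0] = rng.normal(size=n)
        for j in range(1, p):
            X[:, j] = 0.6 * X[:, j - 1] + rng.normal(size=n)
        X = (X - X.mean(0)) / X.std(0)

        omega = GraphicalLassoCV().fit(X).precision_
        d = np.sqrt(np.diag(omega))
        pcorr = -omega / np.outer(d, d)     # partial correlation given all others
        np.fill_diagonal(pcorr, 1.0)
        print(np.round(pcorr, 2))           # strong entries only on the first band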

  10. Selected sports talent development models

    OpenAIRE

    Michal Vičar

    2017-01-01

    Background: Sports talent in the Czech Republic is generally viewed as a static, stable phenomenon. This stands in contrast with the widespread practice in Anglo-Saxon countries of emphasising its fluctuating nature, which is reflected in the current models describing its development. Objectives: The aim is to introduce current models of talent development in sport. Methods: Comparison and analysis of the following models: Balyi - Long term athlete development model, Côté - Developmen...

  11. MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS

    International Nuclear Information System (INIS)

    Asensio Ramos, A.; Manso Sainz, R.; Martínez González, M. J.; Socas-Navarro, H.; Viticchié, B.; Orozco Suárez, D.

    2012-01-01

    Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.

  12. VEMAP 1: Selected Model Results

    Data.gov (United States)

    National Aeronautics and Space Administration — The Vegetation/Ecosystem Modeling and Analysis Project (VEMAP) was a multi-institutional, international effort addressing the response of biogeography and...

  13. Can evidence from genome-wide association studies and positive natural selection surveys be used to evaluate the thrifty gene hypothesis in East Asians?

    Science.gov (United States)

    Koh, Xuan-Han; Liu, Xuanyao; Teo, Yik-Ying

    2014-01-01

    Body fat deposition and distribution differ between East Asians and Europeans, and for the same level of obesity, East Asians are at higher risk of Type 2 diabetes (T2D) and other metabolic disorders. This observation has prompted the reclassification of body mass index thresholds for the definitions of "overweight" and "obese" in East Asians. However, the question remains over what evolutionary mechanisms have driven the differences in adiposity morphology between two population groups that shared a common ancestor less than 80,000 years ago. The Thrifty Gene hypothesis has been suggested as a possible explanation, whereby genetic factors that allowed for efficient food-energy conversion and storage were evolutionarily favoured by conferring increased chances of survival and fertility. Here, we leveraged the existing findings from genome-wide association studies and large-scale surveys of positive natural selection to evaluate whether there is currently any evidence to support the Thrifty Gene hypothesis. We first assess whether the existing genetic associations with obesity and T2D are located in genomic regions that are reported to be under positive selection, and if so, whether the risk alleles sit on the extended haplotype forms. In addition, we interrogate whether these risk alleles are the derived forms that differ from the ancestral alleles, and whether there is significant evidence of population differentiation at these SNPs between East Asian and European populations. Our systematic survey did not yield conclusive evidence to support the Thrifty Gene hypothesis as a possible explanation for the differences observed between East Asians and Europeans.

  15. The linear utility model for optimal selection

    NARCIS (Netherlands)

    Mellenbergh, Gideon J.; van der Linden, Willem J.

    A linear utility model is introduced for optimal selection when several subpopulations of applicants are to be distinguished. Using this model, procedures are described for obtaining optimal cutting scores in subpopulations in quota-free as well as quota-restricted selection situations. The cutting

  17. Exploring Several Methods of Groundwater Model Selection

    Science.gov (United States)

    Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar

    2017-04-01

    Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with Model Muse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted using the following four approaches: (1) rank the models using the root mean square error (RMSE) obtained after UCODE-based model calibration; (2) calculate model probability using the GLUE method; (3) evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC); and (4) evaluate model weights using the Fuzzy Multi-Criteria-Decision-Making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and the fuzzy technique for order performance, which identifies the ideal solution by gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in the model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting groundwater flow models. These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).
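
    For reference, the information criteria named in approach (3) can be computed directly from each model's calibration fit; the RMSE values and observation count below are hypothetical stand-ins for the six models, not the study's results.

        import numpy as np

        def criteria(rss, n, k):
            """AIC, AICc, and BIC for a Gaussian-error model with k parameters,
            computed from the residual sum of squares over n calibration targets."""
            ll = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)   # max log-likelihood
            aic = -2 * ll + 2 * k
            aicc = aic + 2 * k * (k + 1) / (n - k - 1)
            bic = -2 * ll + k * np.log(n)
            return aic, aicc, bic

        # Hypothetical head-calibration results (RMSE in m) for six models:
        n_obs = 120
        for name, rmse, k in [("M1", 1.90, 6), ("M2", 1.70, 10), ("M3", 1.40, 10),
                              ("M4", 1.38, 13), ("M5", 1.45, 13), ("M6", 1.37, 15)]:
            aic, aicc, bic = criteria(n_obs * rmse**2, n_obs, k)
            print(name, [round(v, 1) for v in (aic, aicc, bic)])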

  18. Testing the strain hypothesis of the Demand Control Model to explain severe bullying at work

    NARCIS (Netherlands)

    Notelaers, G.; Baillien, E.; de Witte, H.; Einarsen, S.; Vermunt, J.K.

    2013-01-01

    Workplace bullying has often been attributed to work-related stress, and has been linked to the Job Demand Control Model. The current study aims to further these studies by testing the model for bullying in a heterogeneous sample and by using latent class (LC)-analyses to define different demands

  19. Bayesian estimation and hypothesis tests for a circular Generalized Linear Model

    NARCIS (Netherlands)

    Mulder, Kees; Klugkist, Irene

    2017-01-01

    Motivated by a study from cognitive psychology, we develop a Generalized Linear Model for circular data within the Bayesian framework, using the von Mises distribution. Although circular data arise in a wide variety of scientific fields, the number of methods for their analysis is limited. Our model

  20. Disruption of the LTD dialogue between the cerebellum and the cortex in Angelman syndrome model: a timing hypothesis

    Directory of Open Access Journals (Sweden)

    Guy eCheron

    2014-11-01

    Angelman syndrome is a genetic neurodevelopmental disorder in which impairment of cerebellar functioning has been documented despite the absence of gross structural abnormalities. Characteristically, a spontaneous 160 Hz oscillation emerges in the Purkinje cell network of the Ube3am-/p+ Angelman mouse model. This abnormal oscillation is induced by enhanced Purkinje cell rhythmicity and hypersynchrony along the parallel fiber beam. We present a pathophysiological hypothesis for the neurophysiology underlying major aspects of the clinical phenotype of Angelman syndrome, including cognitive, language and motor deficits, involving long-range connections between the cerebellar and cortical networks. This hypothesis states that the alteration of cerebellar rhythmic activity impinges on cerebellar long-term depression (LTD) plasticity, which in turn alters LTD plasticity in the cerebral cortex. The hypothesis was based on preliminary experiments using electrical stimulation of the whisker pad in alert mice, showing that after an 8 Hz LTD-inducing protocol, the cerebellar LTD accompanied by a delayed response in wild-type mice is missing in Ube3am-/p+ mice, and that the LTD induced in the barrel cortex following the same peripheral stimulation in wild-type mice is reversed into an LTP in Ube3am-/p+ mice. The control exerted by the cerebellum on the excitation vs inhibition balance in the cerebral cortex, and the possible role played by the timing plasticity of the Purkinje cell LTD on the spike-timing-dependent plasticity (STDP) of the pyramidal neurons, are discussed in the context of the present hypothesis.

  1. A Theoretical Hypothesis on Ferris Wheel Model of University Social Responsibility

    OpenAIRE

    Le Kang

    2016-01-01

    According to the nature of the university as a free and responsible academic community, USR is based on a different foundation, academic responsibility, so the Pyramid and the IC Model of CSR cannot fully explain the most distinctive feature of USR. This paper puts forward a new model, the Ferris Wheel Model, to illustrate the nature of USR and the process of its achievement. The Ferris Wheel Model of USR shows the university creates a balanced, fairness and neutrality systemic structu...

  2. Time-varying disaster risk models: An empirical assessment of the Rietz-Barro hypothesis

    DEFF Research Database (Denmark)

    Irarrazabal, Alfonso; Parra-Alvarez, Juan Carlos

    This paper revisits the fit of disaster risk models where a representative agent has recursive preferences and the probability of a macroeconomic disaster changes over time. We calibrate the model as in Wachter (2013) and perform two sets of tests to assess the empirical performance of the model ...... and hence to reduce the Sharpe Ratio, a lower elasticity of substitution generates a more reasonable level for the equity risk premium and for the volatility of the government bond returns without compromising the ability of the price-dividend ratio to predict excess returns....

  3. Selection of classification models from repository of model for water ...

    African Journals Online (AJOL)

    This paper proposes a new technique, the Model Selection Technique (MST), for selecting and ranking models from a repository of models by combining three performance measures (Acc, TPR and TNR). The technique assigns a weight to each performance measure to find the most suitable model from the repository of ...

  4. Stressful life transitions and wellbeing: A comparison of the stress buffering hypothesis and the social identity model of identity change.

    Science.gov (United States)

    Praharso, Nurul F; Tear, Morgan J; Cruwys, Tegan

    2017-01-01

    The relationship between stressful life transitions and wellbeing is well established, however, the protective role of social connectedness has received mixed support. We test two theoretical models, the Stress Buffering Hypothesis and the Social Identity Model of Identity Change, to determine which best explains the relationship between social connectedness, stress, and wellbeing. Study 1 (N=165) was an experiment in which participants considered the impact of moving cities versus receiving a serious health diagnosis. Study 2 (N=79) was a longitudinal study that examined the adjustment of international students to university over the course of their first semester. Both studies found limited evidence for the buffering role of social support as predicted by the Stress Buffering Hypothesis; instead people who experienced a loss of social identities as a result of a stressor had a subsequent decline in wellbeing, consistent with the Social Identity Model of Identity Change. We conclude that stressful life events are best conceptualised as identity transitions. Such events are more likely to be perceived as stressful and compromise wellbeing when they entail identity loss. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Hypothesis Testing of Edge Organizations: Empirically Calibrating an Organizational Model for Experimentation

    Science.gov (United States)

    2007-06-01

  6. Testing the model-observer similarity hypothesis with text-based worked examples

    NARCIS (Netherlands)

    Hoogerheide, V.; Loyens, S.M.M.; Jadi, Fedora; Vrins, Anna; van Gog, T.

    2017-01-01

    Example-based learning is a very effective and efficient instructional strategy for novices. It can be implemented using text-based worked examples that provide a written demonstration of how to perform a task, or (video) modelling examples in which an instructor (the ‘model’) provides a

  7. Hypothesis test of mediation effect in causal mediation model with high-dimensional continuous mediators.

    Science.gov (United States)

    Huang, Yen-Tsung; Pan, Wen-Chi

    2016-06-01

    Causal mediation modeling has become a popular approach for studying the effect of an exposure on an outcome through a mediator. However, current methods are not applicable to the setting with a large number of mediators. We propose a testing procedure for mediation effects of high-dimensional continuous mediators. We characterize the marginal mediation effect, the multivariate component-wise mediation effects, and the L2 norm of the component-wise effects, and develop a Monte-Carlo procedure for evaluating their statistical significance. To accommodate the setting with a large number of mediators and a small sample size, we further propose a transformation model using the spectral decomposition. Under the transformation model, mediation effects can be estimated using a series of regression models with a univariate transformed mediator, and examined by our proposed testing procedure. Extensive simulation studies are conducted to assess the performance of our methods for continuous and dichotomous outcomes. We apply the methods to analyze genomic data investigating the effect of microRNA miR-223 on a dichotomous survival status of patients with glioblastoma multiforme (GBM). We identify nine gene ontology sets with expression values that significantly mediate the effect of miR-223 on GBM survival. © 2015, The International Biometric Society.
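
    For a single continuous mediator, the Monte-Carlo flavor of such a test reduces to simulating the product of the two path coefficients. The sketch below uses fabricated data and a normal approximation to each coefficient's sampling distribution; the paper's procedure handles many mediators jointly, which this does not.

        import numpy as np

        rng = np.random.default_rng(8)
        n = 300
        exposure = rng.normal(size=n)                     # e.g. miR-223 expression
        mediator = 0.4 * exposure + rng.normal(size=n)    # one continuous mediator
        outcome = 0.3 * mediator + 0.1 * exposure + rng.normal(size=n)

        # Path a (mediator ~ exposure) and path b (outcome ~ mediator + exposure).
        Xa = np.column_stack([np.ones(n), exposure])
        a_hat, *_ = np.linalg.lstsq(Xa, mediator, rcond=None)
        Xb = np.column_stack([np.ones(n), mediator, exposure])
        b_hat, *_ = np.linalg.lstsq(Xb, outcome, rcond=None)
        a, b = a_hat[1], b_hat[1]

        # Standard errors of a and b from the usual OLS formulas.
        s2a = ((mediator - Xa @ a_hat) ** 2).sum() / (n - 2)
        se_a = np.sqrt(s2a * np.linalg.inv(Xa.T @ Xa)[1, 1])
        s2b = ((outcome - Xb @ b_hat) ** 2).sum() / (n - 3)
        se_b = np.sqrt(s2b * np.linalg.inv(Xb.T @ Xb)[1, 1])

        # Monte Carlo interval for the indirect effect a*b.
        draws = rng.normal(a, se_a, 100_000) * rng.normal(b, se_b, 100_000)
        lo, hi = np.percentile(draws, [2.5, 97.5])
        print(f"indirect effect a*b = {a * b:.3f}, 95% MC interval = [{lo:.3f}, {hi:.3f}]")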

  8. Testing the Hypothesis of the Multidimensional Model of Anorexia Nervosa in Adolescents.

    Science.gov (United States)

    Lyon, Maureen; Chatoor, Irene; Atkins, Darlene; Silber, Tomas; Mosimann, James; Gray, James

    1997-01-01

    Tested six hypothesized risk factors of a model for anorexia nervosa. Results confirmed three of the risk factors: family history of depression, feelings of ineffectiveness, and poor interoceptive awareness. Alcohol and drug abuse also figured prominently in the family history of patients with anorexia nervosa. (RJM)

  9. An Emerging Role for Numerical Modelling in Wildfire Behavior Research: Explorations, Explanations, and Hypothesis Development

    Science.gov (United States)

    Linn, R.; Winterkamp, J.; Canfield, J.; Sauer, J.; Dupuy, J. L.; Finney, M.; Hoffman, C.; Parsons, R.; Pimont, F.; Sieg, C.; Forthofer, J.

    2014-12-01

    The human capacity for altering the water cycle has been well documented and given the expected change due to population, income growth, biofuels, climate, and associated land use change, there remains great uncertainty in both the degree of increased pressure on land and water resources and in our ability to adapt to these changes. Alleviating regional shortages in water supply can be carried out in a spatial hierarchy through i) direct trade of water between all regions, ii) development of infrastructure to improve water availability within regions (e.g. impounding rivers), iii) via inter-basin hydrological transfer between neighboring regions and, iv) via virtual water trade. These adaptation strategies can be managed via market trade in water and commodities to identify those strategies most likely to be adopted. This work combines the physically-based University of New Hampshire Water Balance Model (WBM) with the macro-scale Purdue University Simplified International Model of agricultural Prices Land use and the Environment (SIMPLE) to explore the interaction of supply and demand for fresh water globally. In this work we use a newly developed grid cell-based version of SIMPLE to achieve a more direct connection between the two modeling paradigms of physically-based models with optimization-driven approaches characteristic of economic models. We explore questions related to the global and regional impact of water scarcity and water surplus on the ability of regions to adapt to future change. Allowing for a variety of adaptation strategies such as direct trade of water and expanding the built water infrastructure, as well as indirect trade in commodities, will reduce overall global water stress and, in some regions, significantly reduce their vulnerability to these future changes.

  10. Model Based Segmentation And Hypothesis Generation For The Recognition Of Printed Documents

    Science.gov (United States)

    Dengel, A.; Luhn, A.; Ueberreiter, B.

    1988-04-01

    The task of document recognition requires the scanning of a paper document and the analysis of its content and structure. The resulting electronic representation has to capture the content as well as the logic and layout structure of the document. The first step in the recognition process is scanning, filtering and binarization of the paper document. Based on the preprocessing results, we delineate key areas such as the address or signature of a letter, or the abstract of a report. This segmentation procedure uses a specific document layout model. The validity of this segmentation can be verified in a second step by using the results of more time-consuming procedures like text/graphic classification, optical character recognition (OCR) and the comparison with more elaborate models for specific document parts. Thus our concept of model-driven segmentation allows quick focusing of the analysis on important regions. The segmentation is able to operate directly on the raster image of a document without necessarily requiring CPU-intensive preprocessing steps for the whole document. A test version for the analysis of simple business letters has been implemented.

  11. A Dynamic Model for Limb Selection

    NARCIS (Netherlands)

    Cox, R.F.A; Smitsman, A.W.

    2008-01-01

    Two experiments and a model on limb selection are reported. In Experiment 1 left-handed and right-handed participants (N = 36) repeatedly used one hand for grasping a small cube. After a clear switch in the cube’s location, perseverative limb selection was revealed in both handedness groups...

  12. The Qualitative Expectations Hypothesis

    DEFF Research Database (Denmark)

    Frydman, Roman; Johansen, Søren; Rahbek, Anders

    2017-01-01

    We introduce the Qualitative Expectations Hypothesis (QEH) as a new approach to modeling macroeconomic and financial outcomes. Building on John Muth's seminal insight underpinning the Rational Expectations Hypothesis (REH), QEH represents the market's forecasts to be consistent with the predictions...... of an economist's model. However, by assuming that outcomes lie within stochastic intervals, QEH, unlike REH, recognizes the ambiguity faced by an economist and market participants alike. Moreover, QEH leaves the model open to ambiguity by not specifying a mechanism determining specific values that outcomes take...

  14. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, M.; Baker, N.A.; Duguid, J.O. [INTERA, Inc., Las Vegas, NV (United States)

    1994-04-04

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  15. Review and selection of unsaturated flow models

    International Nuclear Information System (INIS)

    Reeves, M.; Baker, N.A.; Duguid, J.O.

    1994-01-01

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  16. Graphical tools for model selection in generalized linear models.

    Science.gov (United States)

    Murray, K; Heritier, S; Müller, S

    2013-11-10

    Model selection techniques have existed for many years; however, to date, simple, clear and effective methods of visualising the model building process are scarce. This article describes graphical methods that assist in the selection of models and comparison of many different selection criteria. Specifically, we describe for logistic regression how to visualize measures of description loss and of model complexity to help resolve the model selection dilemma. We advocate the use of the bootstrap to assess the stability of selected models and to enhance our graphical tools. We demonstrate which variables are important using variable inclusion plots and show that these can be invaluable plots for the model building process. We show with two case studies how these proposed tools are useful to learn more about important variables in the data and how these tools can assist the understanding of the model building process. Copyright © 2013 John Wiley & Sons, Ltd.
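    The bootstrap variable-inclusion idea can be sketched as follows; this uses L1-penalized logistic regression over a penalty grid as a stand-in for the authors' exact procedure, and the dataset and grid choices are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical data: 8 candidate predictors, 3 of them informative.
X, y = make_classification(n_samples=300, n_features=8, n_informative=3,
                           n_redundant=0, random_state=1)

rng = np.random.default_rng(1)
Cs = np.logspace(-2, 1, 10)      # grid of inverse penalty strengths
B = 50                           # bootstrap replicates
inclusion = np.zeros((len(Cs), X.shape[1]))

for _ in range(B):
    idx = rng.integers(0, len(y), len(y))      # bootstrap resample
    for i, C in enumerate(Cs):
        fit = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        fit.fit(X[idx], y[idx])
        inclusion[i] += fit.coef_[0] != 0      # which variables enter the model

inclusion /= B
# Each column traces how often a variable is selected as the penalty relaxes;
# plotting these curves against Cs gives a simple variable inclusion plot.
for j in range(X.shape[1]):
    print(f"x{j}: inclusion frequency ranges "
          f"{inclusion[:, j].min():.2f}-{inclusion[:, j].max():.2f}")
```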

  17. Early animal models of rickets and proof of a nutritional deficiency hypothesis.

    Science.gov (United States)

    Chesney, Russell W

    2012-03-01

    In the period between 1880 and 1930, the role of nutrition and nutritional deficiency as a cause of rickets was established based upon the results from 6 animal models of rickets. This highly prevalent condition (60%-90% in some locales) among children of the industrialized world was an important clinical research topic. What had to be reconciled was that rickets was associated with infections, crowding, and living in northern latitudes, and cod liver oil was observed to prevent or cure the disease. Several brilliant insights opened up a new pathway to discovery using animal models of rickets. Studies in lion cubs, dogs, and rats showed the importance of cod liver oil and an antirachitic substance later termed vitamin D. They showed that fats in the diet were required, that vitamin D had a secosteroid structure and was different from vitamin A, and that ultraviolet irradiation could prevent or cure rickets. Several of these experiments had elements of serendipity in that certain dietary components and the presence or absence of sunshine or ultraviolet irradiation could critically change the course of rickets. Nonetheless, at the end of these studies, a nutritional deficiency of vitamin D resulting from a poor diet or lack of adequate sunshine was firmly established as a cause of rickets.

  18. Mild anastomotic stenosis in patient-specific CABG model may enhance graft patency: a new hypothesis.

    Directory of Open Access Journals (Sweden)

    Yunlong Huo

    Full Text Available It is well known that flow patterns at the anastomosis of coronary artery bypass graft (CABG) are complex and may affect the long-term patency. Various attempts at optimal designs of anastomosis have not improved long-term patency. Here, we hypothesize that mild anastomotic stenosis (area stenosis of about 40-60%) may be adaptive to enhance the hemodynamic conditions, which may contribute to slower progression of atherosclerosis. We further hypothesize that proximal/distal sites to the stenosis have converse changes that may be a risk factor for the diffuse expansion of atherosclerosis from the site of stenosis. Twelve (12) patient-specific models with various stenotic degrees were extracted from computed tomography images using a validated segmentation software package. A 3-D finite element model was used to compute flow patterns including wall shear stress (WSS) and its spatial and temporal gradients (WSS gradient, WSSG, and oscillatory shear index, OSI). The flow simulations showed that mild anastomotic stenosis significantly increased WSS (>15 dynes·cm−2) and decreased OSI (<0.02), resulting in a more uniform distribution of hemodynamic parameters inside the anastomosis, albeit proximal/distal sites to the stenosis show a decrease of WSS (<4 dynes·cm−2). These findings have significant implications for graft adaptation and long-term patency.

  19. Mild anastomotic stenosis in patient-specific CABG model may enhance graft patency: a new hypothesis.

    Science.gov (United States)

    Huo, Yunlong; Luo, Tong; Guccione, Julius M; Teague, Shawn D; Tan, Wenchang; Navia, José A; Kassab, Ghassan S

    2013-01-01

    It is well known that flow patterns at the anastomosis of coronary artery bypass graft (CABG) are complex and may affect the long-term patency. Various attempts at optimal designs of anastomosis have not improved long-term patency. Here, we hypothesize that mild anastomotic stenosis (area stenosis of about 40-60%) may be adaptive to enhance the hemodynamic conditions, which may contribute to slower progression of atherosclerosis. We further hypothesize that proximal/distal sites to the stenosis have converse changes that may be a risk factor for the diffuse expansion of atherosclerosis from the site of stenosis. Twelve (12) patient-specific models with various stenotic degrees were extracted from computed tomography images using a validated segmentation software package. A 3-D finite element model was used to compute flow patterns including wall shear stress (WSS) and its spatial and temporal gradients (WSS gradient, WSSG, and oscillatory shear index, OSI). The flow simulations showed that mild anastomotic stenosis significantly increased WSS (>15 dynes·cm−2) and decreased OSI (<0.02), resulting in a more uniform distribution of hemodynamic parameters inside the anastomosis, albeit proximal/distal sites to the stenosis show a decrease of WSS (<4 dynes·cm−2). These findings have significant implications for graft adaptation and long-term patency.

  20. The Long-Standing Antarctic Mantle Plume Hypothesis and Modeling Ongoing Glacial Isostatic Adjustment

    Science.gov (United States)

    Ivins, E. R.; Seroussi, H. L.; Wiens, D.; Larour, E. Y.

    2016-12-01

    Alkaline basalts of the Marie Byrd Land (MBL) have been interpreted as evidence of a mantle plume impinging on the lithosphere from below at about 85-80 Ma and again at 30-20 Ma. Because of the lack of structural and stratigraphic mapping due to ice sheet cover, and even a general lack of sufficient bottom topography, it is impossible to identify and classify the main characteristics of such a putative plume with respect to ones that are well studied, such as the Yellowstone or Raton hotspots. Recent POLENET seismic mapping has identified possible plume structures that could extend across the upper mantle beneath the Ruppert Coast (RC) in southeast MBL, and a possible plume beneath the Bentley Subglacial Trench (BST), some 1000 km to the southwest of RC, on the opposite side of MBL. Mapping of subglacial lakes via altimetry allows reconstruction of basal conditions that are consistent with melt generation rates and patterns of basal water routing. We extensively model the hotspot heat flux caused by a plume buried beneath the crust of the West Antarctic Ice Sheet (WAIS), employing a set of 3-D thermomechanical Stokes flow simulations with the Ice Sheet System Model (ISSM). We discover that a mantle upwelling structure beneath the BST, upstream of Subglacial Lake Whillans (SLW) and Whillans Ice Stream, is compatible with observations when the peak plume-related geothermal heat flux, qGHF, approaches 200 mW/m^2, rather consistent with heat flux measurements at the WISSARD core site, where heat flux probes penetrated the sediments of SLW. For a plume at RC, the ISSM predictions do allow a plume consistent with seismic mapping, but require the peak plume flux to be bounded above by qGHF ≤ 150 mW/m^2. New maps of the relatively slow upper mantle shear wave velocity beneath WAIS reveal that the slowest velocity corresponds to mantle below MBL. Using our new constraints on a 3-D plume interpretation of this slowness, we determine the perturbations to GIA modeling that are required...

  1. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built....... The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method...

  2. Quorum sensing in CD4+ T cell homeostasis: a hypothesis and a model.

    Directory of Open Access Journals (Sweden)

    Afonso R.M. Almeida

    2012-05-01

    Full Text Available Homeostasis of lymphocyte numbers is believed to be due to competition between cellular populations for a common niche of restricted size, defined by the combination of interactions and trophic factors required for cell survival. Here we propose a new mechanism: homeostasis of lymphocyte numbers could also be achieved by the ability of lymphocytes to perceive the density of their own populations. Such a mechanism would be reminiscent of the primordial quorum sensing systems used by bacteria, in which some bacteria sense the accumulation of bacterial metabolites secreted by other elements of the population, allowing them to count the number of cells present and adapt their growth accordingly. We propose that homeostasis of CD4+ T cell numbers may occur via a quorum-sensing-like mechanism, where IL-2 is produced by activated CD4+ T cells and sensed by a population of CD4+ Treg cells that expresses the high-affinity IL-2Rα-chain and can regulate the number of activated IL-2-producing CD4+ T cells and the total CD4+ T cell population. In other words, CD4+ T cell populations can restrain their growth by monitoring the number of activated cells, thus preventing uncontrolled lymphocyte proliferation during immune responses. We hypothesize that malfunction of this quorum-sensing mechanism may lead to uncontrolled T cell activation and autoimmunity. Finally, we present a mathematical model that describes the role of IL-2 and quorum-sensing mechanisms in CD4+ T cell homeostasis during an immune response.
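    A toy dynamical sketch of the proposed feedback loop is given below. The equations and rate constants are illustrative assumptions, not the authors' model; they only show how IL-2 secreted by activated cells and sensed by Tregs can cap the activated population.

```python
from scipy.integrate import solve_ivp
import numpy as np

def rhs(t, y):
    """Toy quorum-sensing loop (illustrative only): activated CD4+ T cells A
    secrete IL-2 I; Tregs R expand on IL-2 and suppress A."""
    A, R, I = y
    dA = 1.0 * A * (1 - A / 100) - 0.5 * R * A   # activation with crowding, Treg suppression
    dR = 0.2 * I * R - 0.1 * R                   # IL-2-driven Treg expansion vs. decay
    dI = 0.1 * A - 1.0 * I                       # IL-2 secretion and consumption/decay
    return [dA, dR, dI]

sol = solve_ivp(rhs, (0, 80), [1.0, 0.1, 0.0], dense_output=True)
for ti in np.linspace(0, 80, 9):
    A, R, I = sol.sol(ti)
    print(f"t={ti:5.1f}  activated={A:8.3f}  Treg={R:7.3f}  IL-2={I:6.3f}")
```

    At steady state the Treg equation pins IL-2 (and hence the activated pool) at a fixed level, which is the quorum-sensing intuition: the population is counted through the cytokine it secretes.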

  3. Selecting model complexity in learning problems

    Energy Technology Data Exchange (ETDEWEB)

    Buescher, K.L. [Los Alamos National Lab., NM (United States); Kumar, P.R. [Illinois Univ., Urbana, IL (United States). Coordinated Science Lab.

    1993-10-01

    To learn (or generalize) from noisy data, one must resist the temptation to pick a model for the underlying process that overfits the data. Many existing techniques solve this problem at the expense of requiring the evaluation of an absolute, a priori measure of each model's complexity. We present a method that does not. Instead, it uses a natural, relative measure of each model's complexity. This method first creates a pool of "simple" candidate models using part of the data and then selects from among these by using the rest of the data.

  4. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also...... tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR)...

  5. Melody Track Selection Using Discriminative Language Model

    Science.gov (United States)

    Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong

    In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using background model and posterior probability criteria to make modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.
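    A minimal sketch of the scoring idea follows, assuming add-k smoothed bigram models over MIDI pitch sequences and a log-likelihood ratio against a background model; the training sequences and smoothing constant are made up for illustration.

```python
import math
from collections import Counter

def bigram_counts(seqs):
    """Unigram and bigram counts over pitch sequences."""
    uni, bi = Counter(), Counter()
    for s in seqs:
        for a, b in zip(s[:-1], s[1:]):
            uni[a] += 1
            bi[(a, b)] += 1
    return uni, bi

def logprob(seq, uni, bi, vocab, k=1.0):
    """Add-k smoothed bigram log-probability per transition."""
    lp = 0.0
    for a, b in zip(seq[:-1], seq[1:]):
        lp += math.log((bi[(a, b)] + k) / (uni[a] + k * vocab))
    return lp / max(len(seq) - 1, 1)

# Hypothetical training data: pitch sequences from melody and accompaniment tracks.
melodies = [[60, 62, 64, 65, 67, 65, 64, 62], [67, 69, 71, 72, 71, 69]]
background = [[40, 40, 47, 40, 40, 47], [36, 43, 36, 43, 36, 43]]
vocab = 128  # MIDI pitch range

m_uni, m_bi = bigram_counts(melodies)
b_uni, b_bi = bigram_counts(background)

# Score each candidate track by the log-likelihood ratio against the
# background model; the highest-scoring track is selected as the melody.
candidates = {"track A": [60, 62, 64, 62, 60, 62, 64, 65],
              "track B": [36, 43, 36, 43, 36, 43, 36, 43]}
for name, seq in candidates.items():
    score = logprob(seq, m_uni, m_bi, vocab) - logprob(seq, b_uni, b_bi, vocab)
    print(name, round(score, 3))
```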

  6. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious...... representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data....

  7. Blue and green egg-color intensity is associated with parental effort and mating system in passerines: support for the sexual selection hypothesis.

    Science.gov (United States)

    Soler, Juan J; Moreno, Juan; Avilés, Jesús M; Møller, Anders P

    2005-03-01

    Among several adaptive explanations proposed to account for variation in avian egg color, that related to sexual selection is of particular interest because of its possible generality. Briefly, it proposes that because biliverdin (the pigment responsible for blue-green eggshell coloration) is an antioxidant, deposition in the eggshell by laying females may signal the capacity of females to control free radicals, despite the handicap of removing this antioxidant from their body. If males adjust parental effort in response to the intensity of the blue coloration of eggs, thereby investing more in the offspring of high-quality mates, blue eggs may represent a postmating sexually selected signal in females. Here, by image and spectrophotometric analyses of the eggs of European passerines, we tested two different predictions of the hypothesis. First, variables related to intraspecific variation in parental effort (i.e., the duration of the nestling period controlled for body mass) should be positively related to the intensity of blue-green color of the eggshell across species. Second, there should be a positive relationship between intensity of blue-green color of eggs and degree of polygyny. These predictions were supported: intensity of blue-green coloration (i.e., chroma) was significantly related to the duration of the nestling period and to degree of polygyny after controlling for possible confounding variables (i.e., body mass, incubation period, and nest type) and similarity due to common descent. Nest type (hole or nonhole) also explained a significant proportion of variation in egg chroma, perhaps reflecting different selection pressures (i.e., light conditions, risk of parasitism) affecting species with the two types of nests.

  8. On spatial mutation-selection models

    Energy Technology Data Exchange (ETDEWEB)

    Kondratiev, Yuri, E-mail: kondrat@math.uni-bielefeld.de [Fakultät für Mathematik, Universität Bielefeld, Postfach 100131, 33501 Bielefeld (Germany); Kutoviy, Oleksandr, E-mail: kutoviy@math.uni-bielefeld.de, E-mail: kutovyi@mit.edu [Fakultät für Mathematik, Universität Bielefeld, Postfach 100131, 33501 Bielefeld (Germany); Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 (United States); Minlos, Robert, E-mail: minl@iitp.ru; Pirogov, Sergey, E-mail: pirogov@proc.ru [IITP, RAS, Bolshoi Karetnyi 19, Moscow (Russian Federation)

    2013-11-15

    We discuss the selection procedure in the framework of mutation models. We study the regulation of stochastically developing systems based on a transformation of the initial Markov process which includes a cost functional. The transformation of the initial Markov process by the cost functional has an analytic realization in terms of a Kimura-Maruyama type equation for the time evolution of states or in terms of the corresponding Feynman-Kac formula on the path space. The state evolution of the system, including the limiting behavior, is studied for two types of mutation-selection models.

  9. Sparse model selection via integral terms

    Science.gov (United States)

    Schaeffer, Hayden; McCalla, Scott G.

    2017-08-01

    Model selection and parameter estimation are important for the effective integration of experimental data, scientific theory, and precise simulations. In this work, we develop a learning approach for the selection and identification of a dynamical system directly from noisy data. The learning is performed by extracting a small subset of important features from an overdetermined set of possible features using a nonconvex sparse regression model. The sparse regression model is constructed to fit the noisy data to the trajectory of the dynamical system while using the smallest number of active terms. Computational experiments detail the model's stability, robustness to noise, and recovery accuracy. Examples include nonlinear equations, population dynamics, chaotic systems, and fast-slow systems.
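    The flavor of such sparse selection can be sketched with sequentially thresholded least squares over a candidate feature library. This is a simple stand-in for the paper's nonconvex sparse regression; the toy system, library, and threshold are all assumptions.

```python
import numpy as np

def stls(Theta, dxdt, lam=0.2, iters=10):
    """Sequentially thresholded least squares: a simple sparse-regression
    heuristic for selecting active terms from a feature library."""
    Xi, _, _, _ = np.linalg.lstsq(Theta, dxdt, rcond=None)
    for _ in range(iters):
        Xi[np.abs(Xi) < lam] = 0.0           # drop small coefficients
        big = np.abs(Xi) >= lam
        if big.any():                        # refit on the surviving terms
            Xi[big], *_ = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)
    return Xi

# Toy system: dx/dt = -2x + 0.5x^3, observed with noise.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 400)
dxdt = -2 * x + 0.5 * x**3 + 0.01 * rng.normal(size=400)

# Overdetermined library of candidate terms.
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3, np.sin(x)])
Xi = stls(Theta, dxdt)
for name, c in zip(["1", "x", "x^2", "x^3", "sin(x)"], Xi):
    print(f"{name:6s}: {c: .3f}")
```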

  10. Adverse selection model regarding tobacco consumption

    Directory of Open Access Journals (Sweden)

    Dumitru MARIN

    2006-01-01

    Full Text Available The impact of introducing a tax on tobacco consumption can be studied through an adverse selection model. The objective of the model presented in the following is to characterize the optimal contractual relationship between the governmental authorities and the two types of employees, smokers and non-smokers, taking into account that the consumers' decision to smoke or not represents an element of risk and uncertainty. Two scenarios are run using the General Algebraic Modeling System software: one without taxes set on tobacco consumption and another one with taxes set on tobacco consumption, based on the adverse selection model described previously. The results of the two scenarios are compared at the end of the paper: the wage earnings levels and the social welfare in the case of a smoking agent and in the case of a non-smoking agent.

  11. Alzheimer's disease: the amyloid hypothesis and the Inverse Warburg effect

    KAUST Repository

    Demetrius, Lloyd A.

    2015-01-14

    Epidemiological and biochemical studies show that the sporadic forms of Alzheimer's disease (AD) are characterized by the following hallmarks: (a) an exponential increase with age; (b) selective neuronal vulnerability; (c) inverse cancer comorbidity. The present article appeals to these hallmarks to evaluate and contrast two competing models of AD: the amyloid hypothesis (a neuron-centric mechanism) and the Inverse Warburg hypothesis (a neuron-astrocytic mechanism). We show that these three hallmarks of AD conflict with the amyloid hypothesis, but are consistent with the Inverse Warburg hypothesis, a bioenergetic model which postulates that AD is the result of a cascade of three events: mitochondrial dysregulation, metabolic reprogramming (the Inverse Warburg effect), and natural selection. We also provide an explanation for the failures of the clinical trials based on amyloid immunization, and we propose a new class of therapeutic strategies consistent with the neuroenergetic selection model.

  12. Modeling and Selection of Software Service Variants

    OpenAIRE

    Wittern, John Erik

    2015-01-01

    Providers and consumers have to deal with variants, meaning alternative instances of a service's design, implementation, deployment, or operation, when developing or delivering software services. This work presents service feature modeling to deal with the associated challenges, comprising a language to represent software service variants and a set of methods for modeling and subsequent variant selection. This work's evaluation includes a proof-of-concept implementation and two real-life use cases.

  13. Model Selection in Data Analysis Competitions

    DEFF Research Database (Denmark)

    Wind, David Kofoed; Winther, Ole

    2014-01-01

    The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and, more recently, the competitions hosted on the online platform...

  14. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    ...in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...

  15. Hypothesis in research

    Directory of Open Access Journals (Sweden)

    Eudaldo Enrique Espinoza Freire

    2018-01-01

    Full Text Available This work aims to provide material covering the fundamental content that enables the university professor to formulate hypotheses for the development of an investigation, taking into account the problem to be solved. It was prepared by searching for information in primary documents, such as degree theses and reports of research results, selected on the basis of their relevance to the subject analyzed, currency, and reliability, and in secondary documents, such as scientific articles published in journals of recognized prestige, selected with the same criteria as the previous documents. It presents an updated conceptualization of the hypothesis, its characterization, and an analysis of the structure of the hypothesis, with emphasis on the determination of variables. The involvement of the university professor in the teaching-research process currently faces some difficulties, which are manifested, among other aspects, in an unstable balance between teaching and research that leads to a separation between them.

  16. Using a Simple Binomial Model to Assess Improvement in Predictive Capability: Sequential Bayesian Inference, Hypothesis Testing, and Power Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sigeti, David E. [Los Alamos National Laboratory; Pelak, Robert A. [Los Alamos National Laboratory

    2012-09-11

    We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric' in the case that the analysis shows that θ is close to 1/2, and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a...
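    Under a uniform prior, the posterior for θ in this binomial model is a Beta distribution, so the decision rule and a power-style analysis can be sketched in a few lines; the win counts, confidence threshold, and true θ below are hypothetical.

```python
import numpy as np
from scipy import stats

# Uniform Beta(1, 1) prior on theta, the probability that the new code
# out-predicts the old in a randomly chosen experiment.
a0, b0 = 1, 1
wins, n = 14, 20            # hypothetical: new code better in 14 of 20 trials

post = stats.beta(a0 + wins, b0 + n - wins)
conf = 1 - post.cdf(0.5)    # posterior probability that theta > 1/2
print(f"P(theta > 1/2 | data) = {conf:.3f}")
print(f"posterior sd ('plan B metric') = {post.std():.3f}")

# Pre-data check: for a given true theta, how often would n trials yield
# at least 95% posterior confidence that an improvement occurred?
rng = np.random.default_rng(0)
theta_true, trials = 0.65, 10_000
k = rng.binomial(n, theta_true, size=trials)
confident = (1 - stats.beta(a0 + k, b0 + n - k).cdf(0.5)) >= 0.95
print(f"P(reach 95% confidence | theta={theta_true}, n={n}) = {confident.mean():.3f}")
```

    Rerunning the last block with theta_true near 0.5 shows the region where identifying an improvement is highly improbable, as the abstract describes.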

  17. Model selection criterion in survival analysis

    Science.gov (United States)

    Karabey, Uǧur; Tutkun, Nihal Ata

    2017-07-01

    Survival analysis deals with the time until occurrence of an event of interest such as death, recurrence of an illness, the failure of equipment or divorce. There are various survival models with semi-parametric or parametric approaches used in the medical, natural or social sciences. The decision on the most appropriate model for the data is an important point of the analysis. In the literature, the Akaike information criterion or the Bayesian information criterion is used to select among nested models. In this study, the behavior of these information criteria is discussed for a real data set.
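    A minimal sketch of this kind of comparison follows: exponential and Weibull models are fit to right-censored data by maximum likelihood and compared via AIC and BIC. The data are simulated, and using the event count as the BIC sample size is one convention among several.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_negloglik(params, t, event):
    """Right-censored Weibull negative log-likelihood; exponential is shape k = 1."""
    k, lam = np.exp(params)                 # optimize on the log scale
    z = (t / lam) ** k
    logf = np.log(k / lam) + (k - 1) * np.log(t / lam) - z   # density for events
    logS = -z                                                # survival for censored
    return -np.sum(np.where(event, logf, logS))

rng = np.random.default_rng(0)
t_true = rng.weibull(1.5, 300) * 10.0       # hypothetical event times
c = rng.uniform(0, 15, 300)                 # censoring times
t = np.minimum(t_true, c)
event = t_true <= c

for name, x0 in [("exponential", [0.0]), ("weibull", [0.0, 0.0])]:
    if name == "exponential":
        nll = lambda p: weibull_negloglik(np.array([0.0, p[0]]), t, event)
    else:
        nll = lambda p: weibull_negloglik(p, t, event)
    fit = minimize(nll, x0, method="Nelder-Mead")
    n_par = len(x0)
    aic = 2 * n_par + 2 * fit.fun
    bic = n_par * np.log(event.sum()) + 2 * fit.fun
    print(f"{name}: AIC={aic:.1f}, BIC={bic:.1f}")
```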

  18. On Using Selection Procedures with Binomial Models.

    Science.gov (United States)

    1983-10-01

    References cited in the report include Gupta, S. S. and Sobel, M. (1960), "Selecting a subset containing the best of several binomial populations."

  19. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available We present a new technique for model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities and model averaging in Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature in our algorithm is the fact that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.

  20. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-09-10

    Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer ground-water flow models; to conduct performance assessments; and to develop performance assessment models, where necessary. In the area of scientific modeling, the CRWMS M&O has the following responsibilities: to provide overall management and integration of modeling activities; to provide a framework for focusing modeling and model development; to identify areas that require increased or decreased emphasis; and to ensure that the tools necessary to conduct performance assessment are available. These responsibilities are being initiated through a three-step process. It consists of a thorough review of existing models, testing of the models that best fit the established requirements, and recommendations for future development. Future model enhancement will then focus on the models selected during this activity. Furthermore, in order to manage future model development, particularly in those areas requiring substantial enhancement, the three-step process will be updated and reported periodically in the future.

  1. NonpModelCheck: An R Package for Nonparametric Lack-of-Fit Testing and Variable Selection

    Directory of Open Access Journals (Sweden)

    Adriano Zanin Zambom

    2017-05-01

    Full Text Available We describe the R package NonpModelCheck for hypothesis testing and variable selection in nonparametric regression. This package implements functions to perform hypothesis testing for the significance of a predictor or a group of predictors in a fully nonparametric heteroscedastic regression model using high-dimensional one-way ANOVA. Based on the p values from the test of each covariate, three different algorithms allow the user to perform variable selection using false discovery rate corrections. A function for classical local polynomial regression is implemented for the multivariate context, where the degree of the polynomial can be as large as needed and bandwidth selection strategies are built in.

  2. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data....

  3. Skewed factor models using selection mechanisms

    KAUST Repository

    Kim, Hyoung-Moon

    2015-12-21

    Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models, depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.

  4. Chemical identification using Bayesian model selection

    Energy Technology Data Exchange (ETDEWEB)

    Burr, Tom; Fry, H. A. (Herbert A.); McVey, B. D. (Brian D.); Sander, E. (Eric)

    2002-01-01

    Remote detection and identification of chemicals in a scene is a challenging problem. We introduce an approach that uses some of the image's pixels to establish the background characteristics while other pixels represent the target for which we seek to identify all chemical species present. This leads to a generalized least squares problem in which we focus on 'subset selection' to identify the chemicals thought to be present. Bayesian model selection allows us to approximate the posterior probability that each chemical in the library is present by adding the posterior probabilities of all the subsets which include the chemical. We present results using realistic simulated data for the case with 1 to 5 chemicals present in each target and compare performance to a hybrid forward-backward stepwise selection procedure using the F statistic.
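    A small sketch of the subset-posterior idea follows, using a BIC-style approximation to each subset's marginal likelihood over a hypothetical spectral library; the library, noise level, and uniform subset prior are assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Hypothetical spectral library: 6 chemicals x 50 wavelengths.
n_chem, n_wave = 6, 50
library = rng.normal(size=(n_chem, n_wave))
truth = [1, 4]                                   # chemicals actually present
y = library[truth].sum(axis=0) + 0.1 * rng.normal(size=n_wave)

def log_evidence(subset):
    """BIC-style approximation to the log marginal likelihood of a subset."""
    X = library[list(subset)].T
    coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coef) ** 2)
    return -0.5 * n_wave * np.log(rss / n_wave) - 0.5 * len(subset) * np.log(n_wave)

# Enumerate subsets of size 1 to 4 and normalize to posterior probabilities.
subsets = [s for r in range(1, 5) for s in combinations(range(n_chem), r)]
logp = np.array([log_evidence(s) for s in subsets])
p = np.exp(logp - logp.max())
p /= p.sum()

# Per-chemical posterior = sum of probabilities of the subsets containing it.
for c in range(n_chem):
    prob = sum(pi for s, pi in zip(subsets, p) if c in s)
    print(f"chemical {c}: P(present) = {prob:.3f}")
```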

  5. Expatriates Selection: An Essay of Model Analysis

    Directory of Open Access Journals (Sweden)

    Rui Bártolo-Ribeiro

    2015-03-01

    Full Text Available Business expansion into geographical areas with cultures different from those in which organizations were created and developed leads to the expatriation of employees to these destinations. Recruitment and selection procedures for expatriates do not always have the intended success, leading to an early return of these professionals with consequent organizational disorder. In this study, several articles published in the last five years were analyzed in order to identify the dimensions most frequently mentioned in the selection of expatriates in terms of success and failure. The characteristics in the selection process that may improve prediction of expatriates' adaptation to the new cultural contexts of the organization were studied according to the KSAOs model. Few references to the Knowledge, Skills and Abilities dimensions were found in the analyzed papers. There was a strong predominance of the evaluation of Other Characteristics, and dispositional factors were given more importance than situational factors in promoting the integration of expatriates.

  6. The childhood maltreatment influences on breast cancer patients: A second wave hit model hypothesis for distinct biological and behavioral response.

    Science.gov (United States)

    Bandinelli, Lucas Poitevin; Levandowski, Mateus Luz; Grassi-Oliveira, Rodrigo

    2017-10-01

    Stress and cancer are two complex situations involving different biological and psychological mechanisms. Their relationship has long been studied, and there is evidence of the impact stress has on both disease development and progression. Furthermore, early stress has been studied as an important factor associated with this relationship, since its impact on immune, endocrine and cognitive development throughout life is already known. Therefore, understanding early stress as a first wave of stress in life is necessary in order to explore a possible second wave hit model. From this perspective, we believe that breast cancer can be understood as a second wave of stress during development and that, in addition to the first wave, it can cause important impacts on the response to cancer treatment, such as increased chances of disease progression and distinct behavioral responses. In this article we propose a second wave hit hypothesis applied to breast cancer and its implications for the immune, endocrine and cognitive systems, through mechanisms that involve the HPA axis and subsequent activations of stress responses. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Reserve selection using nonlinear species distribution models.

    Science.gov (United States)

    Moilanen, Atte

    2005-06-01

    Reserve design is concerned with optimal selection of sites for new conservation areas. Spatial reserve design explicitly considers the spatial pattern of the proposed reserve network and the effects of that pattern on reserve cost and/or ability to maintain species there. The vast majority of reserve selection formulations have assumed a linear problem structure, which effectively means that the biological value of a potential reserve site does not depend on the pattern of selected cells. However, spatial population dynamics and autocorrelation cause the biological values of neighboring sites to be interdependent. Habitat degradation may have indirect negative effects on biodiversity in areas neighboring the degraded site as a result of, for example, negative edge effects or lower permeability for animal movement. In this study, I present a formulation and a spatial optimization algorithm for nonlinear reserve selection problems in grid-based landscapes that accounts for interdependent site values. The method is demonstrated using habitat maps and nonlinear habitat models for threatened birds in the Netherlands, and it is shown that near-optimal solutions are found for regions consisting of up to hundreds of thousands of grid cells, a landscape size much larger than those commonly attempted even with linear reserve selection formulations.

  8. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    Full Text Available In this paper, the behavioral construct of suitability is used to develop a multicriteria decision making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. An analytical hierarchy process technique is used to model the suitability considerations with a view to obtaining the suitability performance score in respect of each asset. A fuzzy multiple criteria decision making method is used to obtain the financial quality score of each asset based upon the investor's rating on the financial criteria. Two optimization models are developed for optimal asset allocation, considering financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India to demonstrate the effectiveness of the proposed methodology.

  9. Multi-dimensional model order selection

    Directory of Open Access Journals (Sweden)

    Roemer Florian

    2011-01-01

    Full Text Available Multi-dimensional model order selection (MOS) techniques achieve an improved accuracy, reliability, and robustness, since they consider all dimensions jointly during the estimation of parameters. Additionally, from fundamental identifiability results of multi-dimensional decompositions, it is known that the number of main components can be larger when compared to matrix-based decompositions. In this article, we show how to use tensor calculus to extend matrix-based MOS schemes and we also present our proposed multi-dimensional model order selection scheme based on the closed-form PARAFAC algorithm, which is only applicable to multi-dimensional data. In general, as shown by means of simulations, the Probability of correct Detection (PoD) of our proposed multi-dimensional MOS schemes is much better than the PoD of matrix-based schemes.

  10. A simple parametric model selection test

    OpenAIRE

    Susanne M. Schennach; Daniel Wilhelm

    2014-01-01

    We propose a simple model selection test for choosing between two parametric likelihoods which can be applied in the most general setting without any assumptions on the relation between the candidate models and the true distribution. That is, both, one or neither is allowed to be correctly specified or misspecified; they may be nested, non-nested, strictly non-nested or overlapping. Unlike in previous testing approaches, no pre-testing is needed, since in each case the same test statistic to...

  11. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
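    For reference, the classical (non-robust) two-stage estimator discussed above can be sketched as follows; the data-generating process is hypothetical, and the second-stage standard errors printed by OLS are left uncorrected, so the sketch is for point estimates only.

```python
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Hypothetical data with sample selection: outcomes observed only when selected.
z = rng.normal(size=n)                       # exclusion restriction (selection only)
x = rng.normal(size=n)
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T
select = (0.5 + 1.0 * z + 0.5 * x + u) > 0   # selection equation
y = 1.0 + 2.0 * x + e                        # outcome equation (true beta_x = 2)

# Stage 1: probit for selection, then the inverse Mills ratio.
probit = sm.Probit(select.astype(float),
                   sm.add_constant(np.column_stack([z, x]))).fit(disp=0)
xb = probit.fittedvalues                     # linear index from the probit
imr = norm.pdf(xb) / norm.cdf(xb)

# Stage 2: OLS on the selected sample with the IMR as an extra regressor.
X2 = sm.add_constant(np.column_stack([x[select], imr[select]]))
ols = sm.OLS(y[select], X2).fit()
print(ols.params)   # [intercept, beta_x, lambda]; beta_x should be near 2
```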

  12. Novel metrics for growth model selection.

    Science.gov (United States)

    Grigsby, Matthew R; Di, Junrui; Leroux, Andrew; Zipunnikov, Vadim; Xiao, Luo; Crainiceanu, Ciprian; Checkley, William

    2018-01-01

    Literature surrounding the statistical modeling of childhood growth data involves a diverse set of potential models from which investigators can choose. However, the lack of a comprehensive framework for comparing non-nested models leads to difficulty in assessing model performance. This paper proposes a framework for comparing non-nested growth models using novel metrics of predictive accuracy based on modifications of the mean squared error criteria. Three metrics were created: normalized, age-adjusted, and weighted mean squared error (MSE). Predictive performance metrics were used to compare linear mixed effects models and functional regression models. Prediction accuracy was assessed by partitioning the observed data into training and test datasets. This partitioning was constructed to assess prediction accuracy for backward (i.e., early growth), forward (i.e., late growth), in-range, and on new-individuals. Analyses were done with height measurements from 215 Peruvian children with data spanning from near birth to 2 years of age. Functional models outperformed linear mixed effects models in all scenarios tested. In particular, prediction errors for functional concurrent regression (FCR) and functional principal component analysis models were approximately 6% lower when compared to linear mixed effects models. When we weighted subject-specific MSEs according to subject-specific growth rates during infancy, we found that FCR was the best performer in all scenarios. With this novel approach, we can quantitatively compare non-nested models and weight subgroups of interest to select the best performing growth model for a particular application or problem at hand.
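    One plausible reading of the three metrics, applied to simulated height data, is sketched below; the exact definitions in the paper may differ, and the binning and weighting choices here are assumptions.

```python
import numpy as np

def mse_metrics(y_true, y_pred, age, weights=None):
    """Normalized, age-adjusted, and weighted MSE variants (one plausible
    reading of the three criteria, not necessarily the authors' exact forms)."""
    err2 = (y_true - y_pred) ** 2
    normalized = err2.mean() / np.var(y_true)
    # Age-adjusted: average the within-age-bin MSEs so dense ages don't dominate.
    bins = np.digitize(age, np.quantile(age, [0.25, 0.5, 0.75]))
    age_adjusted = np.mean([err2[bins == b].mean() for b in np.unique(bins)])
    # Weighted: subject-level weights, e.g. proportional to infancy growth rate.
    w = np.ones_like(err2) if weights is None else weights
    weighted = np.sum(w * err2) / np.sum(w)
    return normalized, age_adjusted, weighted

# Hypothetical height trajectories (cm) vs. model predictions.
rng = np.random.default_rng(0)
age = rng.uniform(0, 24, 500)                       # months
height = 50 + 1.5 * age - 0.02 * age**2 + rng.normal(0, 1, 500)
pred = 50 + 1.4 * age - 0.015 * age**2
growth_rate_w = 1 + rng.uniform(0, 1, 500)          # stand-in growth-rate weights
print(mse_metrics(height, pred, age, growth_rate_w))
```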

  13. The Gateway Hypothesis, Common Liability to Addictions or the Route of Administration Model A Modelling Process Linking the Three Theories.

    Science.gov (United States)

    Mayet, Aurélie; Legleye, Stéphane; Beck, François; Falissard, Bruno; Chau, Nearkasen

    2016-01-01

    The aim of this study was to describe the transitions between tobacco (T), cannabis (C) and other illicit drugs (OIDs) initiations, to simultaneously explore several substance use theories: gateway theory (GT), common liability model (CLM) and route of administration model (RAM). Data from 2 French nationwide surveys conducted in 2005 and 2010 were used (16,421 subjects aged 18-34). Using reported ages at initiations, we reconstituted a retrospective cohort describing all initiation sequences between T, C and OID. Transition probabilities between the substances were computed using a Markov multi-state model that also tested the effect of 2 latent variables (item response theory scores reflecting propensity for early onset and further substance use) on all transitions. T initiation was associated with increased likelihood of subsequent C initiation, but the reverse relationship was also observed. While the most likely initiation sequence among subjects who initiated the 3 groups of substances was the 'gateway' sequence T x2192; C x2192; OID, this pattern was not associated with substance use propensity more than alternative sequences. Early use propensity was associated with the 'gateway' sequence but also with some alternative ones beginning with T, C or OID. If the gateway sequence appears as the most likely pattern, in line with GT, the effects of early onset and substance use propensities were also observed for some alternative sequences, which is more in line with CLM. RAM could explain reciprocal interactions observed between T and C. This suggests shared influences of individual (personality traits) and environmental (substance availability, peer influence) characteristics. © 2015 S. Karger AG, Basel.

  14. Differences between selection on sex versus recombination in red queen models with diploid hosts.

    Science.gov (United States)

    Agrawal, Aneil F

    2009-08-01

    The Red Queen hypothesis argues that parasites generate selection for genetic mixing (sex and recombination) in their hosts. A number of recent papers have examined this hypothesis using models with haploid hosts. In these haploid models, sex and recombination are selectively equivalent. However, sex and recombination are not equivalent in diploids because selection on sex depends on the consequences of segregation as well as recombination. Here I compare how parasites select on modifiers of sexual reproduction and modifiers of recombination rate. Across a wide set of parameters, parasites tend to select against both sex and recombination, though recombination is favored more often than is sex. There is little correspondence between the conditions favoring sex and those favoring recombination, indicating that the direction of selection on sex is often determined by the effects of segregation, not recombination. Moreover, when sex was favored it is usually due to a long-term advantage whereas short-term effects are often responsible for selection favoring recombination. These results strongly indicate that Red Queen models focusing exclusively on the effects of recombination cannot be used to infer the type of selection on sex that is generated by parasites on diploid hosts.

  15. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model evidence, the more support the model receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with...
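    The core of the estimator can be illustrated in one dimension: fit a Gaussian mixture to posterior draws and use it as an importance-sampling proposal for the marginal likelihood. The conjugate toy problem below stands in for DREAM output and has a closed-form evidence for checking; this is a sketch of the idea, not the GMIS implementation.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy inference problem: y ~ N(theta, 1) with a N(0, 2^2) prior on theta.
y = rng.normal(1.0, 1.0, size=20)

def log_post_unnorm(theta):
    """Log of likelihood times prior, evaluated for an array of theta values."""
    lik = stats.norm.logpdf(y[:, None], theta, 1.0).sum(axis=0)
    return lik + stats.norm.logpdf(theta, 0.0, 2.0)

# Stand-in for DREAM output: posterior samples (drawn exactly here, since
# this conjugate problem has a closed-form Gaussian posterior).
post_var = 1.0 / (1.0 / 4.0 + len(y))
post_mean = post_var * y.sum()
draws = rng.normal(post_mean, np.sqrt(post_var), size=5000)

# Fit a Gaussian mixture to the draws and use it as an importance proposal.
gm = GaussianMixture(n_components=2, random_state=0).fit(draws.reshape(-1, 1))
prop = gm.sample(20000)[0].ravel()
log_w = log_post_unnorm(prop) - gm.score_samples(prop.reshape(-1, 1))
evidence = np.exp(log_w - log_w.max()).mean() * np.exp(log_w.max())

# Analytic evidence for this conjugate model, for comparison.
exact = np.exp(stats.multivariate_normal.logpdf(y, np.zeros(len(y)),
                                                np.eye(len(y)) + 4.0))
print(f"IS estimate = {evidence:.3e}, exact = {exact:.3e}")
```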

  16. Disentangling community functional components in a litter-macrodetritivore model system reveals the predominance of the mass ratio hypothesis

    NARCIS (Netherlands)

    Bila, K.; Moretti, M.; de Bello, F.; Dias, A.T.C.; Pezzatti, G.B.; van Oosten, A.R.; Berg, M.P.

    2014-01-01

    Recent investigations have shown that two components of community trait composition are important for key ecosystem processes: (i) the community-weighted mean trait value (CWM), related to the mass ratio hypothesis and dominant trait values in the community, and (ii) functional diversity (FD)...

  17. Model selection and comparison for independent sinusoids

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2014-01-01

    In the signal processing literature, many methods have been proposed for estimating the number of sinusoidal basis functions from a noisy data set. The most popular method is the asymptotic MAP criterion, which is sometimes also referred to as the BIC. In this paper, we extend and improve this method by considering the problem in a full Bayesian framework instead of the approximate formulation on which the asymptotic MAP criterion is based. This leads to a new model selection and comparison method, the lp-BIC, whose computational complexity is of the same order as the asymptotic MAP criterion. Through simulations, we demonstrate that the lp-BIC outperforms the asymptotic MAP criterion and other state-of-the-art methods in terms of model selection, de-noising and prediction performance. The simulation code is available online.

  18. Comparative Study on the Selection Criteria for Fitting Flood Frequency Distribution Models with Emphasis on Upper-Tail Behavior

    Directory of Open Access Journals (Sweden)

    Xiaohong Chen

    2017-05-01

    The upper tail of a flood frequency distribution is of particular concern for flood control. However, different model selection criteria often give different optimal distributions when the focus is on the upper tail of the distribution. With emphasis on upper-tail behavior, five distribution selection criteria, including two hypothesis tests and three information-based criteria, are evaluated in selecting the best fitted distribution from eight widely used distributions, using datasets from the Thames River, Wabash River, Beijiang River and Huai River. The performance of the five selection criteria is verified by using a composite criterion with focus on upper-tail events. This paper demonstrates an approach for optimally selecting suitable flood frequency distributions. Results illustrate that (1) the hypothesis-test and information-based approaches select different frequency distributions in the four rivers. Hypothesis tests are more likely to choose complex, parametric models, and information-based criteria prefer to choose simple, effective models. Different selection criteria have no particular tendency toward the tail of the distribution; (2) the information-based criteria perform better than hypothesis tests in most cases when the focus is on the goodness of predictions of the extreme upper-tail events. The distributions selected by information-based criteria are more likely to be close to true values than the distributions selected by hypothesis test methods in the upper tail of the frequency curve; (3) the proposed composite criterion not only can select the optimal distribution, but also can evaluate the error of the estimated value, which often plays an important role in risk assessment and engineering design. In order to decide on a particular distribution to fit the high flow, it would be better to use the composite criterion.
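
    The flavor of this comparison can be reproduced in a few lines. The sketch below uses synthetic annual-maxima data and an illustrative candidate set (not the four rivers' records): each distribution is fit by maximum likelihood, then ranked by a hypothesis test (Kolmogorov-Smirnov) and by an information criterion (AIC).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Synthetic "annual maximum flow" series drawn from a GEV distribution.
        peaks = stats.genextreme.rvs(-0.1, loc=1000, scale=300, size=80, random_state=rng)

        candidates = {
            "GEV": stats.genextreme,
            "Gumbel": stats.gumbel_r,
            "Gamma": stats.gamma,
            "Lognormal": stats.lognorm,
        }

        for name, dist in candidates.items():
            params = dist.fit(peaks)                   # maximum likelihood fit
            ll = dist.logpdf(peaks, *params).sum()     # maximized log-likelihood
            aic = 2 * len(params) - 2 * ll             # penalize parameter count
            ks = stats.kstest(peaks, dist.cdf, args=params)
            print(f"{name:10s} AIC={aic:8.1f}  KS p-value={ks.pvalue:.3f}")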

  19. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
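
    A minimal sketch of the p >> n setting on simulated data: the LASSO, with its penalty chosen by cross-validation, recovers a handful of truly active variables among a thousand candidates. (The Graphical LASSO and TREX mentioned above are separate estimators and are not shown here.)

        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(42)
        n, p = 50, 1000                          # far more variables than samples
        X = rng.normal(size=(n, p))
        beta = np.zeros(p)
        beta[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]   # only 5 truly active variables
        y = X @ beta + rng.normal(scale=0.5, size=n)

        model = LassoCV(cv=5).fit(X, y)          # penalty chosen by cross-validation
        support = np.flatnonzero(model.coef_)
        print("selected variables:", support)    # ideally close to {0, 1, 2, 3, 4}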

  20. Architecture-based multiscale computational modeling of plant cell wall mechanics to examine the hydrogen-bonding hypothesis of the cell wall network structure model.

    Science.gov (United States)

    Yi, Hojae; Puri, Virendra M

    2012-11-01

    A primary plant cell wall network was computationally modeled using the finite element approach to study the hypothesis of hemicellulose (HC) tethering with the cellulose microfibrils (CMFs) as one of the major load-bearing mechanisms of the growing cell wall. A computational primary cell wall network fragment (10 × 10 μm) comprising typical compositions and properties of CMFs and HC was modeled with well-aligned CMFs. The tethering of HC to CMFs is modeled in accordance with the strength of the hydrogen bonding by implementing a specific load-bearing connection (i.e. the joint element). The introduction of the CMF-HC interaction to the computational cell wall network model is a key to the quantitative examination of the mechanical consequences of cell wall structure models, including the tethering HC model. When the cell wall network models with and without joint elements were compared, the hydrogen bond exhibited a significant contribution to the overall stiffness of the cell wall network fragment. When the cell wall network model was stretched 1% in the transverse direction, the tethering of CMF-HC via hydrogen bonds was not strong enough to maintain its integrity. When the cell wall network model was stretched 1% in the longitudinal direction, the tethering provided comparable strength to maintain its integrity. This substantial anisotropy suggests that the HC tethering with hydrogen bonds alone does not manifest sufficient energy to maintain the integrity of the cell wall during its growth (i.e. other mechanisms are present to ensure the cell wall shape).

  1. Selecting a model of supersymmetry breaking mediation

    International Nuclear Information System (INIS)

    AbdusSalam, S. S.; Allanach, B. C.; Dolan, M. J.; Feroz, F.; Hobson, M. P.

    2009-01-01

    We study the problem of selecting between different mechanisms of supersymmetry breaking in the minimal supersymmetric standard model using current data. We evaluate the Bayesian evidence of four supersymmetry breaking scenarios: mSUGRA, mGMSB, mAMSB, and moduli mediation. The results show a strong dependence on the dark matter assumption. Using the inferred cosmological relic density as an upper bound, minimal anomaly mediation is at least moderately favored over the CMSSM. Our fits also indicate that evidence for a positive sign of the μ parameter is moderate at best. We present constraints on the anomaly and gauge mediated parameter spaces and some previously unexplored aspects of the dark matter phenomenology of the moduli mediation scenario. We use sparticle searches, indirect observables and dark matter observables in the global fit and quantify robustness with respect to prior choice. We quantify how much information is contained within each constraint.

  2. Hidden Markov Model for Stock Selection

    Directory of Open Access Journals (Sweden)

    Nguyet Nguyen

    2015-10-01

    The hidden Markov model (HMM) is typically used to predict the hidden regimes of observation data. Therefore, this model finds applications in many different areas, such as speech recognition systems, computational molecular biology and financial market predictions. In this paper, we use the HMM for stock selection. We first use the HMM to make monthly regime predictions for four macroeconomic variables: inflation (the consumer price index, CPI), the industrial production index (INDPRO), the stock market index (S&P 500) and market volatility (VIX). At the end of each month, we calibrate the HMM's parameters for each of these economic variables and predict its regimes for the next month. We then look back into historical data to find the time periods for which the four variables had regimes similar to the forecasted regimes. Within those similar periods, we analyze all of the S&P 500 stocks to identify which stock characteristics have been well rewarded during the time periods and assign scores and corresponding weights for each of the stock characteristics. A composite score of each stock is calculated based on the scores and weights of its features. Based on this algorithm, we choose the 50 top-ranking stocks to buy. We compare the performance of the portfolio with the benchmark index, the S&P 500. With an initial investment of $100 in December 1999, over 15 years, in December 2014, our portfolio had an average gain per annum of 14.9% versus 2.3% for the S&P 500.
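
    The regime-prediction step can be sketched as follows, assuming the third-party hmmlearn package and a simulated series in place of the macroeconomic data; the two-regime setup and the one-step transition forecast are our own illustrative choices, not the paper's exact calibration.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(7)
        # Simulated monthly series switching between a calm and a volatile regime.
        calm = rng.normal(0.01, 0.02, size=120)
        vol = rng.normal(-0.01, 0.06, size=60)
        series = np.concatenate([calm, vol, calm]).reshape(-1, 1)

        hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200)
        hmm.fit(series)                      # calibrate parameters on the history
        regimes = hmm.predict(series)        # most likely regime for each month
        print("current regime:", regimes[-1])
        # Forecast next month's regime as the most likely transition from today's.
        print("next-month regime:", np.argmax(hmm.transmat_[regimes[-1]]))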

  3. Psyche Mission: Scientific Models and Instrument Selection

    Science.gov (United States)

    Polanskey, C. A.; Elkins-Tanton, L. T.; Bell, J. F., III; Lawrence, D. J.; Marchi, S.; Park, R. S.; Russell, C. T.; Weiss, B. P.

    2017-12-01

    NASA has chosen to explore (16) Psyche with their 14th Discovery-class mission. Psyche is a 226-km diameter metallic asteroid hypothesized to be the exposed core of a planetesimal that was stripped of its rocky mantle by multiple hit and run collisions in the early solar system. The spacecraft launch is planned for 2022 with arrival at the asteroid in 2026 for 21 months of operations. The Psyche investigation has five primary scientific objectives: A. Determine whether Psyche is a core, or if it is unmelted material. B. Determine the relative ages of regions of Psyche's surface. C. Determine whether small metal bodies incorporate the same light elements as are expected in the Earth's high-pressure core. D. Determine whether Psyche was formed under conditions more oxidizing or more reducing than Earth's core. E. Characterize Psyche's topography. The mission's task was to select the appropriate instruments to meet these objectives. However, exploring a metal world, rather than one made of ice, rock, or gas, requires development of new scientific models for Psyche to support the selection of the appropriate instruments for the payload. If Psyche is indeed a planetary core, we expect that it should have a detectable magnetic field. However, the strength of the magnetic field can vary by orders of magnitude depending on the formational history of Psyche. The implications of both the extreme low-end and the high-end predictions impact the magnetometer and mission design. For the imaging experiment, what can the team expect for the morphology of a heavily impacted metal body? Efforts are underway to further investigate the differences in crater morphology between high velocity impacts into metal and rock to be prepared to interpret the images of Psyche when they are returned. Finally, elemental composition measurements at Psyche using nuclear spectroscopy encompass a new and unexplored phase space of gamma-ray and neutron measurements. We will present some end

  4. Occurrence of tributyltin (TBT)-resistant bacteria is not related to TBT pollution in Mekong River and coastal sediment: with a hypothesis of selective pressure from suspended solid.

    Science.gov (United States)

    Suehiro, Fujiyo; Mochizuki, Hiroko; Nakamura, Shinji; Iwata, Hisato; Kobayashi, Takeshi; Tanabe, Shinsuke; Fujimori, Yoshifumi; Nishimura, Fumitake; Tuyen, Bui Cach; Tana, Touch Seang; Suzuki, Satoru

    2007-07-01

    Tributyltin (TBT) is an organotin compound that is toxic to aquatic life ranging from bacteria to mammals. This study examined the concentration of TBT in sediment from and near the Mekong River and the distribution of TBT-resistant bacteria. The levels of TBT in the sediment and of TBT-resistant bacteria were unrelated, and chemicals other than TBT might induce TBT resistance. TBT-resistant bacteria were more abundant in the dry season than in the rainy season. Differences in the selection process of TBT-resistant bacteria between dry and rainy seasons were examined using an advection-diffusion model of suspended solid (SS) that conveys chemicals. The estimated dilution-diffusion time over a distance of 120 km downstream from a release site was 20 days during the dry season and 5 days during the rainy season, suggesting that bacteria at the sediment surface could be exposed to SS for longer periods during the dry season.

  5. High-Temperature Isomerization of Benzenoid Polycyclic Aromatic Hydrocarbons. Analysis through the Bent Bond and Antiperiplanar Hypothesis Orbital Model.

    Science.gov (United States)

    Parent, Jean-François; Deslongchamps, Pierre

    2018-03-16

    L. T. Scott discovered the 1,2-swapping of carbon and hydrogen atoms that is known to take place on benzenoid aromatics (up to the ∼1000 °C range). For example, 13C-1-naphthalene is specifically converted to 13C-2-naphthalene, and there is evidence that this occurs through the formation of benzofulvene and a naphthalene-carbene intermediate. Application of the bent bond/antiperiplanar hypothesis leads to the postulate that higher-energy pyramidal singlet diradical intermediates can be used to propose a mechanism that rationalizes various atom rearrangements on benzenoid aromatics and related isomeric compounds.

  6. Physiopathological Hypothesis of Cellulite

    OpenAIRE

    de Godoy, José Maria Pereira; de Godoy, Maria de Fátima Guerreiro

    2009-01-01

    A series of questions are asked concerning this condition, including its name, the consensus about the histopathological findings, the physiological hypothesis and the treatment of the disease. We established a hypothesis for cellulite and confirmed that the clinical response is compatible with this hypothesis. Hence this novel approach brings a modern physiological concept with a physiopathologic basis and clinical proof of the hypothesis. We emphasize that the choice of patient, correct ...

  7. Variability: A Pernicious Hypothesis.

    Science.gov (United States)

    Noddings, Nel

    1992-01-01

    The hypothesis of greater male variability in test results is discussed in its historical context, and reasons feminists have objected to the hypothesis are considered. The hypothesis acquires political importance if it is considered that variability results from biological, rather than cultural, differences. (SLD)

  8. A new Russell model for selecting suppliers

    NARCIS (Netherlands)

    Azadi, Majid; Shabani, Amir; Farzipoor Saen, Reza

    2014-01-01

    Recently, supply chain management (SCM) has been considered by many researchers. Supplier evaluation and selection plays a significant role in establishing an effective SCM. One of the techniques that can be used for selecting suppliers is data envelopment analysis (DEA). In some situations, to

  9. Selective experimental review of the Standard Model

    International Nuclear Information System (INIS)

    Bloom, E.D.

    1985-02-01

    Before discussing experimental comparisons with the Standard Model (S-M), it is probably wise to define more completely what is commonly meant by this popular term. This model is a gauge theory of SU(3)_f x SU(2)_L x U(1) with 18 parameters. The parameters are α_s, α_QED, θ_W, M_W (M_Z = M_W/cos θ_W, and thus is not an independent parameter), M_Higgs; the lepton masses M_e, M_μ, M_τ; the quark masses M_d, M_s, M_b, and M_u, M_c, M_t; and finally, the quark mixing angles θ_1, θ_2, θ_3, and the CP-violating phase δ. The latter four parameters appear in the quark mixing matrix for the Kobayashi-Maskawa and Maiani forms. Clearly, the present S-M covers an enormous range of physics topics, and the author can only lightly cover a few such topics in this report. The measurement of R_hadron is fundamental as a test of the running coupling constant α_s in QCD. The author will discuss a selection of recent precision measurements of R_hadron, as well as some other techniques for measuring α_s. QCD also requires the self-interaction of gluons. The search for the three-gluon vertex may be practically realized in the clear identification of gluonic mesons. The author will present a limited review of recent progress in the attempt to untangle such mesons from the plethora of q anti-q states of the same quantum numbers which exist in the same mass range. The electroweak interactions provide some of the strongest evidence supporting the S-M that exists. Given the recent progress in this subfield, and particularly with the discovery of the W and Z bosons at CERN, many recent reviews obviate the need for further discussion in this report. In attempting to validate a theory, one frequently searches for new phenomena which would clearly invalidate it. 49 references, 28 figures

  10. On theoretical models of gene expression evolution with random genetic drift and natural selection.

    Science.gov (United States)

    Ogasawara, Osamu; Okubo, Kousaku

    2009-11-20

    The relative contributions of natural selection and random genetic drift are a major source of debate in the study of gene expression evolution, which is hypothesized to serve as a bridge from molecular to phenotypic evolution. It has been suggested that the conflict between views is caused by the lack of a definite model of the neutral hypothesis, which can describe the long-run behavior of evolutionary change in mRNA abundance. Therefore previous studies have used inadequate analogies with the neutral prediction of other phenomena, such as amino acid or nucleotide sequence evolution, as the null hypothesis of their statistical inference. In this study, we introduced two novel theoretical models, one based on neutral drift and the other assuming natural selection, by focusing on a common property of the distribution of mRNA abundance among a variety of eukaryotic cells, which reflects the result of long-term evolution. Our results demonstrated that (1) our models can reproduce two independently found phenomena simultaneously: the time development of gene expression divergence and Zipf's law of the transcriptome; (2) cytological constraints can be explicitly formulated to describe long-term evolution; (3) the model assuming that natural selection optimized relative mRNA abundance was more consistent with previously published observations than the model of optimized absolute mRNA abundances. The models introduced in this study give a formulation of evolutionary change in the mRNA abundance of each gene as a stochastic process, on the basis of previously published observations. This model provides a foundation for interpreting observed data in studies of gene expression evolution, including identifying an adequate time scale for discriminating the effect of natural selection from that of random genetic drift of selectively neutral variations.

  11. Physiopathological Hypothesis of Cellulite

    Science.gov (United States)

    de Godoy, José Maria Pereira; de Godoy, Maria de Fátima Guerreiro

    2009-01-01

    A series of questions are asked concerning this condition, including its name, the consensus about the histopathological findings, the physiological hypothesis and the treatment of the disease. We established a hypothesis for cellulite and confirmed that the clinical response is compatible with this hypothesis. Hence this novel approach brings a modern physiological concept with a physiopathologic basis and clinical proof of the hypothesis. We emphasize that the choice of patient, correct diagnosis of cellulite and the technique employed are fundamental to success. PMID:19756187

  12. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    Science.gov (United States)

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    We present a Bayesian random-effects model to assess resource selection, modeling the probability of use of land units characterized by discrete and continuous measures. This model provides simultaneous estimation of both individual- and population-level selection. The deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate the models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a
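
    The DIC used for model selection here can be computed directly from posterior output. The sketch below uses simulated resource-selection data and stand-in "posterior draws" (not the authors' model) to show the arithmetic: with deviance D = -2 log L, DIC = D(theta_bar) + 2 p_D, where p_D is the mean deviance minus the deviance at the posterior mean.

        import numpy as np
        from scipy.special import expit
        from scipy.stats import bernoulli

        rng = np.random.default_rng(3)
        ndvi = rng.uniform(0, 1, size=200)                 # landscape covariate
        # Quadratic use-probability, echoing the quadratic NDVI relationship above.
        use = bernoulli.rvs(expit(-1 + 4 * ndvi - 2 * ndvi**2), random_state=rng)

        def deviance(a, b, c):
            p = expit(a + b * ndvi + c * ndvi**2)
            return -2 * (use * np.log(p) + (1 - use) * np.log(1 - p)).sum()

        # Pretend these are MCMC posterior draws (crude jitter around the truth,
        # purely to make the arithmetic concrete).
        draws = rng.normal([-1, 4, -2], 0.1, size=(2000, 3))
        D = np.array([deviance(*d) for d in draws])
        D_hat = deviance(*draws.mean(axis=0))
        p_D = D.mean() - D_hat                             # effective parameters
        print("DIC =", D_hat + 2 * p_D)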

  13. Life Origination Hydrate Hypothesis (LOH-Hypothesis).

    Science.gov (United States)

    Ostrovskii, Victor; Kadyshevich, Elena

    2012-01-04

    The paper develops the Life Origination Hydrate Hypothesis (LOH-hypothesis), according to which living-matter simplest elements (LMSEs, which are N-bases, riboses, nucleosides, nucleotides), DNA- and RNA-like molecules, amino-acids, and proto-cells repeatedly originated on the basis of thermodynamically controlled, natural, and inevitable processes governed by universal physical and chemical laws from CH4, niters, and phosphates under the Earth's surface or seabed within the crystal cavities of the honeycomb methane-hydrate structure at low temperatures; the chemical processes passed slowly through all successive chemical steps in the direction that is determined by a gradual decrease in the Gibbs free energy of reacting systems. The hypothesis formulation method is based on the thermodynamic directedness of natural movement and consists of an attempt to mentally backtrack on the progression of nature and thus reveal principal milestones along its route. The changes in Gibbs free energy are estimated for different steps of the living-matter origination process; special attention is paid to the processes of proto-cell formation. Just the occurrence of the gas-hydrate periodic honeycomb matrix filled with LMSEs almost completely in its final state accounts for size limitation in the DNA functional groups and the nonrandom location of N-bases in the DNA chains. The slowness of the low-temperature chemical transformations and their "thermodynamic front" guide the gross process of living matter origination and its successive steps. It is shown that the hypothesis is thermodynamically justified and testable and that many observed natural phenomena count in its favor.

  14. Life Origination Hydrate Hypothesis (LOH-Hypothesis

    Directory of Open Access Journals (Sweden)

    Victor Ostrovskii

    2012-01-01

    The paper develops the Life Origination Hydrate Hypothesis (LOH-hypothesis), according to which living-matter simplest elements (LMSEs, which are N-bases, riboses, nucleosides, nucleotides), DNA- and RNA-like molecules, amino-acids, and proto-cells repeatedly originated on the basis of thermodynamically controlled, natural, and inevitable processes governed by universal physical and chemical laws from CH4, niters, and phosphates under the Earth's surface or seabed within the crystal cavities of the honeycomb methane-hydrate structure at low temperatures; the chemical processes passed slowly through all successive chemical steps in the direction that is determined by a gradual decrease in the Gibbs free energy of reacting systems. The hypothesis formulation method is based on the thermodynamic directedness of natural movement and consists of an attempt to mentally backtrack on the progression of nature and thus reveal principal milestones along its route. The changes in Gibbs free energy are estimated for different steps of the living-matter origination process; special attention is paid to the processes of proto-cell formation. Just the occurrence of the gas-hydrate periodic honeycomb matrix filled with LMSEs almost completely in its final state accounts for size limitation in the DNA functional groups and the nonrandom location of N-bases in the DNA chains. The slowness of the low-temperature chemical transformations and their “thermodynamic front” guide the gross process of living matter origination and its successive steps. It is shown that the hypothesis is thermodynamically justified and testable and that many observed natural phenomena count in its favor.

  15. The selectivity of Vibrio cholerae H-NOX for gaseous ligands follows the "sliding scale rule" hypothesis. Ligand interactions with both ferrous and ferric Vc H-NOX.

    Science.gov (United States)

    Wu, Gang; Liu, Wen; Berka, Vladimir; Tsai, Ah-lim

    2013-12-31

    Vc H-NOX (or VCA0720) is an H-NOX (heme-nitric oxide and oxygen binding) protein from the facultative aerobic bacterium Vibrio cholerae. It shares significant sequence homology with soluble guanylyl cyclase (sGC), a NO sensor protein commonly found in animals. Similar to sGC, Vc H-NOX binds strongly to NO and CO with affinities of 0.27 nM and 0.77 μM, respectively, but weakly to O2. When positioned on a "sliding scale" plot [Tsai, A.-l., et al. (2012) Biochemistry 51, 172-186], the line connecting log K_D(NO) and log K_D(CO) of Vc H-NOX can almost be superimposed with that of Ns H-NOX. Therefore, the measured affinities and kinetic parameters of gaseous ligands to Vc H-NOX provide more evidence to validate the "sliding scale rule" hypothesis. Like sGC, Vc H-NOX binds NO in multiple steps, forming first a six-coordinate heme-NO complex at a rate of 1.1 × 10^9 M^-1 s^-1, and then converts to a five-coordinate heme-NO complex at a rate that is also dependent on NO concentration. Although the formation of oxyferrous Vc H-NOX cannot be detected at a normal atmospheric oxygen level, ferrous Vc H-NOX is oxidized to the ferric form at a rate of 0.06 s^-1 when mixed with O2. Ferric Vc H-NOX exists as a mixture of high- and low-spin states and is influenced by binding to different ligands. Characterization of both ferric and ferrous Vc H-NOX and their complexes with various ligands lays the foundation for understanding the possible dual roles in gas and redox sensing of Vc H-NOX.

  16. Natural and sexual selection giveth and taketh away reproductive barriers: models of population divergence in guppies.

    Science.gov (United States)

    Labonne, Jacques; Hendry, Andrew P

    2010-07-01

    The standard predictions of ecological speciation might be nuanced by the interaction between natural and sexual selection. We investigated this hypothesis with an individual-based model tailored to the biology of guppies (Poecilia reticulata). We specifically modeled the situation where a high-predation population below a waterfall colonizes a low-predation population above a waterfall. Focusing on the evolution of male color, we confirm that divergent selection causes the appreciable evolution of male color within 20 generations. The rate and magnitude of this divergence were reduced when dispersal rates were high and when female choice did not differ between environments. Adaptive divergence was always coupled to the evolution of two reproductive barriers: viability selection against immigrants and hybrids. Different types of sexual selection, however, led to contrasting results for another potential reproductive barrier: mating success of immigrants. In some cases, the effects of natural and sexual selection offset each other, leading to no overall reproductive isolation despite strong adaptive divergence. Sexual selection acting through female choice can thus strongly modify the effects of divergent natural selection and thereby alter the standard predictions of ecological speciation. We also found that under no circumstances did divergent selection cause appreciable divergence in neutral genetic markers.

  17. Uncertainty associated with selected environmental transport models

    International Nuclear Information System (INIS)

    Little, C.A.; Miller, C.W.

    1979-11-01

    A description is given of the capabilities of several models to predict accurately either pollutant concentrations in environmental media or radiological dose to human organs. The models are discussed in three sections: aquatic or surface water transport models, atmospheric transport models, and terrestrial and aquatic food chain models. Using data published primarily by model users, model predictions are compared to observations. This procedure is infeasible for food chain models and, therefore, the uncertainty embodied in the models' input parameters, rather than the model output, is estimated. Aquatic transport models are divided into one-dimensional, longitudinal-vertical, and longitudinal-horizontal models. Several conclusions were made about the ability of the Gaussian plume atmospheric dispersion model to predict accurately downwind air concentrations from releases under several sets of conditions. It is concluded that no validation study has been conducted to test the predictions of either aquatic or terrestrial food chain models. Using the aquatic pathway from water to fish to an adult for 137Cs as an example, a 95% one-tailed confidence limit interval for the predicted exposure is calculated by examining the distributions of the input parameters. Such an interval is found to be 16 times the value of the median exposure. A similar one-tailed limit for the air-grass-cow-milk-thyroid pathway for 131I and infants was 5.6 times the median dose. Of the three model types discussed in this report, the aquatic transport models appear to do the best job of predicting observed concentrations. However, this conclusion is based on many fewer aquatic validation data than were available for atmospheric model validation.

  18. Quality Quandaries: Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy.
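
    Parsimony can be made operational by penalized fit comparison. A hedged sketch, assuming statsmodels and a simulated ARMA(1,1) series rather than the Internet-server data: candidate ARMA(p, q) orders are fit and ranked by AIC, which trades likelihood against parameter count.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from statsmodels.tsa.arima_process import arma_generate_sample

        np.random.seed(0)
        # True process: ARMA(1,1) with AR coefficient 0.7 and MA coefficient 0.4.
        y = arma_generate_sample(ar=[1, -0.7], ma=[1, 0.4], nsample=300)

        for p in range(3):
            for q in range(3):
                fit = ARIMA(y, order=(p, 0, q)).fit()
                print(f"ARMA({p},{q}): AIC = {fit.aic:.1f}")
        # A mixed low-order model should rank near the top, while pure AR fits
        # need more parameters to reach comparable likelihood.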

  19. Accounting for Regressive Eye-Movements in Models of Sentence Processing: A Reappraisal of the Selective Reanalysis Hypothesis

    Science.gov (United States)

    Mitchell, Don C.; Shen, Xingjia; Green, Matthew J.; Hodgson, Timothy L.

    2008-01-01

    When people read temporarily ambiguous sentences, there is often an increased prevalence of regressive eye-movements launched from the word that resolves the ambiguity. Traditionally, such regressions have been interpreted at least in part as reflecting readers' efforts to re-read and reconfigure earlier material, as exemplified by the Selective Reanalysis hypothesis.

  20. Application of Bayesian Model Selection for Metal Yield Models using ALEGRA and Dakota.

    Energy Technology Data Exchange (ETDEWEB)

    Portone, Teresa; Niederhaus, John Henry; Sanchez, Jason James; Swiler, Laura Painton

    2018-02-01

    This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including for comparing constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.

  1. Time and Place of Human Origins, the African Eve Hypothesis Examined through Modelling: Can High Schools Contribute?

    Science.gov (United States)

    Oxnard, Charles

    1994-01-01

    Studies of mitochondrial DNA imply that modern humans arose in Africa 150,000 years ago and spread throughout the world, replacing all prior human groups. But many paleontologists see continuity in human fossils on each continent and over a much longer time. Modeling may help test these alternatives. (Author/MKR)

  2. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    OpenAIRE

    Wu, Chung-Min; Hsieh, Ching-Lin; Chang, Kuei-Lun

    2013-01-01

    The sustainable supplier selection would be the vital part in the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify criteria. Considering the interdependence among the selection criteria, analytic network process (ANP) is then used to obtain their weights. To avoid calculation and additional pairwise compa...

  3. Astrophysical Model Selection in Gravitational Wave Astronomy

    Science.gov (United States)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.

  4. Modeling and Analysis of Supplier Selection Method Using ...

    African Journals Online (AJOL)

    However, in these parts of the world the application of tools and models for supplier selection problem is yet to surface and the banking and finance industry here in Ethiopia is no exception. Thus, the purpose of this research was to address supplier selection problem through modeling and application of analytical hierarchy ...

  5. Dealing with selection bias in educational transition models

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier

    2011-01-01

    This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing for unobserved variables which affect the probability of making educational transitions.

  6. Human Commercial Models' Eye Colour Shows Negative Frequency-Dependent Selection.

    Directory of Open Access Journals (Sweden)

    Isabela Rodrigues Nogueira Forti

    In this study we investigated the eye colour of human commercial models registered in the UK (400 female and 400 male) and Brazil (400 female and 400 male) to test the hypothesis that model eye colour frequency was the result of negative frequency-dependent selection. The eye colours of the models were classified as: blue, brown or intermediate. Chi-square analyses of data for countries separated by sex showed that in the United Kingdom brown eyes and intermediate colours were significantly more frequent than expected in comparison to the general United Kingdom population (P < 0.001). In Brazil, the most frequent eye colour, brown, was significantly less frequent than expected in comparison to the general Brazilian population. These results support the hypothesis that model eye colour is the result of negative frequency-dependent selection. This could be the result of people using eye colour as a marker of genetic diversity and finding rarer eye colours more attractive because of the potential advantage of the more genetically diverse offspring that could result from such a choice. Eye colour may be important because in comparison to many other physical traits (e.g., hair colour) it is hard to modify, hide or disguise, and it is highly polymorphic.

  7. Human Commercial Models' Eye Colour Shows Negative Frequency-Dependent Selection.

    Science.gov (United States)

    Forti, Isabela Rodrigues Nogueira; Young, Robert John

    2016-01-01

    In this study we investigated the eye colour of human commercial models registered in the UK (400 female and 400 male) and Brazil (400 female and 400 male) to test the hypothesis that model eye colour frequency was the result of negative frequency-dependent selection. The eye colours of the models were classified as: blue, brown or intermediate. Chi-square analyses of data for countries separated by sex showed that in the United Kingdom brown eyes and intermediate colours were significantly more frequent than expected in comparison to the general United Kingdom population (P < 0.001). In Brazil, the most frequent eye colour, brown, was significantly less frequent than expected in comparison to the general Brazilian population. These results support the hypothesis that model eye colour is the result of negative frequency-dependent selection. This could be the result of people using eye colour as a marker of genetic diversity and finding rarer eye colours more attractive because of the potential advantage of the more genetically diverse offspring that could result from such a choice. Eye colour may be important because in comparison to many other physical traits (e.g., hair colour) it is hard to modify, hide or disguise, and it is highly polymorphic.
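
    The chi-square comparison in this design reduces to testing observed colour counts against expectations derived from population frequencies. A sketch with made-up counts and assumed population proportions (the paper's exact frequencies are not reproduced here):

        from scipy.stats import chisquare

        observed = [170, 150, 80]        # blue, brown, intermediate (hypothetical)
        pop_freq = [0.48, 0.30, 0.22]    # assumed general-population proportions
        expected = [400 * f for f in pop_freq]
        # A small p-value indicates the models' colours depart from expectation.
        print(chisquare(observed, f_exp=expected))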

  8. Neuromusculoskeletal models based on the muscle synergy hypothesis for the investigation of adaptive motor control in locomotion via sensory-motor coordination.

    Science.gov (United States)

    Aoi, Shinya; Funato, Tetsuro

    2016-03-01

    Humans and animals walk adaptively in diverse situations by skillfully manipulating their complicated and redundant musculoskeletal systems. From an analysis of measured electromyographic (EMG) data, it appears that despite complicated spatiotemporal properties, muscle activation patterns can be explained by a low dimensional spatiotemporal structure. More specifically, they can be accounted for by the combination of a small number of basic activation patterns. The basic patterns and distribution weights indicate temporal and spatial structures, respectively, and the weights show the muscle sets that are activated synchronously. In addition, various locomotor behaviors have similar low dimensional structures and major differences appear in the basic patterns. These analysis results suggest that neural systems use muscle group combinations to solve motor control redundancy problems (muscle synergy hypothesis) and manipulate those basic patterns to create various locomotor functions. However, it remains unclear how the neural system controls such muscle groups and basic patterns through neuromechanical interactions in order to achieve adaptive locomotor behavior. This paper reviews simulation studies that explored adaptive motor control in locomotion via sensory-motor coordination using neuromusculoskeletal models based on the muscle synergy hypothesis. Herein, we discuss the neural mechanisms of motor control related to muscle synergies for adaptive locomotion, as well as a potential muscle synergy analysis method, including neuromusculoskeletal modeling, for motor impairments and rehabilitation.

  9. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, for example in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  10. Python Program to Select HII Region Models

    Science.gov (United States)

    Miller, Clare; Lamarche, Cody; Vishwas, Amit; Stacey, Gordon J.

    2016-01-01

    HII regions are areas of singly ionized hydrogen formed by the ionizing radiation of upper main sequence stars. The infrared fine-structure line emissions, particularly of oxygen, nitrogen, and neon, can give important information about HII regions, including gas temperature and density, elemental abundances, and the effective temperature of the stars that form them. The processes involved in calculating this information from observational data are complex. Models, such as those provided in Rubin 1984 and those produced by Cloudy (Ferland et al., 2013), enable one to extract physical parameters from observational data. However, the multitude of search parameters can make sifting through models tedious. I digitized Rubin's models and wrote a Python program that is able to take observed line ratios and their uncertainties and find the Rubin or Cloudy model that best matches the observational data. By creating a Python script that is user friendly and able to quickly sort through models with a high level of accuracy, this work increases efficiency and reduces human error in matching HII region models to observational data.
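
    Under our own assumptions about the program's internals, the core matching step is a chi-squared search over a grid of model-predicted line ratios; the tiny grid and observed values below are hypothetical placeholders for the digitized Rubin or Cloudy outputs.

        import numpy as np

        # Hypothetical grid: rows are models, columns are predicted line ratios.
        model_grid = np.array([
            [1.2, 0.30, 0.9],
            [0.8, 0.45, 1.1],
            [1.5, 0.25, 0.7],
        ])
        observed = np.array([0.9, 0.40, 1.0])   # measured line ratios
        sigma = np.array([0.1, 0.05, 0.2])      # measurement uncertainties

        # Chi-squared of every model against the observations, in one pass.
        chi2 = (((model_grid - observed) / sigma) ** 2).sum(axis=1)
        best = int(np.argmin(chi2))
        print(f"best-fitting model: {best}, chi2 = {chi2[best]:.2f}")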

  11. Ground-water transport model selection and evaluation guidelines

    International Nuclear Information System (INIS)

    Simmons, C.S.; Cole, C.R.

    1983-01-01

    Guidelines are being developed to assist potential users with selecting appropriate computer codes for ground-water contaminant transport modeling. The guidelines are meant to assist managers with selecting appropriate predictive models for evaluating either arid or humid low-level radioactive waste burial sites. Evaluation test cases in the form of analytical solutions to fundamental equations and experimental data sets have been identified and recommended to ensure adequate code selection, based on accurate simulation of relevant physical processes. The recommended evaluation procedures will consider certain technical issues related to the present limitations in transport modeling capabilities. A code-selection plan will depend on identifying problem objectives, determining the extent of collectible site-specific data, and developing a site-specific conceptual model for the involved hydrology. Code selection will be predicated on steps for developing an appropriate systems model. This paper will review the progress in developing those guidelines. 12 references

  12. Model and Variable Selection Procedures for Semiparametric Time Series Regression

    Directory of Open Access Journals (Sweden)

    Risa Kato

    2009-01-01

    Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise from time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.

  13. Constraint-based model of Shewanella oneidensis MR-1 metabolism: a tool for data analysis and hypothesis generation.

    Directory of Open Access Journals (Sweden)

    Grigoriy E Pinchuk

    2010-06-01

    Full Text Available Shewanellae are gram-negative facultatively anaerobic metal-reducing bacteria commonly found in chemically (i.e., redox stratified environments. Occupying such niches requires the ability to rapidly acclimate to changes in electron donor/acceptor type and availability; hence, the ability to compete and thrive in such environments must ultimately be reflected in the organization and utilization of electron transfer networks, as well as central and peripheral carbon metabolism. To understand how Shewanella oneidensis MR-1 utilizes its resources, the metabolic network was reconstructed. The resulting network consists of 774 reactions, 783 genes, and 634 unique metabolites and contains biosynthesis pathways for all cell constituents. Using constraint-based modeling, we investigated aerobic growth of S. oneidensis MR-1 on numerous carbon sources. To achieve this, we (i used experimental data to formulate a biomass equation and estimate cellular ATP requirements, (ii developed an approach to identify cycles (such as futile cycles and circulations, (iii classified how reaction usage affects cellular growth, (iv predicted cellular biomass yields on different carbon sources and compared model predictions to experimental measurements, and (v used experimental results to refine metabolic fluxes for growth on lactate. The results revealed that aerobic lactate-grown cells of S. oneidensis MR-1 used less efficient enzymes to couple electron transport to proton motive force generation, and possibly operated at least one futile cycle involving malic enzymes. Several examples are provided whereby model predictions were validated by experimental data, in particular the role of serine hydroxymethyltransferase and glycine cleavage system in the metabolism of one-carbon units, and growth on different sources of carbon and energy. This work illustrates how integration of computational and experimental efforts facilitates the understanding of microbial metabolism at a
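
    Constraint-based predictions of biomass yield of the kind described above reduce to a linear program: maximize the biomass flux subject to steady-state mass balance S v = 0 and flux bounds. A toy three-reaction sketch (not the 774-reaction S. oneidensis reconstruction):

        import numpy as np
        from scipy.optimize import linprog

        # Metabolites x reactions stoichiometry (toy network):
        # uptake -> A, A -> B, B -> biomass.
        S = np.array([
            [1, -1,  0],   # A: produced by uptake, consumed by conversion
            [0,  1, -1],   # B: produced by conversion, consumed by biomass
        ])
        bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 units

        # linprog minimizes, so negate the biomass objective (flux v3).
        res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        print("optimal biomass flux:", res.x[2])   # limited by the uptake bound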

  14. Nonlinear Effects in Piezoelectric Transformers Explained by Thermal-Electric Model Based on a Hypothesis of Self-Heating

    DEFF Research Database (Denmark)

    Andersen, Thomas; Andersen, Michael A. E.; Thomsen, Ole Cornelius

    2012-01-01

    As the trend within power electronics still goes in the direction of higher power density and higher efficiency, it is necessary to develop new topologies and push the limit of the existing technology. Piezoelectric transformers are a fast-developing technology to improve efficiency and increase the power density of power converters. Nonlinearities in piezoelectric transformers occur when the power density is increased enough. The simple linear equations are not valid at this point and the more complex theory of electroelasticity must be applied. In this work a simplified thermo-electric model

  15. Alzheimer’s disease: the Amyloid hypothesis and the Inverse Warburg effect

    Directory of Open Access Journals (Sweden)

    Lloyd Demetrius

    2015-01-01

    Epidemiological and biochemical studies show that the sporadic forms of Alzheimer's disease (AD) are characterized by the following hallmarks: (a) an exponential increase with age; (b) selective neuronal vulnerability; (c) inverse cancer comorbidity. The present article appeals to these hallmarks to evaluate and contrast two competing models of AD: the amyloid hypothesis (a neuron-centric mechanism) and the Inverse Warburg hypothesis (a neuron-astrocytic mechanism). We show that these three hallmarks of AD conflict with the amyloid hypothesis, but are consistent with the Inverse Warburg hypothesis, a bioenergetic model which postulates that AD is the result of a cascade of three events: mitochondrial dysregulation, metabolic reprogramming (the Inverse Warburg effect), and natural selection. We also provide an explanation for the failures of the clinical trials based on amyloid immunization, and we propose a new class of therapeutic strategies consistent with the neuroenergetic selection model.

  16. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  17. Memory in astrocytes: a hypothesis

    Directory of Open Access Journals (Sweden)

    Caudle Robert M

    2006-01-01

    Background: Recent work has indicated an increasingly complex role for astrocytes in the central nervous system. Astrocytes are now known to exchange information with neurons at synaptic junctions and to alter the information processing capabilities of the neurons. As an extension of this trend, a hypothesis was proposed that astrocytes function to store information. To explore this idea, the ion channels in biological membranes were compared to models known as cellular automata. These comparisons were made to test the hypothesis that ion channels in the membranes of astrocytes form a dynamic information storage device. Results: Two-dimensional cellular automata were found to behave similarly to ion channels in a membrane when they function at the boundary between order and chaos. The length of time information is stored in this class of cellular automata is exponentially related to the number of units. Therefore, the length of time biological ion channels store information was plotted versus the estimated number of ion channels in the tissue. This analysis indicates that there is an exponential relationship between memory and the number of ion channels. Extrapolation of this relationship to the estimated number of ion channels in the astrocytes of a human brain indicates that memory can be stored in this system for an entire life span. Interestingly, this information is not affixed to any physical structure, but is stored as an organization of the activity of the ion channels. Further analysis of two-dimensional cellular automata also demonstrates that these systems have both associative and temporal memory capabilities. Conclusion: It is concluded that astrocytes may serve as a dynamic information sink for neurons. The memory in the astrocytes is stored by organizing the activity of ion channels and is not associated with a physical location such as a synapse. In order for this form of memory to be of significant duration it is necessary
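
    To make the cellular-automaton analogy concrete, the sketch below runs a classic two-dimensional automaton (Conway's Life rule, our own choice of example) in which a propagating pattern persists as an organization of activity rather than a fixed structure, which is the flavor of storage the hypothesis attributes to ion-channel ensembles.

        import numpy as np

        def step(grid):
            # Count the eight neighbors with periodic (toroidal) boundaries.
            n = sum(np.roll(np.roll(grid, i, 0), j, 1)
                    for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
            # Life rule: birth on 3 neighbors, survival on 2 or 3.
            return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

        grid = np.zeros((20, 20), dtype=int)
        grid[1:4, 1:4] = [[0, 1, 0], [0, 0, 1], [1, 1, 1]]   # a glider
        for _ in range(40):
            grid = step(grid)
        print("live cells after 40 steps:", grid.sum())      # the pattern persists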

  18. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to onfarm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...

  19. Testing the additive versus the compensatory hypothesis of mortality from ring recovery data using a random effects model

    Directory of Open Access Journals (Sweden)

    Schaub, M.

    2004-06-01

    Full Text Available The interaction of an additional source of mortality with the underlying “natural” one strongly affects population dynamics. We propose an alternative way to test between two forms of interaction, total additivity and compensation. In contrast to existing approaches, only ring-recovery data where the cause of death of each recovered individual is known are needed. Cause-specific mortality proportions are estimated based on a multistate capture-recapture model. The hypotheses are tested by inspecting the correlation between the cause-specific mortality proportions. A variance decomposition is performed to obtain a proper estimate of the true process correlation. The estimation of the cause-specific mortality proportions is the most critical part of the approach. It works well if at least one of the two mortality rates varies across time and the two recovery rates are constant across time. We illustrate this methodology by a case study of White Storks Ciconia ciconia where we tested whether mortality induced by power line collision is additive to other forms of mortality.

  20. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.
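
    For readers unfamiliar with the neutral baseline referred to above, Kingman's coalescent is straightforward to simulate: while k ancestral lineages remain, the waiting time to the next pairwise coalescence is exponential with rate k(k-1)/2, after which k decreases by one. A hedged Python sketch follows; the sample size and replicate count are arbitrary choices, not values from the paper.

```python
import random

def kingman_tmrca(n: int, rng: random.Random) -> float:
    """Simulate the time to the most recent common ancestor of n lineages
    under Kingman's coalescent (time in units of N generations).

    While k lineages remain, the waiting time to the next coalescence is
    exponential with rate k*(k-1)/2, after which k decreases by one.
    """
    t = 0.0
    k = n
    while k > 1:
        rate = k * (k - 1) / 2.0
        t += rng.expovariate(rate)
        k -= 1
    return t

rng = random.Random(42)
samples = [kingman_tmrca(10, rng) for _ in range(10_000)]
# Theory: E[T_MRCA] = 2 * (1 - 1/n); for n = 10 this is 1.8.
print("mean TMRCA:", sum(samples) / len(samples))
```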

  1. Modeling shape selection of buckled dielectric elastomers

    Science.gov (United States)

    Langham, Jacob; Bense, Hadrien; Barkley, Dwight

    2018-02-01

    A dielectric elastomer whose edges are held fixed will buckle, given a sufficiently large applied voltage, resulting in a nontrivial out-of-plane deformation. We study this situation numerically using a nonlinear elastic model which decouples two of the principal electrostatic stresses acting on an elastomer: normal pressure due to the mutual attraction of oppositely charged electrodes and tangential shear ("fringing") due to repulsion of like charges at the electrode edges. These enter via physically simplified boundary conditions that are applied in a fixed reference domain using a nondimensional approach. The method is valid for small to moderate strains and is straightforward to implement in a generic nonlinear elasticity code. We validate the model by directly comparing the simulated equilibrium shapes with the experiment. For circular electrodes which buckle axisymmetrically, the shape of the deflection profile is captured. Annular electrodes of different widths produce azimuthal ripples with wavelengths that match our simulations. In this case, it is essential to compute multiple equilibria because the first model solution obtained by the nonlinear solver (Newton's method) is often not the energetically favored state. We address this using a numerical technique known as "deflation." Finally, we observe the large number of different solutions that may be obtained for the case of a long rectangular strip.
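
    The deflation technique mentioned above can be illustrated on a scalar toy problem: once a solution r has been found, the residual is multiplied by a factor such as 1/|x - r|^p + shift, which blows up near r and so steers Newton's method toward solutions not yet found. The sketch below is a one-dimensional illustration under that assumption only; the cubic test function, exponent, and shift are invented for illustration and are not the elastomer model itself.

```python
def deflated_newton(f, roots, x0, p=2.0, shift=1.0, tol=1e-10, max_iter=100):
    """Newton iteration on a deflated residual g(x) = M(x) * f(x), where
    M(x) = prod over found roots r of (1/|x - r|**p + shift). The deflation
    factor blows up near known roots, steering Newton toward new solutions."""
    def g(x):
        m = 1.0
        for r in roots:
            m *= 1.0 / abs(x - r) ** p + shift
        return m * f(x)

    x = x0
    for _ in range(max_iter):
        h = 1e-7  # central-difference derivative of the deflated residual
        dg = (g(x + h) - g(x - h)) / (2 * h)
        step = g(x) / dg
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton did not converge")

# Toy problem with several 'equilibria': f(x) = x**3 - x has roots -1, 0, 1.
f = lambda x: x**3 - x
found = []
for _ in range(3):
    found.append(deflated_newton(f, found, x0=0.6))
print("roots found:", [round(r, 6) for r in found])
```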

  2. Modeling HIV-1 drug resistance as episodic directional selection.

    Directory of Open Access Journals (Sweden)

    Ben Murrell

    Full Text Available The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither a test for episodic diversifying selection nor one for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  3. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (lasso)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  4. Partner Selection Optimization Model of Agricultural Enterprises in Supply Chain

    OpenAIRE

    Feipeng Guo; Qibei Lu

    2013-01-01

    As correctly selecting partners in the supply chain of agricultural enterprises becomes more and more important, a large number of partner evaluation techniques are widely used in the field of agricultural science research. This study established a partner selection model to optimize the issue of agricultural supply chain partner selection. Firstly, it constructed a comprehensive evaluation index system after analyzing the real characteristics of agricultural supply chain. Secondly, a heuristic met...

  5. Effect of Model Selection on Computed Water Balance Components

    NARCIS (Netherlands)

    Jhorar, R.K.; Smit, A.A.M.F.R.; Roest, C.W.J.

    2009-01-01

    Soil water flow modelling approaches as used in four selected on-farm water management models, namely CROPWAT, FAIDS, CERES and SWAP, are compared through numerical experiments. The soil water simulation approaches used in the first three models are reformulated to incorporate all evapotranspiration

  6. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

    Full Text Available As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
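
    The mechanics of stability selection are simple to sketch: refit a sparse learner on many random half-subsamples and retain the variables whose selection frequency crosses a threshold. The Python sketch below substitutes an ordinary lasso on a linear model for the paper's Cox/lasso setting, so the regularization value, subsample count, and threshold are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, lam, n_subsamples=100, threshold=0.6, seed=0):
    """Crude stability selection: refit a lasso on random half-subsamples
    and keep variables whose selection frequency exceeds a threshold."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        fit = Lasso(alpha=lam, max_iter=10_000).fit(X[idx], y[idx])
        counts += fit.coef_ != 0
    freq = counts / n_subsamples
    return np.flatnonzero(freq >= threshold), freq

# Toy data: only the first 3 of 50 covariates matter.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
y = X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2] + rng.standard_normal(200)
selected, freq = stability_selection(X, y, lam=0.1)
print("stable variables:", selected)
```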

  7. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Gail Jones, M.

    2018-02-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  8. Validation of elk resource selection models with spatially independent data

    Science.gov (United States)

    Priscilla K. Coe; Bruce K. Johnson; Michael J. Wisdom; John G. Cook; Marty Vavra; Ryan M. Nielson

    2011-01-01

    Knowledge of how landscape features affect wildlife resource use is essential for informed management. Resource selection functions often are used to make and validate predictions about landscape use; however, resource selection functions are rarely validated with data from landscapes independent of those from which the models were built. This problem has severely...

  9. A Working Model of Natural Selection Illustrated by Table Tennis

    Science.gov (United States)

    Dinc, Muhittin; Kilic, Selda; Aladag, Caner

    2013-01-01

    Natural selection is one of the most important topics in biology and it helps to clarify the variety and complexity of organisms. However, students in almost every stage of education find it difficult to understand the mechanism of natural selection and they can develop misconceptions about it. This article provides an active model of natural…

  10. Augmented Self-Modeling as an Intervention for Selective Mutism

    Science.gov (United States)

    Kehle, Thomas J.; Bray, Melissa A.; Byer-Alcorace, Gabriel F.; Theodore, Lea A.; Kovac, Lisa M.

    2012-01-01

    Selective mutism is a rare disorder that is difficult to treat. It is often associated with oppositional defiant behavior, particularly in the home setting, social phobia, and, at times, autism spectrum disorder characteristics. The augmented self-modeling treatment has been relatively successful in promoting rapid diminishment of selective mutism…

  11. Too much food may cause reduced growth of blue mussels (Mytilus edulis) - Test of hypothesis and new 'high Chl a BEG-model'

    Science.gov (United States)

    Larsen, Poul S.; Lüskow, Florian; Riisgård, Hans Ulrik

    2018-04-01

    Growth of the blue mussel (Mytilus edulis) is closely related to the biomass of phytoplankton (expressed as concentration of chlorophyll a, Chl a), but the effect of too much food in eutrophicated areas has so far been overlooked. The hypothesis addressed in the present study suggests that high Chl a concentrations (> about 8 μg Chl a l⁻¹) result in reduced growth because mussels are not evolutionarily adapted to utilize such high phytoplankton concentrations and to physiologically regulate the amount of ingested food in such a way that the growth rate remains high and constant. We first make a comparison of literature values for actually measured weight-specific growth rates (μ, % d⁻¹) of small (20 to 25 mm) M. edulis, either grown in controlled laboratory experiments or in net bags in Danish waters, as a function of Chl a. A linear increase up to about μ = 8.3% d⁻¹ at 8.1 μg Chl a l⁻¹ fits the "standard BEG-model" after which a marked decrease takes place, and this supports the hypothesis. A "high Chl a BEG-model", applicable to newly settled post-metamorphic and small juvenile (non-spawning) mussels in eutrophicated Danish and other temperate waters, is developed and tested, and new data from a case study in which the growth of mussels in net bags was measured along a Chl a gradient are presented. Finally, we discuss the phenomenon of reduced growth of mussels in eutrophicated areas versus a possible impact of low salinity. It is concluded that it is difficult to separate the effect of salinity from the effect of Chl a, but the present study shows that too much food may cause reduced growth of mussels in eutrophicated marine areas regardless of high or moderate salinity above about 10 psu.

  12. Robust Decision-making Applied to Model Selection

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Laboratory

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate ever-increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework is adopted anchored in info-gap decision theory. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  13. Target Selection Models with Preference Variation Between Offenders

    NARCIS (Netherlands)

    Townsley, Michael; Birks, Daniel; Ruiter, Stijn; Bernasco, Wim; White, Gentry

    2016-01-01

    Objectives: This study explores preference variation in location choice strategies of residential burglars. Applying a model of offender target selection that is grounded in assertions of the routine activity approach, rational choice perspective, crime pattern and social disorganization theories,

  14. Akaike information criterion to select well-fit resist models

    Science.gov (United States)

    Burbine, Andrew; Fryer, David; Sturtevant, John

    2015-03-01

    In the field of model design and selection, there is always a risk that a model is over-fit to the data used to train the model. A model is well suited when it describes the physical system and not the stochastic behavior of the particular data collected. K-fold cross validation is a method to check this potential over-fitting to the data by calibrating with k-number of folds in the data, typically between 4 and 10. Model training is a computationally expensive operation, however, and given a wide choice of candidate models, calibrating each one repeatedly becomes prohibitively time consuming. Akaike information criterion (AIC) is an information-theoretic approach to model selection based on the maximized log-likelihood for a given model that only needs a single calibration per model. It is used in this study to demonstrate model ranking and selection among compact resist modelforms that have various numbers and types of terms to describe photoresist behavior. It is shown that there is a good correspondence of AIC to K-fold cross validation in selecting the best modelform, and it is further shown that over-fitting is, in most cases, not indicated. In modelforms with more than 40 fitting parameters, the size of the calibration data set benefits from additional parameters, statistically validating the model complexity.
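
    The criterion itself is inexpensive once a model is calibrated: AIC = 2k − 2 ln L̂, and for a least-squares fit with Gaussian errors this reduces, up to an additive constant, to n ln(RSS/n) + 2k. The following sketch ranks polynomial fits of increasing order as stand-ins for resist modelforms of increasing complexity; the data and the candidate family are hypothetical choices, not the paper's resist models.

```python
import numpy as np

def aic_least_squares(y, y_hat, k):
    """AIC for a least-squares fit with Gaussian errors, up to an additive
    constant: AIC = n * ln(RSS / n) + 2k, where k counts fitted parameters."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 80)
y = 1.0 + 2.0 * x - 1.5 * x**2 + 0.05 * rng.standard_normal(80)  # true model: quadratic

# Candidate 'modelforms' of increasing complexity: polynomial degree 1..6.
for degree in range(1, 7):
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    print(degree, round(aic_least_squares(y, y_hat, k=degree + 1), 1))
# The lowest AIC should typically land at degree 2, penalising the over-fit fits.
```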

  15. A risk assessment model for selecting cloud service providers

    OpenAIRE

    Cayirci, Erdal; Garaga, Alexandr; Santana de Oliveira, Anderson; Roudier, Yves

    2016-01-01

    The Cloud Adoption Risk Assessment Model is designed to help cloud customers in assessing the risks that they face by selecting a specific cloud service provider. It evaluates background information obtained from cloud customers and cloud service providers to analyze various risk scenarios. This facilitates decision making in selecting the cloud service provider with the most preferable risk profile based on aggregated risks to security, privacy, and service delivery. Based on this model we ...

  16. SELECTION MOMENTS AND GENERALIZED METHOD OF MOMENTS FOR HETEROSKEDASTIC MODELS

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2016-06-01

    Full Text Available In this paper, the authors describe methods for selecting moments and the application of the generalized method of moments (GMM) to heteroskedastic models. The utility of GMM estimators is found in the study of financial market models. The selection criteria for moments are applied for the efficient estimation of GMM for univariate time series with martingale difference errors, similar to those studied so far by Kuersteiner.

  17. The Lehman Sisters Hypothesis

    NARCIS (Netherlands)

    I.P. van Staveren (Irene)

    2014-01-01

    This article explores the Lehman Sisters Hypothesis. It reviews empirical literature about gender differences in behavioral, experimental, and neuro-economics as well as in other fields of behavioral research. It discusses gender differences along three dimensions of

  18. Revisiting the Dutch hypothesis

    NARCIS (Netherlands)

    Postma, Dirkje S.; Weiss, Scott T.; van den Berge, Maarten; Kerstjens, Huib A. M.; Koppelman, Gerard H.

    The Dutch hypothesis was first articulated in 1961, when many novel and advanced scientific techniques were not available, such as genomics techniques for pinpointing genes, gene expression, lipid and protein profiles, and the microbiome. In addition, computed tomographic scans and advanced analysis

  19. Model Selection in Continuous Test Norming With GAMLSS.

    Science.gov (United States)

    Voncken, Lieke; Albers, Casper J; Timmerman, Marieke E

    2017-06-01

    To compute norms from reference group test scores, continuous norming is preferred over traditional norming. A suitable continuous norming approach for continuous data is the use of the Box-Cox Power Exponential model, which is found in the generalized additive models for location, scale, and shape. Applying the Box-Cox Power Exponential model for test norming requires model selection, but it is unknown how well this can be done with an automatic selection procedure. In a simulation study, we compared the performance of two stepwise model selection procedures combined with four model-fit criteria (Akaike information criterion, Bayesian information criterion, generalized Akaike information criterion (3), and cross-validation), varying data complexity, sampling design, and sample size in a fully crossed design. The new procedure combined with the generalized Akaike information criterion was the most efficient model selection procedure (i.e., it required the smallest sample size). The advocated model selection procedure is illustrated with norming data of an intelligence test.

  20. Selection Criteria in Regime Switching Conditional Volatility Models

    Directory of Open Access Journals (Sweden)

    Thomas Chuffart

    2015-05-01

    Full Text Available A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to the choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the Data Generating Process used in the experiments, great care is needed when choosing a criterion.

  1. A guide to Bayesian model selection for ecologists

    Science.gov (United States)

    Hooten, Mevin B.; Hobbs, N.T.

    2015-01-01

    The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.

  2. A durkheimian hypothesis on stress.

    Science.gov (United States)

    Mestrovic, S; Glassner, B

    1983-01-01

    Commonalities among the events that appear on life events lists and among the types of social supports which have been found to reduce the likelihood of illness are reviewed in the life events literature in an attempt to find a context within sociological theory. Social integration seems to underlie the stress-illness process. In seeking a tradition from which to understand these facts, we selected Durkheim's works in the context of the homo duplex concept wherein social integration involves the interplay of individualism and social forces. After presenting a specific hypothesis for the stress literature, the paper concludes with implications and suggestions for empirical research.

  3. The Use of Evolution in a Central Action Selection Model

    Directory of Open Access Journals (Sweden)

    F. Montes-Gonzalez

    2007-01-01

    Full Text Available The use of effective central selection provides flexibility in design by offering modularity and extensibility. In earlier papers we have focused on the development of a simple centralized selection mechanism. Our current goal is to integrate evolutionary methods in the design of non-sequential behaviours and the tuning of specific parameters of the selection model. The foraging behaviour of an animal robot (animat) has been modelled in order to integrate the sensory information from the robot to perform selection that is nearly optimized by the use of genetic algorithms. In this paper we present how selection through optimization finally arranges the pattern of presented behaviours for the foraging task. Hence, the execution of specific parts in a behavioural pattern may be ruled out by the tuning of these parameters. Furthermore, the intensive use of colour segmentation from a colour camera for locating a cylinder sets a burden on the calculations carried out by the genetic algorithm.

  4. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2013-01-01

    Full Text Available The sustainable supplier selection would be the vital part in the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify criteria. Considering the interdependence among the selection criteria, analytic network process (ANP) is then used to obtain their weights. To avoid calculation and additional pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The use of a combination of the fuzzy Delphi method, ANP, and TOPSIS, proposing an MCDM model for supplier selection, and applying these to a real case are the unique features of this study.
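
    Of the three ingredients, the TOPSIS ranking step is the most mechanical and is sketched below: normalize the decision matrix, weight it (here the weights are assumed to come from the ANP stage), locate the ideal and anti-ideal points, and rank alternatives by relative closeness. The supplier scores and weights are invented for illustration and are not data from the study.

```python
import numpy as np

def topsis(scores: np.ndarray, weights: np.ndarray, benefit: np.ndarray):
    """Rank alternatives by relative closeness to the ideal solution.

    scores:  (alternatives x criteria) decision matrix
    weights: criterion weights (e.g., obtained from ANP), summing to 1
    benefit: True where larger is better, False for cost criteria
    """
    # Vector-normalise each criterion column, then apply the weights.
    v = weights * scores / np.linalg.norm(scores, axis=0)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)  # closeness coefficient in [0, 1]

# Three hypothetical suppliers scored on quality, delivery time (cost), price (cost).
scores = np.array([[0.8, 5.0, 120.0],
                   [0.6, 3.0, 100.0],
                   [0.9, 7.0, 150.0]])
weights = np.array([0.5, 0.2, 0.3])
closeness = topsis(scores, weights, benefit=np.array([True, False, False]))
print("ranking (best first):", np.argsort(closeness)[::-1])
```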

  5. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision-making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is highly used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
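
    As a complement to the tutorial's R code, here is a compact Python illustration of the same idea: chromosomes are bitmasks over the candidate variables, fitness is the AIC of a logistic regression restricted to the masked columns, and selection, one-point crossover, and bit-flip mutation evolve the population. The population size, generation count, and mutation rate below are hypothetical choices, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def aic(X, y, mask):
    """Fitness: AIC of a logistic regression restricted to the masked columns."""
    if not mask.any():
        return np.inf
    model = LogisticRegression(max_iter=1000).fit(X[:, mask], y)
    p = np.clip(model.predict_proba(X[:, mask])[:, 1], 1e-12, 1 - 1e-12)
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return 2 * (mask.sum() + 1) - 2 * loglik

def ga_select(X, y, pop=30, gens=40, mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n_var = X.shape[1]
    population = rng.random((pop, n_var)) < 0.5
    for _ in range(gens):
        fitness = np.array([aic(X, y, ind) for ind in population])
        order = np.argsort(fitness)            # lower AIC is better
        parents = population[order[: pop // 2]]
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_var)       # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_var) < mut   # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    best = population[np.argmin([aic(X, y, ind) for ind in population])]
    return np.flatnonzero(best)

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 12))
logit = 1.5 * X[:, 0] - 2.0 * X[:, 3]          # only columns 0 and 3 matter
y = (rng.random(300) < 1 / (1 + np.exp(-logit))).astype(int)
print("selected columns:", ga_select(X, y))
```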

  6. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem), using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem) while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.

  7. Multicriteria framework for selecting a process modelling language

    Science.gov (United States)

    Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel

    2016-01-01

    The choice of process modelling language can affect business process management (BPM) since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and also due to the lack of guidelines on evaluating and comparing languages so as to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach in order to select the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but does attempt to demonstrate how two existing approaches can be combined so as to solve the problem of selection of modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.

  8. Optimal experiment design for model selection in biochemical networks.

    Science.gov (United States)

    Vanlier, Joep; Tiemann, Christian A; Hilbers, Peter A J; van Riel, Natal A W

    2014-02-20

    Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses. We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen Shannon divergence between the multivariate predictive densities of competing models. We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors.
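
    The design criterion is easy to approximate in simple settings: score each candidate experiment by the divergence between the competing models' predictive samples, and choose the experiment where the models disagree most. The sketch below replaces the paper's k-nearest-neighbor estimator for multivariate predictive densities with a binned, one-dimensional Jensen-Shannon estimate, so it is a simplified stand-in, not the published method; all data are simulated.

```python
import numpy as np

def jsd_hist(samples_a, samples_b, bins=50):
    """Histogram estimate of the Jensen-Shannon divergence (in nats)
    between two sets of predictive samples."""
    lo = min(samples_a.min(), samples_b.min())
    hi = max(samples_a.max(), samples_b.max())
    p, _ = np.histogram(samples_a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(samples_b, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0          # wherever a > 0, the mixture m is also > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two competing models' predictive samples at two candidate experiments:
rng = np.random.default_rng(0)
for name, shift in [("experiment 1", 0.2), ("experiment 2", 2.0)]:
    pred_model_1 = rng.normal(0.0, 1.0, 5000)
    pred_model_2 = rng.normal(shift, 1.0, 5000)
    print(name, "JSD =", round(jsd_hist(pred_model_1, pred_model_2), 3))
# The experiment with the larger divergence discriminates the models best.
```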

  9. Quantile hydrologic model selection and model structure deficiency assessment : 1. Theory

    NARCIS (Netherlands)

    Pande, S.

    2013-01-01

    A theory for quantile based hydrologic model selection and model structure deficiency assessment is presented. The paper demonstrates that the degree to which a model selection problem is constrained by the model structure (measured by the Lagrange multipliers of the constraints) quantifies

  10. Fuzzy Investment Portfolio Selection Models Based on Interval Analysis Approach

    Directory of Open Access Journals (Sweden)

    Haifeng Guo

    2012-01-01

    Full Text Available This paper employs fuzzy set theory to solve the unintuitive problem of the Markowitz mean-variance (MV) portfolio model and extend it to a fuzzy investment portfolio selection model. Our model establishes intervals for expected returns and risk preference, which can take into account investors' different investment appetites and thus can find the optimal resolution for each interval. In the empirical part, we test this model in Chinese stocks investment and find that this model can fulfill different kinds of investors’ objectives. Finally, investment risk can be decreased when we add an investment limit to each stock in the portfolio, which indicates our model is useful in practice.

  11. Development of an Environment for Software Reliability Model Selection

    Science.gov (United States)

    1992-09-01

    now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling. Hardware can be repaired by spare modules, which is not the case for software. Preventive maintenance is very important

  12. Testing exclusion restrictions and additive separability in sample selection models

    DEFF Research Database (Denmark)

    Huber, Martin; Mellace, Giovanni

    2014-01-01

    Standard sample selection models with non-randomly censored outcomes assume (i) an exclusion restriction (i.e., a variable affecting selection, but not the outcome) and (ii) additive separability of the errors in the selection process. This paper proposes tests for the joint satisfaction of these assumptions by applying the approach of Huber and Mellace (Testing instrument validity for LATE identification based on inequality moment constraints, 2011) (for testing instrument validity under treatment endogeneity) to the sample selection framework. We show that the exclusion restriction and additive separability imply two testable inequality constraints that come from both point identifying and bounding the outcome distribution of the subpopulation that is always selected/observed. We apply the tests to two variables for which the exclusion restriction is frequently invoked in female wage regressions: non

  13. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    Most studies using Mare’s (1980, 1981) seminal model of educational transitions find that the effect of family background decreases across transitions. Recently, Cameron and Heckman (1998, 2001) have argued that the “waning coefficients” in the Mare model are driven by selection on unobserved variables. Using data from the United States, United Kingdom, Denmark, and the Netherlands, we show that when we take selection into account the effect of family background variables on educational transitions is largely constant across transitions. We also discuss several difficulties in estimating educational transition models which

  14. The Drift Burst Hypothesis

    OpenAIRE

    Christensen, Kim; Oomen, Roel; Renò, Roberto

    2016-01-01

    The Drift Burst Hypothesis postulates the existence of short-lived locally explosive trends in the price paths of financial assets. The recent US equity and Treasury flash crashes can be viewed as two high profile manifestations of such dynamics, but we argue that drift bursts of varying magnitude are an expected and regular occurrence in financial markets that can arise through established mechanisms such as feedback trading. At a theoretical level, we show how to build drift bursts into the...

  15. Bayesian Hypothesis Testing

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, Stephen A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sigeti, David E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-15

    These are a set of slides about Bayesian hypothesis testing, where many hypotheses are tested. The conclusions are the following: The value of the Bayes factor obtained when using the median of the posterior marginal is almost the minimum value of the Bayes factor. The value of τ² which minimizes the Bayes factor is a reasonable choice for this parameter. This allows a likelihood ratio to be computed which is the least favorable to H0.

  16. The interactive brain hypothesis.

    Science.gov (United States)

    Di Paolo, Ezequiel; De Jaegher, Hanne

    2012-01-01

    Enactive approaches foreground the role of interpersonal interaction in explanations of social understanding. This motivates, in combination with a recent interest in neuroscientific studies involving actual interactions, the question of how interactive processes relate to neural mechanisms involved in social understanding. We introduce the Interactive Brain Hypothesis (IBH) in order to help map the spectrum of possible relations between social interaction and neural processes. The hypothesis states that interactive experience and skills play enabling roles in both the development and current function of social brain mechanisms, even in cases where social understanding happens in the absence of immediate interaction. We examine the plausibility of this hypothesis against developmental and neurobiological evidence and contrast it with the widespread assumption that mindreading is crucial to all social cognition. We describe the elements of social interaction that bear most directly on this hypothesis and discuss the empirical possibilities open to social neuroscience. We propose that the link between coordination dynamics and social understanding can be best grasped by studying transitions between states of coordination. These transitions form part of the self-organization of interaction processes that characterize the dynamics of social engagement. The patterns and synergies of this self-organization help explain how individuals understand each other. Various possibilities for role-taking emerge during interaction, determining a spectrum of participation. This view contrasts sharply with the observational stance that has guided research in social neuroscience until recently. We also introduce the concept of readiness to interact to describe the practices and dispositions that are summoned in situations of social significance (even if not interactive). This latter idea links interactive factors to more classical observational scenarios.

  17. The interactive brain hypothesis

    Directory of Open Access Journals (Sweden)

    Ezequiel Alejandro Di Paolo

    2012-06-01

    Full Text Available Enactive approaches foreground the role of interpersonal interaction in explanations of social understanding. This motivates, in combination with a recent interest in neuroscientific studies involving actual interactions, the question of how interactive processes relate to neural mechanisms involved in social understanding. We introduce the Interactive Brain Hypothesis in order to help map the possible relations between interaction and neural processes. The hypothesis states that interactive experience and skills play enabling roles in both the development and current function of social brain mechanisms, even in cases where social understanding happens in the absence of immediate interaction. We examine the plausibility of this hypothesis against developmental and neurobiological evidence and contrast it with the widespread assumption that mindreading is crucial to all social cognition. We describe the elements of social interaction that bear most directly on this hypothesis and discuss the empirical possibilities open to social neuroscience. We propose that the link between coordination dynamics and social understanding can be best grasped by studying transitions between states of coordination. These transitions form part of the self-organisation of interaction processes that characterise the dynamics of social engagement. The patterns and synergies of this self-organisation help explain how individuals understand each other. Various possibilities for role-taking emerge during interaction, determining a spectrum of participation. This view contrasts sharply with the observational stance that has guided research in social neuroscience until recently. We also introduce the concept of readiness to interact to describe the developed practices and dispositions that are summoned in situations of social significance (even if not interactive). This latter idea could link interactive factors to more classical observational scenarios.

  18. Novel web service selection model based on discrete group search.

    Science.gov (United States)

    Zhai, Jie; Shao, Zhiqing; Guo, Yi; Zhang, Haiteng

    2014-01-01

    In our earlier work, we present a novel formal method for the semiautomatic verification of specifications and for describing web service composition components by using abstract concepts. After verification, the instantiations of components were selected to satisfy the complex service performance constraints. However, selecting an optimal instantiation, which comprises different candidate services for each generic service, from a large number of instantiations is difficult. Therefore, we present a new evolutionary approach on the basis of the discrete group search service (D-GSS) model. With regard to obtaining the optimal multiconstraint instantiation of the complex component, the D-GSS model has competitive performance compared with other service selection models in terms of accuracy, efficiency, and ability to solve high-dimensional service composition component problems. We propose the cost function and the discrete group search optimizer (D-GSO) algorithm and study the convergence of the D-GSS model through verification and test cases.

  19. A test of the central-marginal hypothesis using population genetics and ecological niche modelling in an endemic salamander (Ambystoma barbouri).

    Science.gov (United States)

    Micheletti, Steven J; Storfer, Andrew

    2015-03-01

    The central-marginal hypothesis (CMH) predicts that population size, genetic diversity and genetic connectivity are highest at the core and decrease near the edges of species' geographic distributions. We provide a test of the CMH using three replicated core-to-edge transects that encompass nearly the entire geographic range of the endemic streamside salamander (Ambystoma barbouri). We confirmed that the mapped core of the distribution was the most suitable habitat using ecological niche modelling (ENM) and via genetic estimates of effective population sizes. As predicted by the CMH, we found statistical support for decreased genetic diversity, effective population size and genetic connectivity from core to edge in western and northern transects, yet not along a southern transect. Based on our niche model, habitat suitability is lower towards the southern range edge, presumably leading to conflicting core-to-edge genetic patterns. These results suggest that multiple processes may influence a species' distribution based on the heterogeneity of habitat across a species' range and that replicated sampling may be needed to accurately test the CMH. Our work also emphasizes the importance of identifying the geographic range core with methods other than using the Euclidean centre on a map, which may help to explain discrepancies among other empirical tests of the CMH. Assessing core-to-edge population genetic patterns across an entire species' range accompanied with ENM can inform our general understanding of the mechanisms leading to species' geographic range limits. © 2015 John Wiley & Sons Ltd.

  20. Is the fluid mosaic (and the accompanying raft hypothesis) a suitable model to describe fundamental features of biological membranes? What may be missing?

    Directory of Open Access Journals (Sweden)

    Luis Alberto Bagatolli

    2013-11-01

    Full Text Available The structure, dynamics, and stability of lipid bilayers are controlled by thermodynamic forces, leading to overall tensionless membranes with a distinct lateral organization and a conspicuous lateral pressure profile. Bilayers are also subject to built-in curvature-stress instabilities that may be released locally or globally in terms of morphological changes leading to the formation of non-lamellar and curved structures. A key controller of the bilayer’s propensity to form curved structures is the average molecular shape of the different lipid molecules. Via the curvature stress, molecular shape mediates a coupling to membrane-protein function and provides a set of physical mechanisms for formation of lipid domains and laterally differentiated regions in the plane of the membrane. Unfortunately, these relevant physical features of membranes are often ignored in the most popular models for biological membranes. Results from a number of experimental and theoretical studies emphasize the significance of these fundamental physical properties and call for a refinement of the fluid mosaic model (and the accompanying raft hypothesis).

  1. Selection of climate change scenario data for impact modelling

    DEFF Research Database (Denmark)

    Sloth Madsen, M; Fox Maule, C; MacKellar, N

    2012-01-01

    Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study illustrates how the projected climate change signal of important variables such as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make the climate projections suitable for impact analysis at the local scale a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad-hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented

  2. Adverse Selection Models with Three States of Nature

    Directory of Open Access Journals (Sweden)

    Daniela MARINESCU

    2011-02-01

    Full Text Available In the paper we analyze an adverse selection model with three states of nature, where both the Principal and the Agent are risk neutral. When solving the model, we use the informational rents and the efforts as variables. We derive the optimal contract in the situation of asymmetric information. The paper ends with the characteristics of the optimal contract and the main conclusions of the model.

  3. A SUPPLIER SELECTION MODEL FOR SOFTWARE DEVELOPMENT OUTSOURCING

    Directory of Open Access Journals (Sweden)

    Hancu Lucian-Viorel

    2010-12-01

    Full Text Available This paper presents a multi-criteria decision making model used for supplier selection for software development outsourcing on e-marketplaces. This model can be used in auctions. The supplier selection process has become complex and difficult over the last twenty years since the Internet plays an important role in business management. Companies have to concentrate their efforts on their core activities and the other activities should be realized by outsourcing. They can achieve significant cost reduction by using e-marketplaces in their purchase process and by using decision support systems on supplier selection. Many approaches for the supplier evaluation and selection process have been proposed in the literature. The performance of potential suppliers is evaluated using multi-criteria decision making methods rather than considering a single factor such as cost.

  4. Modeling quality attributes and metrics for web service selection

    Science.gov (United States)

    Oskooei, Meysam Ahmadi; Daud, Salwani binti Mohd; Chua, Fang-Fang

    2014-06-01

    Since the service-oriented architecture (SOA) has been designed to develop systems as distributed applications, service selection has become a vital aspect of service-oriented computing (SOC). Selecting the appropriate web service with respect to quality of service (QoS) by using mathematical optimization turns service selection into a common concern for service users. Nowadays, the number of web services that provide the same functionality has increased, and selecting a service from a set of alternatives which differ in quality parameters can be difficult for service consumers. In this paper, a new model for QoS attributes and metrics is proposed to provide a suitable solution for optimizing web service selection and composition with low complexity.

  5. Dream interpretation, affect, and the theory of neuronal group selection: Freud, Winnicott, Bion, and Modell.

    Science.gov (United States)

    Shields, Walker

    2006-12-01

    The author uses a dream specimen as interpreted during psychoanalysis to illustrate Modell's hypothesis that Edelman's theory of neuronal group selection (TNGS) may provide a valuable neurobiological model for Freud's dynamic unconscious, imaginative processes in the mind, the retranscription of memory in psychoanalysis, and intersubjective processes in the analytic relationship. He draws parallels between the interpretation of the dream material with keen attention to affect-laden meanings in the evolving analytic relationship in the domain of psychoanalysis and the principles of Edelman's TNGS in the domain of neurobiology. The author notes how this correlation may underscore the importance of dream interpretation in psychoanalysis. He also suggests areas for further investigation in both realms based on study of their interplay.

  6. [On selection criteria in spatially distributed models of competition].

    Science.gov (United States)

    Il'ichev, V G; Il'icheva, O A

    2014-01-01

    Discrete models of competitors (an initial population and mutants) are considered, in which reproduction is given by an increasing concave function and migration in a space consisting of a set of areas is described by a Markov matrix. This allows the use of the theory of monotone operators to study problems of selection, coexistence and stability. It is shown that as the number of areas grows, increasingly severe constraints on the selective advantage of the initial population are required.

  7. Comparing the staffing models of outsourcing in selected companies

    OpenAIRE

    Chaloupková, Věra

    2010-01-01

    This thesis deals with problems of the takeover of employees in outsourcing. The principal purpose is to compare the staffing models of outsourcing in selected companies. To compare the selected companies, multi-criteria analysis was chosen. This thesis is divided into six chapters. The first chapter is devoted to the theoretical part. This chapter describes basic concepts such as outsourcing, personal aspects, phases of outsourcing projects, communications and culture. The rest of the thesis is devote...

  8. Economic assessment model architecture for AGC/AVLIS selection

    International Nuclear Information System (INIS)

    Hoglund, R.L.

    1984-01-01

    The economic assessment model architecture described provides the flexibility and completeness in economic analysis that the selection between AGC and AVLIS demands. Process models which are technology-specific will provide the first-order responses of process performance and cost to variations in process parameters. The economics models can be used to test the impacts of alternative deployment scenarios for a technology. Enterprise models provide global figures of merit for evaluating the DOE perspective on the uranium enrichment enterprise, and business analysis models compute the financial parameters from the private investor's viewpoint

  9. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

    Selectivity estimation errors, propagated exponentially, can lead to severely sub-optimal plans. Modern optimizers typically maintain one-dimensional statistical summaries and make the attribute value independence and join uniformity assumptions for efficiently estimating selectivities. Therefore, selectivity estimation errors in today's optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all
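
    The gain from dropping the independence assumption shows up even in a toy example: when two attributes are correlated, Pr(A = a) * Pr(B = b) badly underestimates the conjunctive selectivity, while a factorization Pr(A = a) * Pr(B = b | A = a) built from a small two-attribute summary recovers it. The sketch below is only a hypothetical illustration of that factorization idea, not the paper's graphical-model construction; the data-generating process is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy table with two strongly correlated attributes (think city and country).
n = 100_000
a = rng.integers(0, 10, size=n)                                   # attribute A
b = (a + (rng.random(n) < 0.1) * rng.integers(0, 10, size=n)) % 10  # B tracks A

# Predicate: A = 3 AND B = 3.
true_sel = np.mean((a == 3) & (b == 3))

# Attribute value independence assumption -- badly wrong here.
indep = np.mean(a == 3) * np.mean(b == 3)

# Graphical-model style factorisation: Pr(A=3) * Pr(B=3 | A=3),
# where the conditional comes from a small two-attribute summary.
cond = np.mean(b[a == 3] == 3)
factored = np.mean(a == 3) * cond

print(f"true {true_sel:.4f}  independence {indep:.4f}  factored {factored:.4f}")
```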

  10. Genetic signatures of natural selection in a model invasive ascidian

    Science.gov (United States)

    Lin, Yaping; Chen, Yiyong; Yi, Changho; Fong, Jonathan J.; Kim, Won; Rius, Marc; Zhan, Aibin

    2017-03-01

    Invasive species represent promising models to study species’ responses to rapidly changing environments. Although local adaptation frequently occurs during contemporary range expansion, the associated genetic signatures at both population and genomic levels remain largely unknown. Here, we use genome-wide gene-associated microsatellites to investigate genetic signatures of natural selection in a model invasive ascidian, Ciona robusta. Population genetic analyses of 150 individuals sampled in Korea, New Zealand, South Africa and Spain showed significant genetic differentiation among populations. Based on outlier tests, we found a high incidence of signatures of directional selection at 19 loci. Hitchhiking mapping analyses identified 12 directional selective sweep regions, and all selective sweep windows on chromosomes were narrow (~8.9 kb). Further analyses identified 132 candidate genes under selection. When we compared our genetic data and six crucial environmental variables, 16 putatively selected loci showed significant correlation with these environmental variables. This suggests that the local environmental conditions have left significant signatures of selection at both population and genomic levels. Finally, we identified “plastic” genomic regions and genes that are promising regions to investigate evolutionary responses to rapid environmental change in C. robusta.

  11. Ecohydrological model parameter selection for stream health evaluation.

    Science.gov (United States)

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Ross, Dennis M; Zhang, Zhen; Wang, Lizhu; Esfahanian, Abdol-Hossein

    2015-04-01

    Variable selection is a critical step in development of empirical stream health prediction models. This study develops a framework for selecting important in-stream variables to predict four measures of biological integrity: total number of Ephemeroptera, Plecoptera, and Trichoptera (EPT) taxa, family index of biotic integrity (FIBI), Hilsenhoff biotic integrity (HBI), and fish index of biotic integrity (IBI). Over 200 flow regime and water quality variables were calculated using the Hydrologic Index Tool (HIT) and Soil and Water Assessment Tool (SWAT). Streams of the River Raisin watershed in Michigan were grouped using the Strahler stream classification system (orders 1-3 and orders 4-6), k-means clustering technique (two clusters: C1 and C2), and all streams (one grouping). For each grouping, variable selection was performed using Bayesian variable selection, principal component analysis, and Spearman's rank correlation. Following selection of best variable sets, models were developed to predict the measures of biological integrity using adaptive neuro-fuzzy inference systems (ANFIS), a technique well-suited to complex, nonlinear ecological problems. Multiple unique variable sets were identified, all of which differed by selection method and stream grouping. Final best models were mostly built using the Bayesian variable selection method. The most effective stream grouping method varied by health measure, although k-means clustering and grouping by stream order were always superior to models built without grouping. Commonly selected variables were related to streamflow magnitude, rate of change, and seasonal nitrate concentration. Each best model was effective in simulating stream health observations, with EPT taxa validation R2 ranging from 0.67 to 0.92, FIBI ranging from 0.49 to 0.85, HBI from 0.56 to 0.75, and fish IBI at 0.99 for all best models. The comprehensive variable selection and modeling process proposed here is a robust method that extends our

  12. Financial applications of a Tabu search variable selection model

    Directory of Open Access Journals (Sweden)

    Zvi Drezner

    2001-01-01

Full Text Available We illustrate how a comparatively new technique, a Tabu search variable selection model [Drezner, Marcoulides and Salhi (1999)], can be applied efficiently within finance when the researcher must select a subset of variables from among the whole set of explanatory variables under consideration. Several types of problems in finance, including corporate and personal bankruptcy prediction, mortgage and credit scoring, and the selection of variables for the Arbitrage Pricing Model, require the researcher to select a subset of variables from a larger set. In order to demonstrate the usefulness of the Tabu search variable selection model, we: (1) illustrate its efficiency in comparison to the main alternative search procedures, such as stepwise regression and the Maximum R2 procedure, and (2) show how a version of the Tabu search procedure may be implemented when attempting to predict corporate bankruptcy. We accomplish (2) by indicating that a Tabu Search procedure increases the predictability of corporate bankruptcy by up to 10 percentage points in comparison to Altman's (1968) Z-Score model.
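A minimal sketch of the Tabu-search idea applied to variable selection (not Drezner et al.'s exact algorithm): flip one variable in or out of the model at each step, always take the best non-tabu move even when it worsens the fit, and remember the best subset visited. The data and the scoring rule (adjusted R2 under OLS) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 8
X = rng.normal(size=(n, p))
y = X[:, 0] - 2.0 * X[:, 2] + rng.normal(size=n)   # only variables 0 and 2 matter

def adj_r2(mask):
    """Adjusted R-squared of the OLS fit using the variables flagged in mask."""
    if not mask.any():
        return -np.inf
    Xs = np.column_stack([np.ones(n), X[:, mask]])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = float(((y - Xs @ beta) ** 2).sum())
    tss = float(((y - y.mean()) ** 2).sum())
    k = int(mask.sum())
    return 1.0 - (rss / (n - k - 1)) / (tss / (n - 1))

mask = np.zeros(p, dtype=bool)                 # start from the empty model
best_mask, best = mask.copy(), adj_r2(mask)
tabu = []                                      # recently flipped variables are tabu
for _ in range(50):
    moves = [j for j in range(p) if j not in tabu]
    trials = []
    for j in moves:                            # neighbourhood: flip one variable
        m = mask.copy()
        m[j] = not m[j]
        trials.append(adj_r2(m))
    j = moves[int(np.argmax(trials))]
    mask[j] = not mask[j]                      # best move is taken even if worse
    tabu = (tabu + [j])[-3:]                   # tabu tenure of 3
    if adj_r2(mask) > best:
        best, best_mask = adj_r2(mask), mask.copy()

print("selected variables:", np.where(best_mask)[0].tolist(), "adj R2 = %.3f" % best)
```

The tabu list is what lets the search climb out of local optima that trap greedy stepwise procedures.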

  13. Selecting an appropriate genetic evaluation model for selection in a developing dairy sector

    NARCIS (Netherlands)

    McGill, D.M.; Mulder, H.A.; Thomson, P.C.; Lievaart, J.J.

    2014-01-01

    This study aimed to identify genetic evaluation models (GEM) to accurately select cattle for milk production when only limited data are available. It is based on a data set from the Pakistani Sahiwal progeny testing programme which includes records from five government herds, each consisting of 100

  14. Physiologic time: A hypothesis

    Science.gov (United States)

    West, Damien; West, Bruce J.

    2013-06-01

    The scaling of respiratory metabolism with body size in animals is considered by many to be a fundamental law of nature. One apparent consequence of this law is the scaling of physiologic time with body size, implying that physiologic time is separate and distinct from clock time. Physiologic time is manifest in allometry relations for lifespans, cardiac cycles, blood volume circulation, respiratory cycle, along with a number of other physiologic phenomena. Herein we present a theory of physiologic time that explains the allometry relation between time and total body mass averages as entailed by the hypothesis that the fluctuations in the total body mass are described by a scaling probability density.

  15. Event rate and reaction time performance in ADHD: Testing predictions from the state regulation deficit hypothesis using an ex-Gaussian model.

    Science.gov (United States)

    Metin, Baris; Wiersema, Jan R; Verguts, Tom; Gasthuys, Roos; van Der Meere, Jacob J; Roeyers, Herbert; Sonuga-Barke, Edmund

    2014-12-06

    According to the state regulation deficit (SRD) account, ADHD is associated with a problem using effort to maintain an optimal activation state under demanding task settings such as very fast or very slow event rates. This leads to a prediction of disrupted performance at event rate extremes reflected in higher Gaussian response variability that is a putative marker of activation during motor preparation. In the current study, we tested this hypothesis using ex-Gaussian modeling, which distinguishes Gaussian from non-Gaussian variability. Twenty-five children with ADHD and 29 typically developing controls performed a simple Go/No-Go task under four different event-rate conditions. There was an accentuated quadratic relationship between event rate and Gaussian variability in the ADHD group compared to the controls. The children with ADHD had greater Gaussian variability at very fast and very slow event rates but not at moderate event rates. The results provide evidence for the SRD account of ADHD. However, given that this effect did not explain all group differences (some of which were independent of event rate) other cognitive and/or motivational processes are also likely implicated in ADHD performance deficits.
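For readers unfamiliar with the ex-Gaussian: it is the convolution of a Gaussian (mu, sigma) with an exponential (tau), so sigma captures the Gaussian variability discussed above and tau the slow tail. A minimal fitting sketch using scipy's exponnorm distribution (whose shape parameter is K = tau/sigma), run on simulated reaction times rather than the study's data:

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(2)

# Simulated reaction times (ms): Gaussian stage N(mu, sigma) plus exponential tail tau
mu, sigma, tau = 400.0, 40.0, 120.0
rt = rng.normal(mu, sigma, size=500) + rng.exponential(tau, size=500)

# scipy parametrizes the ex-Gaussian as exponnorm(K, loc, scale) with K = tau / sigma
K, loc, scale = exponnorm.fit(rt)
print("mu ~ %.0f, sigma ~ %.0f, tau ~ %.0f" % (loc, scale, K * scale))
```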

  16. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x_t, within the larger set of m+k candidate variables, (x_t, w_t), then selection over the second...... set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w_t are relevant....

  17. Testing the status-legitimacy hypothesis: A multilevel modeling approach to the perception of legitimacy in income distribution in 36 nations.

    Science.gov (United States)

    Caricati, Luca

    2017-01-01

The status-legitimacy hypothesis was tested by analyzing cross-national data about social inequality. Several indicators were used as indexes of social advantage: social class, personal income, and self-position in the social hierarchy. Moreover, inequality and freedom in nations, as indexed by the Gini index and by the human freedom index, were considered. Results from 36 nations worldwide showed no support for the status-legitimacy hypothesis. The perception that income distribution was fair tended to increase as social advantage increased. Moreover, national context increased the difference between advantaged and disadvantaged people in the perception of social fairness: contrary to the status-legitimacy hypothesis, disadvantaged people were more likely than advantaged people to perceive income differences as too large, and this difference increased in nations with greater freedom and equality. The implications for the status-legitimacy hypothesis are discussed.

  18. Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing

    Science.gov (United States)

    Lee, L.; Helsel, D.

    2007-01-01

    Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data-perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply-censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis" where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
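A standard way to apply K-M machinery to left-censored concentrations is to flip the data around a constant larger than the maximum, treat "below detection limit" as right-censoring, and flip back. The sketch below does this with a hand-rolled product-limit estimator on toy data; it is not the authors' S-language software:

```python
import numpy as np

# Toy concentrations; True marks a value below its detection limit (left-censored)
conc     = np.array([0.5, 1.0, 1.0, 2.0, 3.0, 5.0, 0.8, 4.0])
censored = np.array([True, False, True, False, False, False, True, False])

# Flip left-censored concentrations into right-censored "survival times"
M = conc.max() + 1.0
t = M - conc                 # below-detection-limit becomes censored-above
event = ~censored            # detected values are the "events"

# Product-limit (Kaplan-Meier) estimator on the flipped scale
order = np.argsort(t)
t, event = t[order], event[order]
at_risk = np.arange(len(t), 0, -1)
surv = np.cumprod(1.0 - event / at_risk)

# S(t) on the flipped scale equals P(conc < M - t): the concentration CDF
for ti, si in zip(t[event], surv[event]):
    print("P(conc < %.1f) ~ %.3f" % (M - ti, si))
```

Consistent with the abstract's caveat, the resulting empirical CDF is defined only within the observed data range; nothing is extrapolated.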

  19. Selecting an optimal mixed products using grey relationship model

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2013-06-01

Full Text Available This paper presents an integrated supplier selection and inventory management using grey relationship model (GRM) as well as multi-objective decision making process. The proposed model of this paper first ranks different suppliers based on the GRM technique and then determines the optimum level of inventory by considering different objectives. To show the implementation of the proposed model, we use some benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558.]. The preliminary results indicate that the proposed model of this paper is capable of handling different criteria for supplier selection.
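A minimal sketch of grey relational ranking of suppliers (illustrative data, not the Talluri and Baker benchmark): normalize each criterion so that 1 is best, compute grey relational coefficients against the ideal series, and rank by the weighted grade:

```python
import numpy as np

# Hypothetical decision matrix: 4 suppliers x 3 criteria (quality, delivery, cost)
X = np.array([[0.80, 0.90, 12.0],
              [0.95, 0.85, 15.0],
              [0.70, 0.95, 10.0],
              [0.90, 0.80, 11.0]])
benefit = np.array([True, True, False])    # cost is a smaller-is-better criterion
w = np.array([0.5, 0.3, 0.2])              # assumed criteria weights

# Normalize so that 1 is best on every criterion
lo, hi = X.min(axis=0), X.max(axis=0)
N = np.where(benefit, (X - lo) / (hi - lo), (hi - X) / (hi - lo))

# Grey relational coefficients against the ideal reference series (all ones)
delta = np.abs(1.0 - N)
zeta = 0.5                                 # distinguishing coefficient
gamma = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

grade = gamma @ w                          # grey relational grade per supplier
print("grades:", grade.round(3),
      "-> ranking (best first):", np.argsort(grade)[::-1].tolist())
```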

  20. Uniform design based SVM model selection for face recognition

    Science.gov (United States)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

Support vector machine (SVM) has been proved to be a powerful tool for face recognition. The generalization capacity of SVM depends on the model with optimal hyperparameters. The computational cost of SVM model selection results in application difficulty in face recognition. In order to overcome this shortcoming, we utilize the advantages of uniform design (a space-filling, uniformly scattering design theory) to seek optimal SVM hyperparameters. We then propose a face recognition scheme based on SVM with the optimal model, obtained by replacing the grid and gradient-based method with uniform design. The experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
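The idea can be sketched as follows (the eight-run "design" below is a crude good-lattice-point construction, not the uniform design tables used in the paper, and the digits dataset stands in for a face database): scatter a few candidate points over the (C, gamma) plane instead of a dense grid, and keep the cross-validated best:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Eight scattered runs over (log2 C, log2 gamma) instead of an 8 x 8 grid:
# a crude good-lattice-point set, uniform-design-like in spirit
levels = np.arange(8)
u = np.column_stack([levels, (3 * levels) % 8]) / 7.0
Cs     = 2.0 ** (-2.0 + 10.0 * u[:, 0])
gammas = 2.0 ** (-10.0 + 10.0 * u[:, 1])

results = [(cross_val_score(SVC(C=C, gamma=g), X, y, cv=3).mean(), C, g)
           for C, g in zip(Cs, gammas)]
acc, C, g = max(results)
print("best of 8 runs: CV accuracy %.3f at C=%.3g, gamma=%.3g" % (acc, C, g))
```

Eight model fits replace the sixty-four a full grid would need, which is the efficiency argument the abstract makes.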

  1. Sample selection and taste correlation in discrete choice transport modelling

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2008-01-01

    of taste correlation in willingness-to-pay estimation are presented. The first contribution addresses how to incorporate taste correlation in the estimation of the value of travel time for public transport. Given a limited dataset the approach taken is to use theory on the value of travel time as guidance...... many issues that deserve attention. This thesis investigates how sample selection can affect estimation of discrete choice models and how taste correlation should be incorporated into applied mixed logit estimation. Sampling in transport modelling is often based on an observed trip. This may cause...... a sample to be choice-based or governed by a self-selection mechanism. In both cases, there is a possibility that sampling affects the estimation of a population model. It was established in the seventies how choice-based sampling affects the estimation of multinomial logit models. The thesis examines...

  2. Spatial Fleming-Viot models with selection and mutation

    CERN Document Server

    Dawson, Donald A

    2014-01-01

    This book constructs a rigorous framework for analysing selected phenomena in evolutionary theory of populations arising due to the combined effects of migration, selection and mutation in a spatial stochastic population model, namely the evolution towards fitter and fitter types through punctuated equilibria. The discussion is based on a number of new methods, in particular multiple scale analysis, nonlinear Markov processes and their entrance laws, atomic measure-valued evolutions and new forms of duality (for state-dependent mutation and multitype selection) which are used to prove ergodic theorems in this context and are applicable for many other questions and renormalization analysis for a variety of phenomena (stasis, punctuated equilibrium, failure of naive branching approximations, biodiversity) which occur due to the combination of rare mutation, mutation, resampling, migration and selection and make it necessary to mathematically bridge the gap (in the limit) between time and space scales.

  3. Evidence accumulation as a model for lexical selection.

    Science.gov (United States)

    Anders, R; Riès, S; van Maanen, L; Alario, F X

    2015-11-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process related to selecting a lexical target from a number of alternatives, which each have varying activations (or signal supports), that are largely resultant of an initial stimulus recognition. We thoroughly present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related or combined with conventional psycholinguistic theory and their simulatory instantiations (generally, neural network models). Then with a demonstrative application on a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory, and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Integrated model for supplier selection and performance evaluation

    Directory of Open Access Journals (Sweden)

    Borges de Araújo, Maria Creuza

    2015-08-01

Full Text Available This paper puts forward a model for selecting suppliers and evaluating the performance of those already working with a company. A simulation was conducted in the food industry, a sector of high significance in the Brazilian economy. The model enables the phases of selecting and evaluating suppliers to be integrated. This is important so that a company can have partnerships with suppliers who are able to meet its needs. Additionally, a group method is used to enable managers who will be affected by this decision to take part in the selection stage. Finally, the classes resulting from the performance evaluation are shown to support the contractor in choosing the most appropriate relationship with its suppliers.

  5. The Selection of ARIMA Models with or without Regressors

    DEFF Research Database (Denmark)

    Johansen, Søren; Riani, Marco; Atkinson, Anthony C.

    We develop a $C_{p}$ statistic for the selection of regression models with stationary and nonstationary ARIMA error term. We derive the asymptotic theory of the maximum likelihood estimators and show they are consistent and asymptotically Gaussian. We also prove that the distribution of the sum o...

  6. Selecting candidate predictor variables for the modelling of post ...

    African Journals Online (AJOL)

Selecting candidate predictor variables for the modelling of post-discharge mortality from sepsis: a protocol development project. Afri. Health Sci. .... Initial list of candidate predictor variables, N=17 (partial):

Clinical                    | Laboratory    | Social/Demographic
Vital signs (HR, RR, BP, T) | Hemoglobin    | Age
Oxygen saturation           | Blood culture | Sex

  7. Computationally efficient thermal-mechanical modelling of selective laser melting

    NARCIS (Netherlands)

    Yang, Y.; Ayas, C.; Brabazon, Dermot; Naher, Sumsun; Ul Ahad, Inam

    2017-01-01

Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method to produce high-density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is

  8. Multivariate time series modeling of selected childhood diseases in ...

    African Journals Online (AJOL)

    This paper is focused on modeling the five most prevalent childhood diseases in Akwa Ibom State using a multivariate approach to time series. An aggregate of 78,839 reported cases of malaria, upper respiratory tract infection (URTI), Pneumonia, anaemia and tetanus were extracted from five randomly selected hospitals in ...

  9. Hypothesis testing in hydrology: Theory and practice

    Science.gov (United States)

    Kirchner, James; Pfister, Laurent

    2017-04-01

    Well-posed hypothesis tests have spurred major advances in hydrological theory. However, a random sample of recent research papers suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias - the tendency to value and trust confirmations more than refutations - among both researchers and reviewers. Hypothesis testing is not the only recipe for scientific progress, however: exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.

  10. Model selection for the extraction of movement primitives

    Directory of Open Access Journals (Sweden)

    Dominik M Endres

    2013-12-01

Full Text Available A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model. However, choosing the parameters of these models, or indeed choosing the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion which allows one to select the model type, the number of primitives and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground truth data, showing that it performs at least as well as traditional model selection criteria (the Bayesian information criterion, BIC, and the Akaike information criterion, AIC). Then, we analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources can best account for the data.

  11. On selection of optimal stochastic model for accelerated life testing

    International Nuclear Information System (INIS)

    Volf, P.; Timková, J.

    2014-01-01

    This paper deals with the problem of proper lifetime model selection in the context of statistical reliability analysis. Namely, we consider regression models describing the dependence of failure intensities on a covariate, for instance, a stressor. Testing the model fit is standardly based on the so-called martingale residuals. Their analysis has already been studied by many authors. Nevertheless, the Bayes approach to the problem, in spite of its advantages, is just developing. We shall present the Bayes procedure of estimation in several semi-parametric regression models of failure intensity. Then, our main concern is the Bayes construction of residual processes and goodness-of-fit tests based on them. The method is illustrated with both artificial and real-data examples. - Highlights: • Statistical survival and reliability analysis and Bayes approach. • Bayes semi-parametric regression modeling in Cox's and AFT models. • Bayes version of martingale residuals and goodness-of-fit test

  12. Model building strategy for logistic regression: purposeful selection.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-03-01

Logistic regression is one of the most commonly used models to account for confounders in the medical literature. This article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable will have a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment of the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness-of-fit (GOF); in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
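The core likelihood ratio step can be sketched in a few lines. The article works in R; the sketch below uses Python's statsmodels instead, with simulated data and hypothetical covariate names:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(3)
n = 500
age, sofa, noise = rng.normal(size=(3, n))
lin = -1.0 + 0.8 * age + 1.2 * sofa            # 'noise' has no true effect
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

X_full = sm.add_constant(np.column_stack([age, sofa, noise]))
X_red  = sm.add_constant(np.column_stack([age, sofa]))

full = sm.Logit(y, X_full).fit(disp=0)
red  = sm.Logit(y, X_red).fit(disp=0)

# Likelihood ratio test: does deleting 'noise' significantly worsen the fit?
lr = 2.0 * (full.llf - red.llf)
p = chi2.sf(lr, df=1)
print("LR = %.2f, p = %.3f -> %s" % (lr, p, "keep the variable" if p < 0.05 else "drop it"))
```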

  13. Statistical modelling in biostatistics and bioinformatics selected papers

    CERN Document Server

    Peng, Defen

    2014-01-01

    This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...

  14. Bayesian Variable Selection on Model Spaces Constrained by Heredity Conditions.

    Science.gov (United States)

    Taylor-Rodriguez, Daniel; Womack, Andrew; Bliznyuk, Nikolay

    2016-01-01

    This paper investigates Bayesian variable selection when there is a hierarchical dependence structure on the inclusion of predictors in the model. In particular, we study the type of dependence found in polynomial response surfaces of orders two and higher, whose model spaces are required to satisfy weak or strong heredity conditions. These conditions restrict the inclusion of higher-order terms depending upon the inclusion of lower-order parent terms. We develop classes of priors on the model space, investigate their theoretical and finite sample properties, and provide a Metropolis-Hastings algorithm for searching the space of models. The tools proposed allow fast and thorough exploration of model spaces that account for hierarchical polynomial structure in the predictors and provide control of the inclusion of false positives in high posterior probability models.

  15. Generalized Selectivity Description for Polymeric Ion-Selective Electrodes Based on the Phase Boundary Potential Model.

    Science.gov (United States)

    Bakker, Eric

    2010-02-15

    A generalized description of the response behavior of potentiometric polymer membrane ion-selective electrodes is presented on the basis of ion-exchange equilibrium considerations at the sample-membrane interface. This paper includes and extends on previously reported theoretical advances in a more compact yet more comprehensive form. Specifically, the phase boundary potential model is used to derive the origin of the Nernstian response behavior in a single expression, which is valid for a membrane containing any charge type and complex stoichiometry of ionophore and ion-exchanger. This forms the basis for a generalized expression of the selectivity coefficient, which may be used for the selectivity optimization of ion-selective membranes containing electrically charged and neutral ionophores of any desired stoichiometry. It is shown to reduce to expressions published previously for specialized cases, and may be effectively applied to problems relevant in modern potentiometry. The treatment is extended to mixed ion solutions, offering a comprehensive yet formally compact derivation of the response behavior of ion-selective electrodes to a mixture of ions of any desired charge. It is compared to predictions by the less accurate Nicolsky-Eisenman equation. The influence of ion fluxes or any form of electrochemical excitation is not considered here, but may be readily incorporated if an ion-exchange equilibrium at the interface may be assumed in these cases.

  16. A model for the sustainable selection of building envelope assemblies

    International Nuclear Information System (INIS)

    Huedo, Patricia; Mulet, Elena; López-Mesa, Belinda

    2016-01-01

    The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.

  17. A New Approach to Model Verification, Falsification and Selection

    Directory of Open Access Journals (Sweden)

    Andrew J. Buck

    2015-06-01

Full Text Available This paper shows that a qualitative analysis, i.e., an assessment of the consistency of a hypothesized sign pattern for structural arrays with the sign pattern of the estimated reduced form, can always provide decisive insight into a model’s validity, both in general and compared to other models. Qualitative analysis can show that it is impossible for some models to have generated the data used to estimate the reduced form, even though standard specification tests might show the model to be adequate. A partially specified structural hypothesis can be falsified by estimating as few as one reduced form equation. Zero restrictions in the structure can themselves be falsified. It is further shown how the information content of the hypothesized structural sign patterns can be measured using a commonly applied concept of statistical entropy. The lower the hypothesized structural sign pattern’s entropy, the more a priori information it proposes about the sign pattern of the estimated reduced form. As a hypothesized structural sign pattern’s entropy decreases, it becomes more subject to Type I error and less subject to Type II error. Three cases illustrate the approach taken here.

  18. PROPOSAL OF AN EMPIRICAL MODEL FOR SUPPLIERS SELECTION

    Directory of Open Access Journals (Sweden)

    Paulo Ávila

    2015-03-01

Full Text Available The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless it is a critical process that significantly affects the operational performance of each company. In this work, through the literature review, five broad supplier selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Within these criteria, five sub-criteria were also included. Thereafter, a survey was elaborated and companies were contacted in order to answer which factors have more relevance in their decisions to choose suppliers. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or the Simple Multi-Attribute Rating Technique (SMART). The result of the research undertaken by the authors is a reference model that represents a decision making support for the suppliers/partners selection process.
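A minimal sketch of the linear weighting step (SMART-style), with hypothetical ratings and weights standing in for the survey results:

```python
import numpy as np

# Hypothetical ratings of 3 candidate suppliers on the five broad criteria (0-10)
criteria = ["Quality", "Financial", "Synergies", "Cost", "Production System"]
scores = np.array([[8, 6, 7, 5, 9],
                   [6, 8, 5, 9, 6],
                   [9, 5, 8, 4, 7]], dtype=float)
weights = np.array([0.30, 0.15, 0.10, 0.25, 0.20])   # assumed survey-derived weights

value = (scores / 10.0) @ weights     # weighted sum of normalized scores
print("overall values:", value.round(3), "-> choose supplier", int(np.argmax(value)) + 1)
```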

  19. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

Full Text Available Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than with a standard asymmetric data generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity on asymmetric price transmission model comparison and selection.

  20. Broken selection rule in the quantum Rabi model.

    Science.gov (United States)

    Forn-Díaz, P; Romero, G; Harmans, C J P M; Solano, E; Mooij, J E

    2016-06-07

Understanding the interaction between light and matter is very relevant for fundamental studies of quantum electrodynamics and for the development of quantum technologies. The quantum Rabi model captures the physics of a single atom interacting with a single photon at all regimes of coupling strength. We report the spectroscopic observation of a resonant transition that breaks a selection rule in the quantum Rabi model, implemented using an LC resonator and an artificial atom, a superconducting qubit. The eigenstates of the system consist of a superposition of bare qubit-resonator states with a relative sign. When the qubit-resonator coupling strength is negligible compared to their own frequencies, the matrix element between excited eigenstates of different sign is very small in the presence of a resonator drive, establishing a sign-preserving selection rule. Here, our qubit-resonator system operates in the ultrastrong coupling regime, where the coupling strength is 10% of the resonator frequency, allowing sign-changing transitions to be activated and, therefore, detected. This work shows that sign-changing transitions are an unambiguous, distinctive signature of systems operating in the ultrastrong coupling regime of the quantum Rabi model. These results pave the way to further studies of sign-preserving selection rules in multiqubit and multiphoton models.

  1. Models of cultural niche construction with selection and assortative mating.

    Science.gov (United States)

    Creanza, Nicole; Fogarty, Laurel; Feldman, Marcus W

    2012-01-01

    Niche construction is a process through which organisms modify their environment and, as a result, alter the selection pressures on themselves and other species. In cultural niche construction, one or more cultural traits can influence the evolution of other cultural or biological traits by affecting the social environment in which the latter traits may evolve. Cultural niche construction may include either gene-culture or culture-culture interactions. Here we develop a model of this process and suggest some applications of this model. We examine the interactions between cultural transmission, selection, and assorting, paying particular attention to the complexities that arise when selection and assorting are both present, in which case stable polymorphisms of all cultural phenotypes are possible. We compare our model to a recent model for the joint evolution of religion and fertility and discuss other potential applications of cultural niche construction theory, including the evolution and maintenance of large-scale human conflict and the relationship between sex ratio bias and marriage customs. The evolutionary framework we introduce begins to address complexities that arise in the quantitative analysis of multiple interacting cultural traits.

  3. On the two steps threshold selection for over-threshold modelling of extreme events

    Science.gov (United States)

    Bernardara, Pietro; Mazas, Franck; Weiss, Jerome; Andreewsky, Marc; Kergadallan, Xavier; Benoit, Michel; Hamm, Luc

    2013-04-01

The estimation of the probability of occurrence of extreme events is traditionally achieved by fitting a probability distribution to a sample of extreme observations. In particular, the extreme value theory (EVT) states that values exceeding a given threshold converge to a Generalized Pareto Distribution (GPD) if the original sample is composed of independent and identically distributed values. However, the temporal series of sea and ocean variables usually show strong temporal autocorrelation. Traditionally, in order to select independent events for the subsequent statistical analysis, the concept of a physical threshold is introduced: events that exceed that threshold are defined as "extreme events". This is the so-called "Peak Over Threshold (POT)" sampling, widespread in the literature and currently used for engineering applications, among many others. In the past, the threshold for the statistical sampling of extreme values asymptotically convergent toward the GPD and the threshold for the physical selection of independent extreme events were confused, as the same threshold was used both for sampling the data and for meeting the hypothesis of extreme value convergence, leading to some incoherencies. In particular, if the two steps are performed simultaneously, the number of peaks over the threshold can increase but also decrease when the threshold decreases. This is logical from a physical point of view, since the definition of the sample of "extreme events" changes, but it is not coherent with the statistical theory. We introduce a two-step threshold selection for over-threshold modelling, aiming to discriminate (i) a physical threshold for the selection of extreme and independent events, and (ii) a statistical threshold for the optimization of the coherence with the hypothesis of the EVT. The former is a physical event identification procedure (also called "declustering") aiming at selecting independent extreme events. The latter is a purely statistical optimization
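A sketch of the two-step procedure on synthetic data (the thresholds, window length and toy series are assumptions, not the paper's values): decluster above a physical threshold first, then fit the GPD above a separate statistical threshold:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)
series = rng.gumbel(loc=2.0, scale=0.8, size=5000)   # toy hourly sea-level series

# Step 1 (physical threshold + declustering): keep one peak per 48-sample window
u_phys = 4.0
peaks, i = [], 0
while i < len(series):
    if series[i] > u_phys:
        j = min(i + 48, len(series))
        peaks.append(series[i:j].max())              # one independent event
        i = j
    else:
        i += 1
peaks = np.array(peaks)

# Step 2 (statistical threshold): fit the GPD to exceedances of a higher threshold
u_stat = 4.5
exc = peaks[peaks > u_stat] - u_stat
shape, _, scale = genpareto.fit(exc, floc=0.0)
print("%d events, GPD shape %.2f, scale %.2f" % (len(exc), shape, scale))
```

Because the two thresholds are decoupled, u_stat can be tuned for statistical convergence without redefining what counts as a physical event.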

  4. Advanced Diffusion-weighted Imaging Modeling for Prostate Cancer Characterization: Correlation with Quantitative Histopathologic Tumor Tissue Composition-A Hypothesis-generating Study.

    Science.gov (United States)

    Hectors, Stefanie J; Semaan, Sahar; Song, Christopher; Lewis, Sara; Haines, George K; Tewari, Ashutosh; Rastinehad, Ardeshir R; Taouli, Bachir

    2018-03-01

Purpose To correlate quantitative diffusion-weighted imaging (DWI) parameters derived from conventional monoexponential DWI, stretched exponential DWI, diffusion kurtosis imaging (DKI), and diffusion-tensor imaging (DTI) with quantitative histopathologic tumor tissue composition in prostate cancer in a preliminary hypothesis-generating study. Materials and Methods This retrospective institutional review board-approved study included 24 patients with prostate cancer (mean age, 63 years) who underwent magnetic resonance (MR) imaging, including high-b-value DWI and DTI at 3.0 T, before prostatectomy. The following parameters were calculated in index tumors and nontumoral peripheral zone (PZ): apparent diffusion coefficient (ADC) obtained with monoexponential fit (ADC_ME), ADC obtained with stretched exponential modeling (ADC_SE), anomalous exponent (α) obtained at stretched exponential DWI, ADC obtained with DKI modeling (ADC_DKI), kurtosis with DKI, ADC obtained with DTI (ADC_DTI), and fractional anisotropy (FA) at DTI. Parameters in prostate cancer and PZ were compared by using paired Student t tests. Pearson correlations between tumor DWI and quantitative histologic parameters (nuclear, cytoplasmic, cellular, stromal, luminal fractions) were determined. Results All DWI parameters were significantly different between prostate cancer and PZ (P < .012). ADC_ME, ADC_SE, and ADC_DKI all showed significant negative correlation with cytoplasmic and cellular fractions (r = -0.546 to -0.435; P < .034) and positive correlation with stromal fractions (r = 0.619-0.669; P < .001). ADC_DTI and FA showed correlation only with stromal fraction (r = 0.512 and -0.413, respectively; P < .045). α did not correlate with histologic parameters, whereas kurtosis showed significant correlations with histopathologic parameters (r = 0.487, 0.485, -0.422 for cytoplasmic, cellular, and stromal fractions, respectively; P < .040). Conclusion Advanced DWI methods showed significant

  5. Selection of Models for Ingestion Pathway and Relocation Radii Determination

    International Nuclear Information System (INIS)

    Blanchard, A.

    1998-01-01

    The distance at which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models were considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities

  6. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
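The Monte Carlo step can be sketched independently of the finite-volume model: sample the uncertain inputs, push them through the (here, deliberately toy) process model, and read off output intervals and a reliability figure. The melt-depth formula and spec limit below are placeholders, not the paper's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000

# Uncertain process inputs (means and spreads are illustrative only)
power = rng.normal(200.0, 5.0, n)       # laser power, W
speed = rng.normal(800.0, 30.0, n)      # scan speed, mm/s

# Placeholder response: melt depth grows with line energy (power / speed)
depth = 300.0 * (power / speed) ** 0.8  # toy calibration, microns

lo, med, hi = np.percentile(depth, [2.5, 50, 97.5])
reliability = (depth >= 90.0).mean()    # spec limit assumed for illustration
print("depth: median %.1f um, 95%% interval [%.1f, %.1f] um" % (med, lo, hi))
print("P(depth >= 90 um) = %.3f" % reliability)
```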

  7. Modelling Technical and Economic Parameters in Selection of Manufacturing Devices

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2017-11-01

Full Text Available Sustainable science and technology development is also conditioned by the continuous development of the means of production, which play a key role in the structure of each production system. The mechanical nature of the means of production is complemented by controlling and electronic devices in the context of intelligent industry. In practice, the selection of production machines for a technological process or technological project has so far been resolved often only intuitively. With increasing intelligence, the number of variable parameters that have to be considered when choosing a production device is also increasing. It is necessary to use computing techniques and decision-making methods, from heuristic methods to more precise methodological procedures, during the selection. The authors present an innovative model for the optimization of technical and economic parameters in the selection of manufacturing devices for Industry 4.0.

  8. Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2015-01-01

    We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves substantia...

  9. Quantitative modeling of selective lysosomal targeting for drug design

    DEFF Research Database (Denmark)

    Trapp, Stefan; Rosania, G.; Horobin, R.W.

    2008-01-01

    Lysosomes are acidic organelles and are involved in various diseases, the most prominent is malaria. Accumulation of molecules in the cell by diffusion from the external solution into cytosol, lysosome and mitochondrium was calculated with the Fick–Nernst–Planck equation. The cell model considers...... the diffusion of neutral and ionic molecules across biomembranes, protonation to mono- or bivalent ions, adsorption to lipids, and electrical attraction or repulsion. Based on simulation results, high and selective accumulation in lysosomes was found for weak mono- and bivalent bases with intermediate to high...... predicted by the model and three were close. Five of the antimalarial drugs were lipophilic weak dibasic compounds. The predicted optimum properties for a selective accumulation of weak bivalent bases in lysosomes are consistent with experimental values and are more accurate than any prior calculation...

  10. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    Science.gov (United States)

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Generalized Degrees of Freedom and Adaptive Model Selection in Linear Mixed-Effects Models.

    Science.gov (United States)

    Zhang, Bo; Shen, Xiaotong; Mumford, Sunni L

    2012-03-01

    Linear mixed-effects models involve fixed effects, random effects and covariance structure, which require model selection to simplify a model and to enhance its interpretability and predictability. In this article, we develop, in the context of linear mixed-effects models, the generalized degrees of freedom and an adaptive model selection procedure defined by a data-driven model complexity penalty. Numerically, the procedure performs well against its competitors not only in selecting fixed effects but in selecting random effects and covariance structure as well. Theoretically, asymptotic optimality of the proposed methodology is established over a class of information criteria. The proposed methodology is applied to the BioCycle study, to determine predictors of hormone levels among premenopausal women and to assess variation in hormone levels both between and within women across the menstrual cycle.

  12. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Secondly, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
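A minimal sketch of the extended-Kalman-filter idea on a toy biological model (logistic growth with an unknown rate, not the paper's heat-shock or gene-regulation examples): augment the state with the parameter and let the filter estimate both from noisy measurements:

```python
import numpy as np

rng = np.random.default_rng(6)
dt, cap, r_true = 0.1, 10.0, 0.9

# Simulate logistic growth and noisy measurements of abundance
x, ys = 0.5, []
for _ in range(200):
    x = x + dt * r_true * x * (1.0 - x / cap)
    ys.append(x + rng.normal(scale=0.2))

# EKF on the augmented state z = [x, r]; only x is measured
z = np.array([0.5, 0.3])                  # deliberately wrong initial guess for r
P = np.diag([1.0, 1.0])
Q = np.diag([1e-4, 1e-6])                 # small process noise on the parameter
R = 0.2 ** 2
H = np.array([[1.0, 0.0]])

for y in ys:
    xk, rk = z                            # --- predict ---
    z = np.array([xk + dt * rk * xk * (1.0 - xk / cap), rk])
    F = np.array([[1.0 + dt * rk * (1.0 - 2.0 * xk / cap), dt * xk * (1.0 - xk / cap)],
                  [0.0, 1.0]])
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                   # --- update ---
    K = P @ H.T / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated growth rate r = %.2f (true %.2f)" % (z[1], r_true))
```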

  13. Pareto-Optimal Model Selection via SPRINT-Race.

    Science.gov (United States)

    Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2018-02-01

In machine learning, the notion of multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that optimize by compromising more than one predefined objective simultaneously. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, which is based on the sequential probability ratio with indifference-zone test. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal models or mistakenly returning any clearly dominated models is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied to two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.

  14. Models of speciation by sexual selection on polygenic traits

    OpenAIRE

    Lande, Russell

    1981-01-01

    The joint evolution of female mating preferences and secondary sexual characters of males is modeled for polygamous species in which males provide only genetic material to the next generation and females have many potential mates to choose among. Despite stabilizing natural selection on males, various types of mating preferences may create a runaway process in which the outcome of phenotypic evolution depends critically on the genetic variation parameters and initial conditions of a populatio...

  15. A Model of Social Selection and Successful Altruism

    Science.gov (United States)

    1989-10-07

D., The evolution of social behavior. Annual Reviews of Ecological Systems, 5:325-383 (1974). 2. Dawkins, R., The selfish gene. Oxford: Oxford... alive and well. It will be important to re-examine this striking historical experience, not in terms of oversimplified models of the "selfish gene," but... Darwinian Analysis: The acceptance by many modern geneticists of the axiom that the basic unit of selection is the "selfish gene" quickly led to the

  16. A Bayesian Technique for Selecting a Linear Forecasting Model

    OpenAIRE

    Ramona L. Trader

    1983-01-01

    The specification of a forecasting model is considered in the context of linear multiple regression. Several potential predictor variables are available, but some of them convey little information about the dependent variable which is to be predicted. A technique for selecting the "best" set of predictors which takes into account the inherent uncertainty in prediction is detailed. In addition to current data, there is often substantial expert opinion available which is relevant to the forecas...

  17. A decision model for energy resource selection in China

    International Nuclear Information System (INIS)

    Wang Bing; Kocaoglu, Dundar F.; Daim, Tugrul U.; Yang Jiting

    2010-01-01

    This paper evaluates coal, petroleum, natural gas, nuclear energy and renewable energy resources as energy alternatives for China through use of a hierarchical decision model. The results indicate that although coal is still the major preferred energy alternative, it is followed closely by renewable energy. The sensitivity analysis indicates that the most critical criterion for energy selection is the current energy infrastructure. A hierarchical decision model is used, and expert judgments are quantified, to evaluate the alternatives. Criteria used for the evaluations are availability, current energy infrastructure, price, safety, environmental impacts and social impacts.

  18. Selection of key terrain attributes for SOC model

    DEFF Research Database (Denmark)

    Greve, Mogens Humlekrog; Adhikari, Kabindra; Chellasamy, Menaka

As an important component of the global carbon pool, soil organic carbon (SOC) plays an important role in the global carbon cycle. The SOC pool is the basic information needed to carry out global warming research and for the sustainable use of land resources. Digital terrain attributes are often use...... was selected; in total, 2,514,820 data mining models were constructed from 71 difference grids (from 12 m to 2304 m) and 22 attributes, 21 attributes derived from the DTM plus the original elevation. The relative importance and usage of each attribute in every model were calculated. Comprehensive impact rates of each attribute...

  19. Selecting, weeding, and weighting biased climate model ensembles

    Science.gov (United States)

    Jackson, C. S.; Picton, J.; Huerta, G.; Nosedal Sanchez, A.

    2012-12-01

In the Bayesian formulation, the "log-likelihood" is a test statistic for selecting, weeding, or weighting climate model ensembles with observational data. This statistic has the potential to synthesize the physical and data constraints on quantities of interest. One of the thorny issues in formulating the log-likelihood is how one should account for biases. While in the past we have included a generic discrepancy term, not all biases affect predictions of quantities of interest. We make use of a 165-member ensemble of CAM3.1/slab ocean climate models with different parameter settings to think through the issues that are involved in predicting each model's sensitivity to greenhouse gas forcing given what can be observed from the base state. In particular we use multivariate empirical orthogonal functions to decompose the differences that exist among this ensemble to discover what fields and regions matter to the model's sensitivity. We find that the differences that matter are a small fraction of the total discrepancy. Moreover, weighting members of the ensemble using this knowledge does a relatively poor job of adjusting the ensemble mean toward the known answer. This points out the shortcomings of using weights to correct for biases in climate model ensembles created by a selection process that does not emphasize the priorities of your log-likelihood.
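The log-likelihood weighting itself is simple to sketch; the toy example below (made-up numbers, Gaussian errors with a known scale assumed) also illustrates the point above, namely that member biases make the weights collapse onto a few members:

```python
import numpy as np

rng = np.random.default_rng(7)
n_members, n_obs = 20, 50

obs = rng.normal(0.0, 1.0, n_obs)
# Each member = observations + its own bias + noise (toy stand-in for model error)
bias = rng.normal(0.0, 0.5, n_members)
members = obs + bias[:, None] + rng.normal(0.0, 0.3, (n_members, n_obs))

# Gaussian log-likelihood of each member given the observations
sigma2 = 0.3 ** 2
loglik = -0.5 * ((members - obs) ** 2).sum(axis=1) / sigma2

# Normalized weights; subtracting the max keeps the exponentials stable
w = np.exp(loglik - loglik.max())
w /= w.sum()
print("effective ensemble size: %.1f of %d" % (1.0 / (w ** 2).sum(), n_members))
print("largest single weight: %.2f" % w.max())
```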

  20. Optimal foraging in marine ecosystem models: selectivity, profitability and switching

    DEFF Research Database (Denmark)

    Visser, Andre W.; Fiksen, Ø.

    2013-01-01

    ecological mechanics and evolutionary logic as a solution to diet selection in ecosystem models. When a predator can consume a range of prey items it has to choose which foraging mode to use, which prey to ignore and which ones to pursue, and animals are known to be particularly skilled in adapting...... to the preference functions commonly used in models today. Indeed, depending on prey class resolution, optimal foraging can yield feeding rates that are considerably different from the ‘switching functions’ often applied in marine ecosystem models. Dietary inclusion is dictated by two optimality choices: 1...... by letting predators maximize energy intake or more properly, some measure of fitness where predation risk and cost are also included. An optimal foraging or fitness maximizing approach will give marine ecosystem models a sound principle to determine trophic interactions...

  1. Covariate selection for the semiparametric additive risk model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically, and its practical implementation has several major advantages over the similar methodology for the proportional hazards model. One complication compared...... and study their large-sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations, it is more relevant to tackle the situation with large p compared with the number...... of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal in the case where the number of covariates is large. In a simulation study, we also compare...

  2. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through a change in equipment, and the model can easily be applied in both manufacturing and service industries.
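
    A minimal sketch of this kind of formulation (not the paper's actual fifty-four-technique model), using the PuLP library with hypothetical per-technique gains, costs and a single budget constraint:

    import pulp

    # Hypothetical data: gain[i] = estimated productivity gain of technique i,
    # cost[i] = its implementation cost; budget caps total spending.
    gain = [0.8, 1.2, 0.5, 2.0]
    cost = [10, 25, 5, 40]
    budget = 50

    prob = pulp.LpProblem("technique_selection", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", range(len(gain)), cat="Binary")

    prob += pulp.lpSum(gain[i] * x[i] for i in range(len(gain)))            # objective
    prob += pulp.lpSum(cost[i] * x[i] for i in range(len(cost))) <= budget  # budget constraint
    prob.solve()

    selected = [i for i in range(len(gain)) if x[i].value() > 0.5]
    print("selected techniques:", selected)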

  3. An Introduction to Model Selection: Tools and Algorithms

    Directory of Open Access Journals (Sweden)

    Sébastien Hélie

    2006-03-01

    Full Text Available Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper's falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike's information criterion, the Bayesian information criterion, bootstrapping and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.
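
    Three of the fit-comparison methods mentioned above reduce to short formulas once a model's maximized log-likelihood is available; a minimal sketch (k is the number of free parameters, n the sample size):

    import numpy as np
    from scipy import stats

    def aic(log_lik, k):
        # Akaike's information criterion: AIC = 2k - 2 ln L
        return 2 * k - 2 * log_lik

    def bic(log_lik, k, n):
        # Bayesian information criterion: BIC = k ln n - 2 ln L
        return k * np.log(n) - 2 * log_lik

    def likelihood_ratio_test(log_lik_null, log_lik_alt, df):
        # For nested models only; 2*(lnL1 - lnL0) is asymptotically
        # chi-square with df = difference in parameter counts
        lr = 2 * (log_lik_alt - log_lik_null)
        return lr, stats.chi2.sf(lr, df)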

  4. Selection of Representative Models for Decision Analysis Under Uncertainty

    Science.gov (United States)

    Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.

    2016-03-01

    The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that must be analyzed so that an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. First, a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.

  5. Selecting global climate models for regional climate change studies.

    Science.gov (United States)

    Pierce, David W; Barnett, Tim P; Santer, Benjamin D; Gleckler, Peter J

    2009-05-26

    Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simulated regional climate. Accordingly, 42 performance metrics based on seasonal temperature and precipitation, the El Nino/Southern Oscillation (ENSO), and the Pacific Decadal Oscillation are constructed and applied to 21 global models. However, no strong relationship is found between the score of the models on the metrics and results of the D&A analysis. Instead, the importance of having ensembles of runs with enough realizations to reduce the effects of natural internal climate variability is emphasized. Also, the superiority of the multimodel ensemble average (MM) to any 1 individual model, already found in global studies examining the mean climate, is true in this regional study that includes measures of variability as well. Evidence is shown that this superiority is largely caused by the cancellation of offsetting errors in the individual global models. Results with both the MM and models picked randomly confirm the original D&A results of anthropogenically forced JFM temperature changes in the western U.S. Future projections of temperature do not depend on model performance until the 2080s, after which the better performing models show warmer temperatures.

  6. Selecting an Appropriate Upscaled Reservoir Model Based on Connectivity Analysis

    Directory of Open Access Journals (Sweden)

    Preux Christophe

    2016-09-01

    Full Text Available Reservoir engineers aim to build reservoir models to investigate fluid flows within hydrocarbon reservoirs. These models consist of three-dimensional grids populated by petrophysical properties. In this paper, we focus on permeability, which is known to significantly influence fluid flow. Reservoir models usually encompass a very large number of fine grid blocks to better represent heterogeneities. However, performing fluid flow simulations for such fine models is extensively CPU-time consuming. A common practice consists in converting the fine models into coarse models with fewer grid blocks: this is the upscaling process. Many upscaling methods have been proposed in the literature, all of which lead to distinct coarse models. The problem is how to choose the appropriate upscaling method. Various criteria have been established to evaluate the information loss due to upscaling, but none of them investigate connectivity. In this paper, we propose to first perform a connectivity analysis for the fine and candidate coarse models. This makes it possible to identify shortest paths connecting wells. Then, we introduce two indicators to quantify the length and trajectory mismatch between the paths for the fine and the coarse models. The upscaling technique to be recommended is the one that provides the coarse model for which the shortest paths are the closest to the shortest paths determined for the fine model, both in terms of length and trajectory. Last, the potential of this methodology is investigated on two test cases. We show that the two indicators help select suitable upscaling techniques as long as gravity is not a prominent factor driving fluid flows.
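
    The two indicators lend themselves to a graph formulation; a minimal sketch with networkx, assuming the fine and coarse grids have already been mapped onto graphs with comparable node labels and a resistance-like edge weight (all names here are hypothetical):

    import networkx as nx

    def well_path_indicators(G_fine, G_coarse, injector, producer, weight="resistance"):
        # Shortest well-to-well paths in graphs whose edges carry, e.g., inverse
        # transmissibilities as a resistance-like weight (hypothetical setup)
        len_f = nx.shortest_path_length(G_fine, injector, producer, weight=weight)
        len_c = nx.shortest_path_length(G_coarse, injector, producer, weight=weight)
        path_f = set(nx.shortest_path(G_fine, injector, producer, weight=weight))
        path_c = set(nx.shortest_path(G_coarse, injector, producer, weight=weight))
        length_mismatch = abs(len_f - len_c) / len_f
        # Jaccard distance between the two node sets as a trajectory mismatch proxy
        trajectory_mismatch = 1 - len(path_f & path_c) / len(path_f | path_c)
        return length_mismatch, trajectory_mismatch

    # The upscaling method yielding the smallest pair of indicators would be preferred.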

  7. Bioeconomic model and selection indices in Aberdeen Angus cattle.

    Science.gov (United States)

    Campos, G S; Braccini Neto, J; Oaigen, R P; Cardoso, F F; Cobuci, J A; Kern, E L; Campos, L T; Bertoli, C D; McManus, C M

    2014-08-01

    A bioeconomic model was developed to calculate economic values for biological traits in full-cycle production systems and to propose selection indices based on the selection criteria used in the Brazilian Aberdeen Angus genetic breeding programme (PROMEBO). To assess the impact of changes in the performance of the traits on the profit of the production system, the initial values of the traits were increased by 1%. The economic values for number of calves weaned (NCW) and slaughter weight (SW) were, respectively, R$ 6.65 and R$ 1.43/cow/year. The selection index at weaning showed a 44.77% emphasis on body weight, 14.24% on conformation, 30.36% on early maturity and 10.63% on muscle development. The eighteen-month index showed an emphasis of 77.61% on body weight, 4.99% on conformation, 11.09% on early maturity, 6.10% on muscle development and 0.22% on scrotal circumference. NCW had the highest economic impact, and SW had an important positive effect on the economics of the production system. The proposed selection indices can be used by breeders and should contribute to greater profitability. © 2014 Blackwell Verlag GmbH.

  8. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though...

  9. Auditory-model based robust feature selection for speech recognition.

    Science.gov (United States)

    Koniaris, Christos; Kuropatwinski, Marcin; Kleijn, W Bastiaan

    2010-02-01

    It is shown that robust dimension-reduction of a feature set for speech recognition can be based on a model of the human auditory system. Whereas conventional methods optimize classification performance, the proposed method exploits knowledge implicit in the auditory periphery, inheriting its robustness. Features are selected to maximize the similarity of the Euclidean geometry of the feature domain and the perceptual domain. Recognition experiments using mel-frequency cepstral coefficients (MFCCs) confirm the effectiveness of the approach, which does not require labeled training data. For noisy data the method outperforms commonly used discriminant-analysis based dimension-reduction methods that rely on labeling. The results indicate that selecting MFCCs in their natural order results in subsets with good performance.

  10. METHODS OF SELECTING THE EFFECTIVE MODELS OF BUILDINGS REPROFILING PROJECTS

    Directory of Open Access Journals (Sweden)

    Александр Иванович МЕНЕЙЛЮК

    2016-02-01

    Full Text Available The article highlights the important task of project management in the reprofiling of buildings. In construction project management, it is expedient to pay attention to selecting effective engineering solutions that reduce project duration and cost. This article presents a methodology for selecting efficient organizational and technical solutions for building reprofiling projects. The method is based on compiling project variants in the program Microsoft Project and on experimental-statistical analysis using the program COMPEX. Introducing this technique into building reprofiling allows efficient project models to be chosen, depending on the given constraints. The technique can also be used for various other construction projects.

  11. Applying a Hybrid MCDM Model for Six Sigma Project Selection

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2014-01-01

    Full Text Available Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS) is a type of multiple criteria decision making (MCDM) problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL) technique, the analytic network process (ANP) and the VIKOR method to evaluate and improve Six Sigma projects, reducing performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that our model not only selects the best project, but can also be used to analyze the gaps between existing performance values and aspiration levels in each dimension and criterion, based on the influential network relation map.

  12. Proposition of a multicriteria model to select logistics services providers

    Directory of Open Access Journals (Sweden)

    Miriam Catarina Soares Aharonovitz

    2014-06-01

    Full Text Available This study aims to propose a multicriteria model for selecting logistics service providers through the development of a decision tree. The methodology consists of a survey, which resulted in a sample of 181 responses. The sample was analyzed using statistical methods, among them descriptive statistics, multivariate analysis, analysis of variance, and parametric tests for comparing means. Based on these results, it was possible to obtain the decision tree and information to support the multicriteria analysis. The AHP (Analytic Hierarchy Process) was applied to determine the influence of the data and thus ensure better consistency in the analysis. The decision tree categorizes the criteria according to the decision levels (strategic, tactical and operational). Furthermore, it allows a generic evaluation of the importance of each criterion in the supplier selection process from the point of view of logistics services contractors.

  13. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

    Full Text Available A wind turbine generator's output at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speed. The reliability calculation is based on failure probability analysis. Many different types of wind turbines are commercially available in the market. From a reliability point of view, to obtain optimum reliability in power generation, it is desirable to select the wind turbine generator best suited to a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
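
    For a Weibull(k, c) wind-speed distribution, the cubic mean cube root is (E[v^3])^(1/3) = c * Gamma(1 + 3/k)^(1/3), which gives a quick monthly energy estimate. The sketch below deliberately ignores the cut-in/rated/cut-out limits that the paper's reliability analysis accounts for, and the rotor parameters are hypothetical:

    from scipy.special import gamma

    def cubic_mean_cube_root(k, c):
        # (E[v^3])**(1/3) for a Weibull(k, c) wind-speed distribution
        return c * gamma(1 + 3.0 / k) ** (1.0 / 3.0)

    def monthly_energy_wh(k, c, rho=1.225, rotor_area=2000.0, cp=0.4, hours=720):
        # Crude monthly energy estimate (Wh) from the mean cube of wind speed;
        # real turbine output is capped by cut-in, rated and cut-out speeds
        v3_mean = cubic_mean_cube_root(k, c) ** 3
        return 0.5 * rho * rotor_area * cp * v3_mean * hours

    print(monthly_energy_wh(k=2.0, c=7.0) / 1e6, "MWh")  # illustrative site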

  14. Development of Solar Drying Model for Selected Cambodian Fish Species

    Directory of Open Access Journals (Sweden)

    Anna Hubackova

    2014-01-01

    Full Text Available Solar drying was investigated as a prospective technique for fish processing in Cambodia. The solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. The mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R2), chi-square (χ2) test, and root-mean-square error (RMSE), the most suitable models describing natural-convection solar drying kinetics were the Logarithmic model (climbing perch and Nile tilapia), the Diffusion approximation model (swamp eel and walking catfish) and the Two-term model (Channa fish). In the case of electric oven drying, the Modified Page 1 model showed the best results for all investigated fish species except Channa fish, for which the Two-term model was best. Sensory evaluation showed that the most preferred fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of fresh water fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing.
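
    A sketch of how such a model comparison can be run, fitting two of the named thin-layer drying models to a measured moisture-ratio curve with scipy and scoring them by RMSE and R^2 (the data arrays and starting values are placeholders):

    import numpy as np
    from scipy.optimize import curve_fit

    def logarithmic(t, a, k, c):
        # Logarithmic thin-layer drying model: MR = a*exp(-k*t) + c
        return a * np.exp(-k * t) + c

    def two_term(t, a, k1, b, k2):
        # Two-term drying model: MR = a*exp(-k1*t) + b*exp(-k2*t)
        return a * np.exp(-k1 * t) + b * np.exp(-k2 * t)

    def fit_and_score(model, t, mr, p0):
        params, _ = curve_fit(model, t, mr, p0=p0, maxfev=10000)
        pred = model(t, *params)
        rmse = np.sqrt(np.mean((mr - pred) ** 2))
        r2 = 1 - np.sum((mr - pred) ** 2) / np.sum((mr - mr.mean()) ** 2)
        return params, rmse, r2

    # t, mr = measured drying time (h) and moisture ratio for one fish species;
    # e.g. fit_and_score(logarithmic, t, mr, p0=(1.0, 0.1, 0.0)) versus
    #      fit_and_score(two_term, t, mr, p0=(0.5, 0.1, 0.5, 0.01)),
    # keeping the model with the lowest RMSE / highest R^2, as in the study.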

  15. Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model

    Directory of Open Access Journals (Sweden)

    Neha Gupta

    2013-12-01

    Full Text Available In the present paper, we consider the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and discuss two different models. In the first model the reliabilities of the subsystems are treated as the objectives; in the second, the cost and the time spent on repairing the components are treated as two different objectives. The two models are formulated as multi-objective nonlinear programming problems (MONLPP), and a fuzzy goal programming method is used to work out the compromise allocation in the multi-objective selective maintenance reliability model: we define the membership function of each objective, transform the membership functions into equivalent linear ones by a first-order Taylor series, and finally obtain the desired compromise allocation of maintenance components by forming a fuzzy goal programming model. A numerical example is worked out to illustrate the computational details of the method.

  16. Selection Strategies for Social Influence in the Threshold Model

    Science.gov (United States)

    Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy

    The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for selection of large fractions of initiators on a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.
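
    A minimal sketch of the Threshold-model dynamics and the degree-rank seeding strategy (one of the structure-only strategies mentioned above), using networkx; the graph, seed fraction and uniform thresholds are illustrative choices, not the paper's experimental setup:

    import networkx as nx

    def threshold_cascade(G, initiators, thresholds):
        # Synchronous threshold dynamics: a node adopts once the adopted
        # fraction of its neighbors exceeds its pre-assigned threshold
        active = set(initiators)
        changed = True
        while changed:
            changed = False
            for node in G:
                if node in active:
                    continue
                neigh = list(G.neighbors(node))
                if neigh and sum(n in active for n in neigh) / len(neigh) > thresholds[node]:
                    active.add(node)
                    changed = True
        return active

    G = nx.barabasi_albert_graph(1000, 3, seed=1)
    thresholds = {n: 0.3 for n in G}                     # uniform thresholds
    seeds = sorted(G, key=G.degree, reverse=True)[:100]  # degree-rank initiators
    print("cascade size:", len(threshold_cascade(G, seeds, thresholds)))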

  17. Continuum model for chiral induced spin selectivity in helical molecules

    Energy Technology Data Exchange (ETDEWEB)

    Medina, Ernesto [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France); Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); González-Arraga, Luis A. [IMDEA Nanoscience, Cantoblanco, 28049 Madrid (Spain); Finkelstein-Shapiro, Daniel; Mujica, Vladimiro [Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); Berche, Bertrand [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France)

    2015-05-21

    A minimal model is exactly solved for electron spin transport on a helix. Electron transport is assumed to be supported by well oriented p_z type orbitals on base molecules forming a staircase of definite chirality. In a tight binding interpretation, the spin-orbit coupling (SOC) opens up an effective π_z − π_z coupling via interbase p_{x,y} − p_z hopping, introducing spin coupled transport. The resulting continuum model spectrum shows two Kramers doublet transport channels with a gap proportional to the SOC. Each doubly degenerate channel satisfies time reversal symmetry; nevertheless, a bias chooses a transport direction and thus selects for spin orientation. The model predicts (i) which spin orientation is selected depending on chirality and bias, (ii) changes in spin preference as a function of input Fermi level and (iii) back-scattering suppression protected by the SO gap. We compute the spin current with a definite helicity and find it to be proportional to the torsion of the chiral structure and the non-adiabatic Aharonov-Anandan phase. To describe room temperature transport, we assume that the total transmission is the result of a product of coherent steps.

  18. Selection of models to calculate the LLW source term

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1991-10-01

    Performance assessment of an LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab

  19. Variable Selection in Model-based Clustering: A General Variable Role Modeling

    OpenAIRE

    Maugis, Cathy; Celeux, Gilles; Martin-Magniette, Marie-Laure

    2008-01-01

    The currently available variable selection procedures in model-based clustering assume that the irrelevant clustering variables are all independent or are all linked with the relevant clustering variables. We propose a more versatile variable selection model which describes three possible roles for each variable: The relevant clustering variables, the irrelevant clustering variables dependent on a part of the relevant clustering variables and the irrelevant clustering variables totally indepe...

  20. A Dual-Stage Two-Phase Model of Selective Attention

    Science.gov (United States)

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  1. Model to Estimate Monthly Time Horizons for Application of DEA in Selection of Stock Portfolio and for Maintenance of the Selected Portfolio

    Directory of Open Access Journals (Sweden)

    José Claudio Isaias

    2015-01-01

    Full Text Available In the selection of stock portfolios, one type of analysis that has shown good results is Data Envelopment Analysis (DEA). It has, however, been shown to have gaps regarding its estimates of the monthly time horizon of data collection for selecting a stock portfolio and of the monthly time horizon for maintaining the selected portfolio. To better estimate these horizons, this study proposes a binary mathematical programming model that minimizes squared errors; this model is the paper's main contribution. The model's results are validated by simulating the estimated annual return indexes of a portfolio that uses both estimated horizons and of other portfolios that do not. The simulation shows that portfolios using both estimated horizons have higher indexes, on average by 6.99% per year, and hypothesis tests confirm the statistically significant superiority of the proposed model's indexes. The model's indexes are also compared with those of portfolios that use just one of the estimated horizons; here, too, the dual-horizon portfolios outperform the single-horizon portfolios, though with a smaller percentage of statistically significant superiority.

  2. Direction selectivity in a model of the starburst amacrine cell.

    Science.gov (United States)

    Tukker, John J; Taylor, W Rowland; Smith, Robert G

    2004-01-01

    The starburst amacrine cell (SBAC), found in all mammalian retinas, is thought to provide the directional inhibitory input recorded in On-Off direction-selective ganglion cells (DSGCs). While voltage recordings from the somas of SBACs have not shown robust direction selectivity (DS), the dendritic tips of these cells display direction-selective calcium signals, even when gamma-aminobutyric acid (GABAa,c) channels are blocked, implying that inhibition is not necessary to generate DS. This suggested that the distinctive morphology of the SBAC could generate a DS signal at the dendritic tips, where most of its synaptic output is located. To explore this possibility, we constructed a compartmental model incorporating realistic morphological structure, passive membrane properties, and excitatory inputs. We found robust DS at the dendritic tips but not at the soma. Two-spot apparent motion and annulus radial motion produced weak DS, but thin bars produced robust DS. For these stimuli, DS was caused by the interaction of a local synaptic input signal with a temporally delayed "global" signal, that is, an excitatory postsynaptic potential (EPSP) that spread from the activated inputs into the soma and throughout the dendritic tree. In the preferred direction the signals in the dendritic tips coincided, allowing summation, whereas in the null direction the local signal preceded the global signal, preventing summation. Sine-wave grating stimuli produced the greatest amount of DS, especially at high velocities and low spatial frequencies. The sine-wave DS responses could be accounted for by a simple mathematical model, which summed phase-shifted signals from soma and dendritic tip. By testing different artificial morphologies, we discovered DS was relatively independent of the morphological details, but depended on having a sufficient number of inputs at the distal tips and a limited electrotonic isolation. Adding voltage-gated calcium channels to the model showed that their

  3. Parametric pattern selection in a reaction-diffusion model.

    Directory of Open Access Journals (Sweden)

    Michael Stich

    Full Text Available We compare spot patterns generated by Turing mechanisms with those generated by replication cascades, in a model one-dimensional reaction-diffusion system. We determine the stability region of spot solutions in parameter space as a function of a natural control parameter (feed-rate), where degenerate patterns with different numbers of spots coexist for a fixed feed-rate. While it is possible to generate identical patterns via both mechanisms, we show that replication cascades lead to a wider choice of pattern profiles that can be selected through a tuning of the feed-rate, exploiting hysteresis and directionality effects of the different pattern pathways.
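
    As an illustration of how such a feed-rate-controlled system behaves, here is a minimal 1-D integration of a generic Gray-Scott-type model in numpy (a common spot-forming reaction-diffusion system, not necessarily the exact equations of the paper); parameter values follow the widely used Pearson-style setup:

    import numpy as np

    n, du, dv, f, k = 256, 0.16, 0.08, 0.035, 0.060   # f is the feed-rate
    u, v = np.ones(n), np.zeros(n)
    v[n // 2 - 5 : n // 2 + 5] = 0.5                  # local perturbation seeds spots

    def laplacian(a):
        # second difference with periodic boundaries (dx = 1)
        return np.roll(a, 1) + np.roll(a, -1) - 2 * a

    for _ in range(20000):                            # explicit Euler steps (dt = 1)
        uvv = u * v * v
        u += du * laplacian(u) - uvv + f * (1 - u)
        v += dv * laplacian(v) + uvv - (f + k) * v

    # u and v now hold a quasi-stationary spot pattern; sweeping f reveals
    # coexisting patterns with different numbers of spots, as described above.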

  4. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  5. Robust and distributed hypothesis testing

    CERN Document Server

    Gül, Gökhan

    2017-01-01

    This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available and that the distance is the KL-divergence. Here, the distance can be chosen from a much more general model, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers as well as modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions which are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...

  6. Is the Aluminum Hypothesis Dead?

    Science.gov (United States)

    2014-01-01

    The Aluminum Hypothesis, the idea that aluminum exposure is involved in the etiology of Alzheimer disease, dates back to a 1965 demonstration that aluminum causes neurofibrillary tangles in the brains of rabbits. Initially the focus of intensive research, the Aluminum Hypothesis has gradually been abandoned by most researchers. Yet, despite this current indifference, the Aluminum Hypothesis continues to attract the attention of a small group of scientists and aluminum continues to be viewed with concern by some of the public. This review article discusses reasons that mainstream science has largely abandoned the Aluminum Hypothesis and explores a possible reason for some in the general public continuing to view aluminum with mistrust. PMID:24806729

  7. Modeling Knowledge Resource Selection in Expert Librarian Search

    Science.gov (United States)

    KAUFMAN, David R.; MEHRYAR, Maryam; CHASE, Herbert; HUNG, Peter; CHILOV, Marina; JOHNSON, Stephen B.; MENDONCA, Eneida

    2011-01-01

    Providing knowledge at the point of care offers the possibility of reducing error and improving patient outcomes. However, the vast majority of physicians' information needs are not met in a timely fashion. The research presented in this paper models an expert librarian's search strategies as they pertain to the selection and use of various electronic information resources. The 10 searches conducted by the librarian to address physicians' information needs varied in terms of complexity and question type. The librarian employed a total of 10 resources and used as many as 7 in a single search. The longer-term objective is to model the sequential process in sufficient detail as to be able to contribute to the development of intelligent automated search agents. PMID:19380912

  8. Cliff-edge model of obstetric selection in humans.

    Science.gov (United States)

    Mitteroecker, Philipp; Huttegger, Simon M; Fischer, Barbara; Pavlicev, Mihaela

    2016-12-20

    The strikingly high incidence of obstructed labor due to the disproportion of fetal size and the mother's pelvic dimensions has puzzled evolutionary scientists for decades. Here we propose that these high rates are a direct consequence of the distinct characteristics of human obstetric selection. Neonatal size relative to the birth-relevant maternal dimensions is highly variable and positively associated with reproductive success until it reaches a critical value, beyond which natural delivery becomes impossible. As a consequence, the symmetric phenotype distribution cannot match the highly asymmetric, cliff-edged fitness distribution well: The optimal phenotype distribution that maximizes population mean fitness entails a fraction of individuals falling beyond the "fitness edge" (i.e., those with fetopelvic disproportion). Using a simple mathematical model, we show that weak directional selection for a large neonate, a narrow pelvic canal, or both is sufficient to account for the considerable incidence of fetopelvic disproportion. Based on this model, we predict that the regular use of Caesarean sections throughout the last decades has led to an evolutionary increase of fetopelvic disproportion rates by 10 to 20%.
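
    The argument can be reproduced numerically: with a Normal phenotype distribution and a fitness function that rises until a cliff edge and drops to zero beyond it, the fitness-maximizing mean sits close enough to the edge that a nontrivial fraction of births falls past it. A sketch with illustrative parameter values (not those of the paper):

    import numpy as np
    from scipy import stats, optimize

    EDGE, SIGMA, SLOPE = 2.0, 1.0, 0.5   # illustrative values, not fitted

    def mean_fitness(mu):
        # Expected fitness: f(x) = max(0, 1 + SLOPE*x) below the edge, 0 beyond it
        x = np.linspace(mu - 6 * SIGMA, EDGE, 4000)
        dx = x[1] - x[0]
        return np.sum(np.maximum(0, 1 + SLOPE * x) * stats.norm.pdf(x, mu, SIGMA)) * dx

    opt = optimize.minimize_scalar(lambda m: -mean_fitness(m),
                                   bounds=(EDGE - 4, EDGE), method="bounded")
    frac_past_edge = stats.norm.sf(EDGE, opt.x, SIGMA)  # 'fetopelvic disproportion' rate
    print("optimal mean:", opt.x, "fraction past edge:", frac_past_edge)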

  9. Developing a conceptual model for selecting and evaluating online markets

    Directory of Open Access Journals (Sweden)

    Sadegh Feizollahi

    2013-04-01

    Full Text Available There is much evidence emphasizing the benefits of using new information and communication technologies in international business, and many believe that e-commerce can help satisfy customers' explicit and implicit requirements. Internet shopping is a concept developed after the introduction of electronic commerce. Information technology (IT) and its applications, specifically in the realm of the internet and e-mail, promoted the development of e-commerce in terms of advertising, motivation and information. With the development of new technologies, credit and financial exchange facilities were built into websites to facilitate e-commerce. The study sends a total of 200 questionnaires to the target group (teachers, students, professionals and managers of commercial web sites) and collects 130 questionnaires for final evaluation. Cronbach's alpha is used to measure the reliability of the measurement instruments (questionnaires), and confirmatory factor analysis is employed to assure construct validity. In addition, path analysis is applied to examine the research questions and to determine market selection models. In the present study, after examining different aspects of e-commerce, we provide a conceptual model for selecting and evaluating online marketing in Iran. These findings provide a consistent, targeted and holistic framework for the development of the Internet market in the country.

  10. Ensemble Prediction Model with Expert Selection for Electricity Price Forecasting

    Directory of Open Access Journals (Sweden)

    Bijay Neupane

    2017-01-01

    Full Text Available Forecasting of electricity prices is important in deregulated electricity markets for all of the stakeholders: energy wholesalers, traders, retailers and consumers. Electricity price forecasting is an inherently difficult problem due to its special characteristics of dynamicity and non-stationarity. In this paper, we present a robust price forecasting mechanism that shows resilience towards the aggregate demand response effect and provides highly accurate forecasted electricity prices to the stakeholders in a dynamic environment. We employ an ensemble prediction model in which a group of different algorithms participates in forecasting the price 1 h ahead for each hour of a day. We propose two different strategies, namely, the Fixed Weight Method (FWM) and the Varying Weight Method (VWM), for selecting each hour's expert algorithm from the set of participating algorithms. In addition, we utilize a carefully engineered set of features selected from a pool of features extracted from the past electricity price data, weather data and calendar data. The proposed ensemble model offers better results than the Autoregressive Integrated Moving Average (ARIMA) method, the Pattern Sequence-based Forecasting (PSF) method and our previous work using Artificial Neural Networks (ANN) alone on the datasets for the New York, Australian and Spanish electricity markets.
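
    The per-hour expert idea can be sketched in a few lines; this is a simplification of the paper's FWM/VWM strategies (array shapes and the error metric are hypothetical):

    import numpy as np

    def pick_experts(errors_by_algo):
        # For each hour of the day, select the algorithm with the lowest
        # historical error for that hour (a fixed-weight-style choice).
        # errors_by_algo: shape (n_algorithms, 24)
        return np.argmin(errors_by_algo, axis=0)   # expert index per hour

    def ensemble_forecast(forecasts, experts):
        # forecasts: shape (n_algorithms, 24); experts: shape (24,)
        return forecasts[experts, np.arange(24)]   # one price per hour

    # experts = pick_experts(historical_mae)        # from a validation window
    # day_ahead_prices = ensemble_forecast(tomorrow_forecasts, experts)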

  11. A Network Analysis Model for Selecting Sustainable Technology

    Directory of Open Access Journals (Sweden)

    Sangsung Park

    2015-09-01

    Full Text Available Most companies develop technologies to improve their competitiveness in the marketplace. Typically, they then patent these technologies around the world in order to protect their intellectual property. Other companies may use patented technologies to develop new products, but must pay royalties to the patent holders or owners. Should they fail to do so, this can result in legal disputes in the form of patent infringement actions between companies. To avoid such situations, companies attempt to research and develop necessary technologies before their competitors do so. An important part of this process is analyzing existing patent documents in order to identify emerging technologies. In such analyses, extracting sustainable technology from patent data is important, because sustainable technology drives technological competition among companies and, thus, the development of new technologies. In addition, selecting sustainable technologies makes it possible to plan their R&D (research and development) efficiently. In this study, we propose a network model that can be used to select sustainable technology from patent documents, based on centrality and degree measures from social network analysis. To verify the performance of the proposed model, we carry out a case study using actual patent data from patent databases.

  12. Bootstrap model selection had similar performance for selecting authentic and noise variables compared to backward variable elimination: a simulation study.

    Science.gov (United States)

    Austin, Peter C

    2008-10-01

    Researchers have proposed using bootstrap resampling in conjunction with automated variable selection methods to identify predictors of an outcome and to develop parsimonious regression models. Using this method, multiple bootstrap samples are drawn from the original data set. Traditional backward variable elimination is used in each bootstrap sample, and the proportion of bootstrap samples in which each candidate variable is identified as an independent predictor of the outcome is determined. The performance of this method for identifying predictor variables had not been examined. Monte Carlo simulation methods were used to determine the ability of bootstrap model selection methods to correctly identify predictors of an outcome when those variables that are selected for inclusion in at least 50% of the bootstrap samples are included in the final regression model. We compared the performance of the bootstrap model selection method to that of conventional backward variable elimination. Bootstrap model selection resulted in approximately the same proportion of selected models matching the true regression model as conventional backward variable elimination. Thus, bootstrap model selection performed comparably to backward variable elimination for identifying the true predictors of a binary outcome.
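
    A compact sketch of the procedure being evaluated, pairing p-value-based backward elimination (statsmodels logistic regression) with bootstrap inclusion proportions and the 50% rule; resamples on which the fit fails (e.g. perfect separation) are simply skipped here for brevity:

    import numpy as np
    import statsmodels.api as sm

    def backward_eliminate(X, y, alpha=0.05):
        # Traditional backward elimination: drop the least significant
        # variable until all remaining p-values are below alpha
        cols = list(range(X.shape[1]))
        while cols:
            fit = sm.Logit(y, sm.add_constant(X[:, cols])).fit(disp=0)
            pvals = fit.pvalues[1:]          # skip the intercept
            worst = int(np.argmax(pvals))
            if pvals[worst] <= alpha:
                break
            cols.pop(worst)
        return cols

    def bootstrap_inclusion(X, y, n_boot=200, seed=0):
        # Proportion of bootstrap samples in which each variable is retained;
        # variables with proportion >= 0.5 enter the final model
        rng = np.random.default_rng(seed)
        counts = np.zeros(X.shape[1])
        for _ in range(n_boot):
            idx = rng.integers(0, len(y), len(y))
            try:
                kept = backward_eliminate(X[idx], y[idx])
            except Exception:                # e.g. separation on a resample
                continue
            for c in kept:
                counts[c] += 1
        return counts / n_boot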

  13. Hypothesis Validity of Clinical Research.

    Science.gov (United States)

    Wampold, Bruce E.; And Others

    1990-01-01

    Describes hypothesis validity as extent to which research results reflect theoretically derived predictions about relations between or among constructs. Discusses role of hypotheses in theory testing. Presents four threats to hypothesis validity: (1) inconsequential research hypotheses; (2) ambiguous research hypotheses; (3) noncongruence of…

  14. A CONCEPTUAL MODEL FOR IMPROVED PROJECT SELECTION AND PRIORITISATION

    Directory of Open Access Journals (Sweden)

    P. J. Viljoen

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Project portfolio management processes are often designed and operated as a series of stages (or project phases) and gates. However, the flow of such a process is often slow, characterised by queues waiting for a gate decision and by repeated work from previous stages waiting for additional information or for re-processing. In this paper the authors propose a conceptual model that applies supply chain and constraint management principles to the project portfolio management process. An advantage of the proposed model is that it provides the ability to select and prioritise projects without undue changes to project schedules. This should result in faster flow through the system.

    AFRIKAANS SUMMARY: Processes for managing portfolios of projects are normally designed and operated as a series of phases and gates. The flow through such a process is often slow and is characterised by queues waiting for decisions at the gates and by rework from previous phases waiting for further information or for reprocessing. In this article a conceptual model is proposed. The model rests on the principles of supply chains as well as of constraint management, and offers the advantage that projects can be selected and prioritised without unnecessary changes to project schedules. This should lead to accelerated flow through the system.

  15. The pyrophilic primate hypothesis.

    Science.gov (United States)

    Parker, Christopher H; Keefe, Earl R; Herzog, Nicole M; O'connell, James F; Hawkes, Kristen

    2016-01-01

    Members of genus Homo are the only animals known to create and control fire. The adaptive significance of this unique behavior is broadly recognized, but the steps by which our ancestors evolved pyrotechnic abilities remain unknown. Many hypotheses attempting to answer this question attribute hominin fire use to serendipitous, even accidental, discovery. Using recent paleoenvironmental reconstructions, we present an alternative scenario in which, 2 to 3 million years ago in tropical Africa, human fire dependence was the result of adapting to progressively fire-prone environments. The extreme and rapid fluctuations between closed canopy forests, woodland, and grasslands that occurred in tropical Africa during that time, in conjunction with reductions in atmospheric carbon dioxide levels, changed the fire regime of the region, increasing the occurrence of natural fires. We use models from optimal foraging theory to hypothesize benefits that this fire-altered landscape provided to ancestral hominins and link these benefits to steps that transformed our ancestors into a genus of active pyrophiles whose dependence on fire for survival contributed to its rapid expansion out of Africa. © 2016 Wiley Periodicals, Inc.

  16. On model selections for repeated measurement data in clinical studies.

    Science.gov (United States)

    Zou, Baiming; Jin, Bo; Koch, Gary G; Zhou, Haibo; Borst, Stephen E; Menon, Sandeep; Shuster, Jonathan J

    2015-05-10

    Repeated measurement designs have been widely used in various randomized controlled trials for evaluating long-term intervention efficacies. For some clinical trials, the primary research question is how to compare two treatments at a fixed time, using a t-test. Although simple, robust, and convenient, this type of analysis fails to utilize a large amount of collected information. Alternatively, the mixed-effects model is commonly used for repeated measurement data. It models all available data jointly and allows explicit assessment of the overall treatment effects across the entire time spectrum. In this paper, we propose an analytic strategy for longitudinal clinical trial data where the mixed-effects model is coupled with a model selection scheme. The proposed test statistics not only make full use of all available data but also utilize the information from the optimal model deemed for the data. The performance of the proposed method under various setups, including different data missing mechanisms, is evaluated via extensive Monte Carlo simulations. Our numerical results demonstrate that the proposed analytic procedure is more powerful than the t-test when the primary interest is to test for the treatment effect at the last time point. Simulations also reveal that the proposed method outperforms the usual mixed-effects model for testing the overall treatment effects across time. In addition, the proposed framework is more robust and flexible in dealing with missing data compared with several competing methods. The utility of the proposed method is demonstrated by analyzing a clinical trial on the cognitive effect of testosterone in geriatric men with low baseline testosterone levels. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Computationally efficient thermal-mechanical modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method used to produce high-density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is anticipated to be instrumental for understanding and predicting the development of the residual stress field during the build process. However, SLM process modelling requires determination of the heat transients within the part being built, which is coupled to a mechanical boundary value problem to calculate displacement and residual stress fields. Thermal models associated with SLM are typically complex and computationally demanding. In this paper, we present a simple semi-analytical thermal-mechanical model, developed for SLM, that represents the effect of laser scanning vectors with line heat sources. The temperature field within the part being built is attained by superposition of the temperature field associated with line heat sources in a semi-infinite medium and a complementary temperature field which accounts for the actual boundary conditions. An analytical solution for a line heat source in a semi-infinite medium is first described, followed by the numerical procedure used for finding the complementary temperature field. This analytical description of the line heat sources is able to capture the steep temperature gradients in the vicinity of the laser spot, which is typically tens of micrometres wide. In turn, the semi-analytical thermal model allows for a relatively coarse discretisation of the complementary temperature field. The temperature history determined is used to calculate the thermal strain induced in the SLM part. Finally, a mechanical model governed by an elastic-plastic constitutive rule with isotropic hardening is used to predict the residual stresses.

  18. Patch-based generative shape model and MDL model selection for statistical analysis of archipelagos

    DEFF Research Database (Denmark)

    Ganz, Melanie; Nielsen, Mads; Brandt, Sami

    2010-01-01

    We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radiographs. The generative model is constructed by (1) learning a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to model the neighbourhood correlations between the patches, and (3) automatic selection of the model complexity by the minimum description length principle. The generative shape model is proposed... as a probability distribution of a binary image, where the model is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation...

  19. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of the common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
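
    When BIC is used to approximate posterior model probabilities, the Bayesian Model Averaging weights take a one-line form; a generic sketch (standard BIC approximation under equal model priors, not the authors' panel-data code):

    import numpy as np

    def bma_weights_from_bic(bics):
        # Approximate posterior model probabilities from BIC:
        # p(M_i | data) is proportional to exp(-BIC_i / 2) under equal priors
        b = np.asarray(bics, dtype=float)
        w = np.exp(-(b - b.min()) / 2)   # shift by the min for numerical stability
        return w / w.sum()

    # Model-averaged point estimate (lower RMSE under model uncertainty):
    # theta_bma = np.dot(bma_weights_from_bic(bics), theta_by_model)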

  20. Multicriteria decision group model for the selection of suppliers

    Directory of Open Access Journals (Sweden)

    Luciana Hazin Alencar

    2008-08-01

    Full Text Available Several authors have been studying group decision making over the years, which indicates how relevant it is. This paper presents a multicriteria group decision model based on the ELECTRE IV and VIP Analysis methods, for those cases where there is great divergence among the decision makers. The model includes two stages. In the first, the ELECTRE IV method is applied and a collective criteria ranking is obtained. In the second, using the criteria ranking, VIP Analysis is applied and the alternatives are selected. To illustrate the model, a numerical application in the context of the selection of suppliers in project management is used. The suppliers that form part of the project team have a crucial role in project management. They are involved in a network of connected activities that can jeopardize the success of the project if they are not undertaken in an appropriate way. The question tackled is how to select service suppliers for a project, on behalf of an enterprise, in a way that serves the multiple objectives of the decision-makers.

  1. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves knowledge of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms employed indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is an ML algorithm that performs FS as part of its
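
    Two of the three compared approaches have direct scikit-learn analogues (CFS does not ship with scikit-learn and is omitted here); a minimal ranking sketch in which the predictor matrix, labels and feature names are hypothetical placeholders:

    from sklearn.feature_selection import mutual_info_classif
    from sklearn.ensemble import RandomForestClassifier

    def rank_features(X, y, names):
        # X: terrain/climate predictors; y: permafrost presence/absence (0/1)
        ig = mutual_info_classif(X, y, random_state=0)  # Information-Gain-like filter
        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
        ranked = sorted(zip(names, ig, rf.feature_importances_),
                        key=lambda r: -r[2])            # order by RF importance
        for name, gain, imp in ranked:
            print(f"{name:>20s}  MI={gain:.3f}  RF importance={imp:.3f}")

    # rank_features(X, y, ["altitude", "aspect", "slope", "curvature", ...])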

  2. Multiphysics modeling of selective laser sintering/melting

    Science.gov (United States)

    Ganeriwala, Rishi Kumar

    A significant percentage of total global employment is due to the manufacturing industry. However, manufacturing also accounts for nearly 20% of total energy usage in the United States according to the EIA; in fact, manufacturing accounted for 90% of industrial energy consumption and 84% of industrial carbon dioxide emissions in 2002. Clearly, advances in manufacturing technology and efficiency are necessary to curb emissions and help society as a whole. Additive manufacturing (AM) refers to a relatively recent group of manufacturing technologies whereby one can 3D print parts, which has the potential to significantly reduce waste, reconfigure the supply chain, and generally disrupt the whole manufacturing industry. Selective laser sintering/melting (SLS/SLM) is one type of AM technology with the distinct advantage of being able to 3D print metals and rapidly produce net-shape parts with complicated geometries. In SLS/SLM, parts are built up layer by layer out of powder particles, which are selectively sintered/melted via a laser. However, in order to produce defect-free parts of sufficient strength, the process parameters (laser power, scan speed, layer thickness, powder size, etc.) must be carefully optimized. Obviously, these process parameters will vary depending on material, part geometry, and desired final part characteristics. Running experiments to optimize these parameters is costly, energy intensive, and extremely material specific. Thus a computational model of this process would be highly valuable. In this work a three-dimensional, reduced-order, coupled discrete element–finite difference model is presented for simulating the deposition and subsequent laser heating of a layer of powder particles sitting on top of a substrate. Validation is provided and parameter studies are conducted showing the ability of this model to help determine appropriate process parameters and an optimal powder size distribution for a given material. Next, thermal stresses upon

  3. Hyperopt: a Python library for model selection and hyperparameter optimization

    Science.gov (United States)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
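
    The core workflow described above fits in a few lines. A minimal sketch using Hyperopt's documented fmin/tpe/hp API, with a toy objective standing in for a cross-validated model loss:

        from hyperopt import fmin, tpe, hp, Trials

        # Toy objective; in model selection this would return a cross-validated
        # loss for the model configured by `params`.
        def objective(params):
            x = params['x']
            return (x - 2.0) ** 2

        # Search space description; conditional spaces over classifiers and
        # preprocessing modules are built the same way with hp.choice.
        space = {'x': hp.uniform('x', -10.0, 10.0)}

        trials = Trials()
        best = fmin(fn=objective, space=space, algo=tpe.suggest,
                    max_evals=100, trials=trials)
        print(best)  # e.g. {'x': 2.0...} after 100 TPE-guided evaluations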

  4. Estimating a dynamic model of sex selection in China.

    Science.gov (United States)

    Ebenstein, Avraham

    2011-05-01

    High ratios of males to females in China, which have historically concerned researchers (Sen 1990), have increased in the wake of China's one-child policy, which began in 1979. Chinese policymakers are currently attempting to correct the imbalance in the sex ratio through initiatives that provide financial compensation to parents with daughters. Other scholars have advocated a relaxation of the one-child policy to allow more parents to have a son without engaging in sex selection. In this article, I present a model of fertility choice when parents have access to a sex-selection technology and face a mandated fertility limit. By exploiting variation in fines levied in China for unsanctioned births, I estimate the relative price of a son and daughter for mothers observed in China's census data (1982-2000). I find that a couple's first son is worth 1.42 years of income more than a first daughter, and the premium is highest among less-educated mothers and families engaged in agriculture. Simulations indicate that a subsidy of 1 year of income to families without a son would reduce the number of "missing girls" by 67% but impose an annual cost of 1.8% of Chinese gross domestic product (GDP). Alternatively, a three-child policy would reduce the number of "missing girls" by 56% but increase the fertility rate by 35%.

  5. Model catalysis by size-selected cluster deposition

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Scott [Univ. of Utah, Salt Lake City, UT (United States)

    2015-11-20

    This report summarizes the accomplishments during the last four years of the subject grant. Results are presented for experiments in which size-selected model catalysts were studied under surface science and aqueous electrochemical conditions. Strong effects of cluster size were found, and by correlating the size effects with size-dependent physical properties of the samples measured by surface science methods, it was possible to deduce mechanistic insights, such as the factors that control the rate-limiting step in the reactions. Results are presented for CO oxidation, CO binding energetics and geometries, and electronic effects under surface science conditions, and for the electrochemical oxygen reduction reaction, ethanol oxidation reaction, and for oxidation of carbon by water.

  6. Analytical Modelling Of Milling For Tool Design And Selection

    Science.gov (United States)

    Fontaine, M.; Devillez, A.; Dudzinski, D.

    2007-05-01

    This paper presents an efficient analytical model which makes it possible to simulate a wide range of milling operations. A geometrical description of common end mills and of their engagement in the workpiece material is proposed. The internal radius of the rounded part of the tool envelope is used to define the considered type of mill. The cutting edge position is described for a constant lead helix and for a constant local helix angle. A thermomechanical approach to oblique cutting is applied to predict the forces acting on the tool, and these results are compared with experimental data obtained from milling tests on a 42CrMo4 steel for three classical types of mills. The influence of some of the tool's geometrical parameters on the predicted cutting forces is presented in order to propose optimisation criteria for the design and selection of cutting tools.

  7. Analytical Modelling Of Milling For Tool Design And Selection

    International Nuclear Information System (INIS)

    Fontaine, M.; Devillez, A.; Dudzinski, D.

    2007-01-01

    This paper presents an efficient analytical model which makes it possible to simulate a wide range of milling operations. A geometrical description of common end mills and of their engagement in the workpiece material is proposed. The internal radius of the rounded part of the tool envelope is used to define the considered type of mill. The cutting edge position is described for a constant lead helix and for a constant local helix angle. A thermomechanical approach to oblique cutting is applied to predict the forces acting on the tool, and these results are compared with experimental data obtained from milling tests on a 42CrMo4 steel for three classical types of mills. The influence of some of the tool's geometrical parameters on the predicted cutting forces is presented in order to propose optimisation criteria for the design and selection of cutting tools.

  8. Selection of hydrologic modeling approaches for climate change assessment: A comparison of model scale and structures

    Science.gov (United States)

    Surfleet, Christopher G.; Tullos, Desirèe; Chang, Heejun; Jung, Il-Won

    2012-09-01

    A wide variety of approaches to hydrologic (rainfall-runoff) modeling of river basins confounds our ability to select, develop, and interpret models, particularly in the evaluation of prediction uncertainty associated with climate change assessment. To inform the model selection process, we characterized and compared three structurally-distinct approaches and spatial scales of parameterization to modeling catchment hydrology: a large-scale approach (using the VIC model; 671,000 km2 area), a basin-scale approach (using the PRMS model; 29,700 km2 area), and a site-specific approach (the GSFLOW model; 4700 km2 area), each forced by the same future climate estimates. For each approach, we present measures of fit to historic observations and predictions of future response, as well as estimates of model parameter uncertainty, when available. While the site-specific approach generally had the best fit to historic measurements, the performance of the model approaches varied. The site-specific approach generated the best fit at unregulated sites, the large-scale approach performed best just downstream of flood control projects, and model performance varied at the farthest downstream sites, where streamflow regulation is mitigated to some extent by unregulated tributaries and water diversions. These results illustrate how selection of a modeling approach and interpretation of climate change projections require (a) appropriate parameterization of the models for climate and hydrologic processes governing runoff generation in the area under study, (b) understanding and justifying the assumptions and limitations of the model, and (c) estimates of uncertainty associated with the modeling approach.

  9. Evaluating experimental design for soil-plant model selection with Bayesian model averaging

    Science.gov (United States)

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang; Gayler, Sebastian

    2013-04-01

    The objective selection of appropriate models for realistic simulations of coupled soil-plant processes is a challenging task since the processes are complex, not fully understood at larger scales, and highly non-linear. Also, comprehensive data sets are scarce, and measurements are uncertain. In the past decades, a variety of different models have been developed that exhibit a wide range of complexity regarding their approximation of processes in the coupled model compartments. We present a method for evaluating experimental design for maximum confidence in the model selection task. The method considers uncertainty in parameters, measurements and model structures. Advancing the ideas behind Bayesian Model Averaging (BMA), the model weights in BMA are perceived as uncertain quantities with assigned probability distributions that narrow down as more data are made available. This allows assessing the power of different data types, data densities and data locations in identifying the best model structure from among a suite of plausible models. The models considered in this study are the crop models CERES, SUCROS, GECROS and SPASS, which are coupled to identical routines for simulating soil processes within the modelling framework Expert-N. The four models considerably differ in the degree of detail at which crop growth and root water uptake are represented. Monte-Carlo simulations were conducted for each of these models considering their uncertainty in soil hydraulic properties and selected crop model parameters. The models were then conditioned on field measurements of soil moisture, leaf-area index (LAI), and evapotranspiration rates (from eddy-covariance measurements) during a vegetation period of winter wheat at the Nellingen site in Southwestern Germany. Following our new method, we derived the BMA model weights (and their distributions) when using all data or different subsets thereof. We discuss to which degree the posterior BMA mean outperformed the prior BMA
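
    For orientation, the arithmetic behind BMA weights is simple once each model's evidence is available; a minimal sketch with illustrative numbers (not the study's models), assuming equal prior model probabilities:

        import numpy as np

        # log Bayesian model evidence log p(D | M_k) for four candidate models
        # (illustrative values only).
        log_evidence = np.array([-120.3, -118.7, -119.5, -125.1])

        # Subtract the maximum before exponentiating for numerical stability;
        # with equal priors, weights are the normalized evidences.
        rel = np.exp(log_evidence - log_evidence.max())
        weights = rel / rel.sum()
        print(weights.round(3))  # posterior model probabilities, summing to 1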

  10. Model Selection in the Analysis of Photoproduction Data

    Science.gov (United States)

    Landay, Justin

    2017-01-01

    Scattering experiments provide one of the most powerful and useful tools for probing matter to better understand its fundamental properties governed by the strong interaction. As the spectroscopy of the excited states of nucleons enters a new era of precision ushered in by improved experiments at Jefferson Lab and other facilities around the world, traditional partial-wave analysis methods must be adjusted accordingly. In this poster, we present a rigorous set of statistical tools and techniques that we implemented; most notably, the LASSO method, which serves to select the simplest model and thus avoid overfitting. In the case of establishing the spectrum of excited baryons, it avoids overpopulation of the spectrum and thus the occurrence of false positives. This is a prerequisite for reliably comparing theories like lattice QCD or quark models with experiment. Here, we demonstrate the principle by simultaneously fitting three observables in neutral pion photoproduction, namely the differential cross section, beam asymmetry, and target polarization, across thousands of data points. Other authors include Michael Doring, Bin Hu, and Raquel Molina.
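
    As a generic illustration of the selection mechanism (synthetic data, not the photoproduction fit itself), the L1 penalty in LASSO drives the coefficients of inactive terms to exactly zero:

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 20))                 # 20 candidate model terms
        beta = np.zeros(20)
        beta[:3] = [1.5, -2.0, 0.7]                    # only 3 are truly active
        y = X @ beta + rng.normal(0.0, 0.5, size=500)

        # The penalty strength alpha controls how aggressively terms are pruned.
        model = Lasso(alpha=0.1).fit(X, y)
        print(np.flatnonzero(model.coef_))             # indices of retained terms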

  11. Testing One Hypothesis Multiple times

    OpenAIRE

    Algeri, Sara; van Dyk, David A.

    2017-01-01

    Hypothesis testing in the presence of a nuisance parameter that is only identifiable under the alternative is challenging, in part because standard asymptotic results (e.g., Wilks' theorem for the generalized likelihood ratio test) do not apply. Several solutions have been proposed in the statistical literature, and their practical implementation often reduces the problem to one of Testing One Hypothesis Multiple times (TOHM). Specifically, a fine discretization of the space of the non-identifiabl...

  12. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification

  13. Sleep memory processing: the sequential hypothesis.

    Science.gov (United States)

    Giuditta, Antonio

    2014-01-01

    According to the sequential hypothesis (SH) memories acquired during wakefulness are processed during sleep in two serial steps respectively occurring during slow wave sleep (SWS) and rapid eye movement (REM) sleep. During SWS memories to be retained are distinguished from irrelevant or competing traces that undergo downgrading or elimination. Processed memories are stored again during REM sleep which integrates them with preexisting memories. The hypothesis received support from a wealth of EEG, behavioral, and biochemical analyses of trained rats. Further evidence was provided by independent studies of human subjects. SH basic premises, data, and interpretations have been compared with corresponding viewpoints of the synaptic homeostatic hypothesis (SHY). Their similarities and differences are presented and discussed within the framework of sleep processing operations. SHY's emphasis on synaptic renormalization during SWS is acknowledged to underline a key sleep effect, but this cannot marginalize sleep's main role in selecting memories to be retained from downgrading traces, and in their integration with preexisting memories. In addition, SHY's synaptic renormalization raises an unsolved dilemma that clashes with the accepted memory storage mechanism exclusively based on modifications of synaptic strength. This difficulty may be bypassed by the assumption that SWS-processed memories are stored again by REM sleep in brain subnuclear quantum particles. Storing of memories in quantum particles may also occur in other vigilance states. Hints are provided on ways to subject the quantum hypothesis to experimental tests.

  14. Sleep memory processing: the sequential hypothesis

    Directory of Open Access Journals (Sweden)

    Antonio eGiuditta

    2014-12-01

    Full Text Available According to the sequential hypothesis (SH) memories acquired during wakefulness are processed during sleep in two serial steps respectively occurring during slow wave sleep (SWS) and REM sleep. During SWS memories to be retained are distinguished from irrelevant or competing traces that undergo downgrading or elimination. Processed memories are stored again during REM sleep which integrates them with preexisting memories. The hypothesis received support from a wealth of EEG, behavioral, and biochemical analyses of trained rats. Further evidence was provided by independent studies of human subjects. SH basic premises, data, and interpretations have been compared with corresponding viewpoints of the synaptic homeostatic hypothesis (SHY). Their similarities and differences are presented and discussed within the framework of sleep processing operations. SHY’s emphasis on synaptic renormalization during SWS is acknowledged to underline a key sleep effect, but this cannot marginalize sleep’s main role in selecting memories to be retained from downgrading traces, and in their integration with preexisting memories. In addition, SHY’s synaptic renormalization raises an unsolved dilemma that clashes with the accepted memory storage mechanism exclusively based on modifications of synaptic strength. This difficulty may be bypassed by the assumption that SWS-processed memories are stored again by REM sleep in brain subnuclear quantum particles. Storing of memories in quantum particles may also occur in other vigilance states. Hints are provided on ways to subject the quantum hypothesis to experimental tests.

  15. A simple model of group selection that cannot be analyzed with inclusive fitness

    NARCIS (Netherlands)

    van Veelen, M.; Luo, S.; Simon, B.

    2014-01-01

    A widespread claim in evolutionary theory is that every group selection model can be recast in terms of inclusive fitness. Although there are interesting classes of group selection models for which this is possible, we show that it is not true in general. With a simple set of group selection models,

  16. Can Methicillin-resistant Staphylococcus aureus Silently Travel From the Gut to the Wound and Cause Postoperative Infection? Modeling the "Trojan Horse Hypothesis".

    Science.gov (United States)

    Krezalek, Monika A; Hyoju, Sanjiv; Zaborin, Alexander; Okafor, Emeka; Chandrasekar, Laxmi; Bindokas, Vitas; Guyton, Kristina; Montgomery, Christopher P; Daum, Robert S; Zaborina, Olga; Boyle-Vavra, Susan; Alverdy, John C

    2018-04-01

    To determine whether intestinal colonization with methicillin-resistant Staphylococcus aureus (MRSA) can be the source of surgical site infections (SSIs). We hypothesized that gut-derived MRSA may cause SSIs via mechanisms in which circulating immune cells scavenge MRSA from the gut, home to surgical wounds, and cause infection (Trojan Horse Hypothesis). MRSA gut colonization was achieved by disrupting the microbiota with antibiotics, imposing a period of starvation and introducing MRSA via gavage. Next, mice were subjected to a surgical injury (30% hepatectomy) and rectus muscle injury and ischemia before skin closure. All wounds were cultured before skin closure. To control for postoperative wound contamination, reiterative experiments were performed in mice in which the closed wound was painted with live MRSA for 2 consecutive postoperative days. To rule out extracellular bacteremia as a cause of wound infection, MRSA was injected intravenously in mice subjected to rectus muscle ischemia and injury. All wound cultures were negative before skin closure, ruling out intraoperative contamination. Out of 40 mice, 4 (10%) developed visible abscesses. Nine mice (22.5%) had MRSA positive cultures of the rectus muscle without visible abscesses. No SSIs were observed in mice injected intravenously with MRSA. Wounds painted with MRSA after closure did not develop infections. Circulating neutrophils from mice captured by flow cytometry demonstrated MRSA in their cytoplasm. Immune cells as Trojan horses carrying gut-derived MRSA may be a plausible mechanism of SSIs in the absence of direct contamination.

  17. Modelling and analyses do not support the hypothesis that charging by power-line corona increases lung deposition of airborne particles

    International Nuclear Information System (INIS)

    Jeffers, D.

    2007-01-01

    The National Radiological Protection Board's Advisory Group on Non-ionising Radiation has recommended further study on the effects of electric charge on the deposition of 0.005-1 μm particles in the lung. Estimates have been made of the integrated ion exposure within the corona plume generated by a power line and by ionizers in an intensive care unit. Changes in the charge state of particles with sizes in the range 0.02-13 μm have been calculated for these exposures. The corona plume increases the charge per particle of 0.02 and 0.1 μm particles by on the order of 0.1. The ionizers in the intensive care unit produced negative ions, as do power lines under most conditions. Bacteria can carry on the order of 1000 charges (of either sign), and it is shown that the repulsion between such a negatively charged bacterium and negative ions prevents further ion deposition by diffusion charging. Positively charged bacteria can, however, be discharged by the ions which are attracted to them. The data provide no support for the hypothesis that ion exposure, at the levels considered, can increase deposition in the lung. (authors)

  18. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence.

    Science.gov (United States)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
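
    A minimal sketch of the brute-force Monte Carlo reference method on a toy one-parameter Gaussian model (synthetic data and an assumed uniform prior, not the study's hydrological setup): BME is estimated as the likelihood averaged over prior draws of the parameter.

        import numpy as np

        rng = np.random.default_rng(0)
        sigma = 0.5
        data = rng.normal(loc=1.0, scale=sigma, size=20)   # synthetic observations

        # Draw parameter samples from the prior U(-5, 5).
        thetas = rng.uniform(-5.0, 5.0, size=100_000)

        # Gaussian log-likelihood of the data for every sampled theta.
        resid = data[None, :] - thetas[:, None]
        log_lik = (-0.5 * (resid ** 2).sum(axis=1) / sigma**2
                   - data.size * np.log(sigma * np.sqrt(2.0 * np.pi)))

        # log BME = log of the mean likelihood, computed stably via log-sum-exp.
        log_bme = np.logaddexp.reduce(log_lik) - np.log(thetas.size)
        print(log_bme)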

  19. Information footprint of different ecohydrological data sources: using multi-objective calibration of a physically-based model as hypothesis testing

    Science.gov (United States)

    Kuppel, S.; Soulsby, C.; Maneta, M. P.; Tetzlaff, D.

    2017-12-01

    The utility of field measurements to help constrain the model solution space and identify feasible model configurations has been an increasingly central issue in hydrological model calibration. Sufficiently informative observations are necessary to ensure that the goodness of model-data fit attained effectively translates into more physically-sound information for the internal model parameters, as a basis for model structure evaluation. Here we assess the extent to which the diversity of information content can inform on the suitability of a complex, process-based ecohydrological model to simulate key water flux and storage dynamics at a long-term research catchment in the Scottish Highlands. We use the fully-distributed ecohydrological model EcH2O, calibrated against long-term datasets that encompass hydrologic and energy exchanges and ecological measurements: stream discharge, soil moisture, net radiation above canopy, and pine stand transpiration. Diverse combinations of these constraints were applied using a multi-objective cost function specifically designed to avoid compensatory effects between model-data metrics. Results revealed that calibration against virtually all datasets enabled the model to reproduce streamflow reasonably well. However, parameterizing the model to adequately capture local flux and storage dynamics, such as soil moisture or transpiration, required calibration with specific observations. This indicates that the footprint of the information contained in observations varies for each type of dataset, and that a diverse database, informing on the different compartments of the domain, is critical for testing hypotheses of catchment function and identifying a consistent model parameterization. The results foster confidence in using EcH2O to help understand current and future ecohydrological couplings in Northern catchments.

  20. Superallowed 0+→0+ nuclear β decays: A critical survey with tests of the conserved vector current hypothesis and the standard model

    International Nuclear Information System (INIS)

    Hardy, J.C.; Towner, I.S.

    2005-01-01

    A complete and critical survey is presented of all half-life, decay-energy, and branching-ratio measurements related to 20 superallowed $0^+ \to 0^+$ decays; no measurements are ignored, although some are rejected for cause and others updated. A new calculation of the statistical rate function $f$ is described and experimental $ft$ values determined. The associated theoretical corrections needed to convert these results into 'corrected' $\mathcal{F}t$ values are discussed, and careful attention is paid to the origin and magnitude of their uncertainties. As an exacting confirmation of the conserved vector current hypothesis, the corrected $\mathcal{F}t$ values are seen to be constant to three parts in $10^4$. These data are also used to set a new limit on any possible scalar interaction (assuming maximum parity violation) of $C_S/C_V = -(0.00005 \pm 0.00130)$. The average $\mathcal{F}t$ value obtained from the survey, when combined with the muon lifetime, yields the up-down quark-mixing element of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, $V_{ud} = 0.9738 \pm 0.0004$, and the unitarity test on the top row of the matrix becomes $|V_{ud}|^2 + |V_{us}|^2 + |V_{ub}|^2 = 0.9966 \pm 0.0014$ using the Particle Data Group's currently recommended values for $V_{us}$ and $V_{ub}$. If $V_{us}$ comes instead from two recent results on $K_{e3}$ decay, the unitarity sum becomes 0.9996(11). Either result can be expressed in terms of the possible existence of right-hand currents. Finally, we discuss the priorities for future theoretical and experimental work with the goal of making the CKM unitarity test more definitive.

  1. Hypothesis-driven physical examination curriculum.

    Science.gov (United States)

    Allen, Sharon; Olson, Andrew; Menk, Jeremiah; Nixon, James

    2017-12-01

    Medical students traditionally learn physical examination skills as a rote list of manoeuvres. Alternatives like hypothesis-driven physical examination (HDPE) may promote students' understanding of the contribution of physical examination to diagnostic reasoning. We sought to determine whether first-year medical students can effectively learn to perform a physical examination using an HDPE approach, and then tailor the examination to specific clinical scenarios. First-year medical students at the University of Minnesota were taught both traditional and HDPE approaches during a required 17-week clinical skills course in their first semester. The end-of-course evaluation assessed HDPE skills: students were assigned one of two cardiopulmonary cases. Each case included two diagnostic hypotheses. During an interaction with a standardised patient, students were asked to select physical examination manoeuvres in order to make a final diagnosis. Items were weighted and selection order was recorded. First-year students with minimal pathophysiology training performed well. All students selected the correct diagnosis. Importantly, students varied the order when selecting examination manoeuvres depending on the diagnoses under consideration, demonstrating early clinical decision-making skills. An early introduction to HDPE may reinforce physical examination skills for hypothesis generation and testing, and can foster early clinical decision-making skills. This has important implications for further research in physical examination instruction. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  2. The Role of Hypothesis in Constructive Design Research

    DEFF Research Database (Denmark)

    Bang, Anne Louise; Krogh, Peter; Ludvigsen, Martin

    2012-01-01

    and solid perspective on how to keep constructive design research on track, this paper offers a model for understanding the role of hypothesis in constructive design research. The model allows for understanding the hypothesis’s relation to research motivation, questions, experiments, evaluation...... position of the hypothesis as a key-governing element even in artistically led research processes....

  3. Heat transfer modelling and stability analysis of selective laser melting

    International Nuclear Information System (INIS)

    Gusarov, A.V.; Yadroitsev, I.; Bertrand, Ph.; Smurov, I.

    2007-01-01

    The process of direct manufacturing by selective laser melting basically consists of laser beam scanning over a thin powder layer deposited on a dense substrate. Complete remelting of the powder in the scanned zone and its good adhesion to the substrate ensure obtaining functional parts with improved mechanical properties. Experiments with single-line scanning indicate that an interval of scanning velocities exists where the remelted tracks are uniform. The tracks become broken if the scanning velocity is outside this interval. This is extremely undesirable and is referred to as the 'balling' effect. A numerical model of coupled radiation and heat transfer is proposed to analyse the observed instability. The 'balling' effect at high scanning velocities (above ∼20 cm/s for the present conditions) can be explained by the Plateau-Rayleigh capillary instability of the melt pool. Two factors stabilize the process as the scanning velocity decreases: a reduced length-to-width ratio of the melt pool and an increased width of its contact with the substrate.

  4. 5-HTP hypothesis of schizophrenia.

    Science.gov (United States)

    Fukuda, K

    2014-01-01

    To pose a new hypothesis of schizophrenia that affirms and unifies conventional hypotheses. Outside the brain, there are 5-HTP-containing argyrophil cells that have tryptophan hydroxylase 1 without l-aromatic amino acid decarboxylase. Monoamine oxidase in the liver and lung metabolizes 5-HT, rather than 5-HTP, and 5-HTP freely crosses the blood-brain barrier, converting to 5-HT in the brain. I therefore postulate that hyperfunction of 5-HTP-containing argyrophil cells may be a cause of schizophrenia. I investigate the consistency of this hypothesis with other hypotheses using a deductive method. Overactive 5-HTP-containing argyrophil cells produce excess amounts of 5-HTP. Abundant 5-HTP increases 5-HT within the brain (linking to the 5-HT hypothesis), and leads to negative feedback of 5-HT synthesis at the rate-limiting step catalysed by tryptophan hydroxylase 2. Owing to this negative feedback, brain tryptophan is further metabolized via the kynurenine pathway. Increased kynurenic acid contributes to deficiencies of glutamate function and dopamine activity, known causes of schizophrenia. The 5-HTP hypothesis affirms conventional hypotheses, as the metabolic condition caused by acceleration of tryptophan hydroxylase 1 and suppression of tryptophan hydroxylase 2 activates both 5-HT and kynurenic acid. In order to empirically test the theory, it will be useful to monitor serum 5-HTP and match it to different phases of schizophrenia. This hypothesis may signal a new era with schizophrenia treated as a brain-gut interaction. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Gut Microbiota and a Selectively Bred Taste Phenotype: A Novel Model of Microbiome-Behavior Relationships.

    Science.gov (United States)

    Lyte, Mark; Fodor, Anthony A; Chapman, Clinton D; Martin, Gary G; Perez-Chanona, Ernesto; Jobin, Christian; Dess, Nancy K

    2016-06-01

    The microbiota-gut-brain axis is increasingly implicated in obesity, anxiety, stress, and other health-related processes. Researchers have proposed that gut microbiota may influence dietary habits, and pathways through the microbiota-gut-brain axis make such a relationship feasible; however, few data bear on the hypothesis. As a first step in the development of a model system, the gut microbiome was examined in rat lines selectively outbred on a taste phenotype with biobehavioral profiles that have diverged with respect to energy regulation, anxiety, and stress. Occidental low and high-saccharin-consuming rats were assessed for body mass and chow, water, and saccharin intake; littermate controls had shared cages with rats in the experimental group but were not assessed. Cecum and colon microbial communities were profiled using Illumina 16S rRNA sequencing and multivariate analysis of microbial diversity and composition. The saccharin phenotype was confirmed (low-saccharin-consuming rats, 0.7Δ% [0.9Δ%]; high-saccharin-consuming rats, 28.1Δ% [3.6Δ%]). Regardless of saccharin exposure, gut microbiota differed between lines in terms of overall community similarity and taxa at lower phylogenetic levels. Specifically, 16 genera in three phyla distinguished the lines at a 10% false discovery rate. The study demonstrates for the first time that rodent lines created through selective pressure on taste and differing on functionally related correlates host different microbial communities. Whether the microbiota are causally related to the taste phenotype or its correlates remains to be determined. These findings encourage further inquiry on the relationship of the microbiome to taste, dietary habits, emotion, and health.

  6. The atomic hypothesis: physical consequences

    International Nuclear Information System (INIS)

    Rivas, Martin

    2008-01-01

    The hypothesis that matter is made of some ultimate and indivisible objects, together with the restricted relativity principle, establishes a constraint on the kind of variables we are allowed to use for the variational description of elementary particles. We consider that the atomic hypothesis not only states the indivisibility of elementary particles, but also that these ultimate objects, if not annihilated, cannot be modified by any interaction so that all allowed states of an elementary particle are only kinematical modifications of any one of them. Therefore, an elementary particle cannot have excited states. In this way, the kinematical group of spacetime symmetries not only defines the symmetries of the system, but also the variables in terms of which the mathematical description of the elementary particles can be expressed in either the classical or the quantum mechanical description. When considering the interaction of two Dirac particles, the atomic hypothesis restricts the interaction Lagrangian to a kind of minimal coupling interaction

  7. Modelling transport of chokka squid (Loligo reynaudii) paralarvae off South Africa: reviewing, testing and extending the ‘Westward Transport Hypothesis'

    CSIR Research Space (South Africa)

    Martins, RS

    2013-08-01

    Full Text Available hydrodynamic model (ROMS) to test the WTH and assessed four factors that might influence successful transport – Release Area, Month, Specific Gravity (body density) and Diel Vertical Migration (DVM) – in numerical experiments that estimated successful transport...

  8. Discussion of the Porter hypothesis

    International Nuclear Information System (INIS)

    1999-11-01

    In reaction to the long-range vision of RMNO, published in 1996, the Dutch government posed the question of whether a far-reaching and progressive modernization policy would lead to competitive advantages of high-quality products on partly new markets. Such a question is connected to the so-called Porter hypothesis: 'By stimulating innovation, strict environmental regulations can actually enhance competitiveness', from which it can be concluded that environment and economy can work together quite well. A literature study has been carried out in order to determine under which conditions this hypothesis is endorsed in the scientific literature and in policy documents. Recommendations are given for further studies. refs

  9. The thrifty phenotype hypothesis revisited

    DEFF Research Database (Denmark)

    Vaag, A A; Grunnet, L G; Arora, G P

    2012-01-01

    Twenty years ago, Hales and Barker along with their co-workers published some of their pioneering papers proposing the 'thrifty phenotype hypothesis' in Diabetologia (4;35:595-601 and 3;36:62-67). Their postulate that fetal programming could represent an important player in the origin of type 2...... of the underlying molecular mechanisms. Type 2 diabetes is a multiple-organ disease, and developmental programming, with its idea of organ plasticity, is a plausible hypothesis for a common basis for the widespread organ dysfunctions in type 2 diabetes and the metabolic syndrome. Only two among the 45 known type 2...

  10. Modelling uncertainty due to imperfect forward model and aerosol microphysical model selection in the satellite aerosol retrieval

    Science.gov (United States)

    Määttä, Anu; Laine, Marko; Tamminen, Johanna

    2015-04-01

    This study aims to characterize the uncertainty related to aerosol microphysical model selection and the modelling error due to approximations in the forward modelling. Many satellite aerosol retrieval algorithms rely on pre-calculated look-up tables of model parameters representing various atmospheric conditions. In the retrieval we need to choose the most appropriate aerosol microphysical models from the pre-defined set of models by fitting them to the observations. The aerosol properties, e.g. AOD, are then determined from the best models. This choice of an appropriate aerosol model constitutes a notable part of the AOD retrieval uncertainty. The motivation in our study was to account for these two sources in the total uncertainty budget: the uncertainty in selecting the most appropriate model, and the uncertainty resulting from the approximations in the pre-calculated aerosol microphysical model. The systematic model error was analysed by studying the behaviour of the model residuals, i.e. the differences between modelled and observed reflectances, by statistical methods. We utilised Gaussian processes to characterize the uncertainty related to approximations in aerosol microphysics modelling due to the use of look-up tables and other non-modelled systematic features in the Level 1 data. The modelling error is described by a non-diagonal covariance matrix parameterised by a correlation length, which is estimated from the residuals using computational tools from spatial statistics. In addition, we utilised Bayesian model selection and model averaging methods to account for the uncertainty due to aerosol model selection. By acknowledging the modelling error as a source of uncertainty in the retrieval of AOD from observed spectral reflectance, we allow the observed values to deviate from the modelled values within limits determined by both the measurement and modelling errors. This results in a more realistic uncertainty level of the retrieved AOD. The method is illustrated by both
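
    As a sketch of the kind of non-diagonal modelling-error covariance described, assuming an exponential correlation model and illustrative numbers (the spectral grid, residual standard deviation, and correlation length below are not from the study):

        import numpy as np

        wavelengths = np.linspace(340.0, 500.0, 30)  # spectral grid (nm), assumed
        sigma = 0.01                                 # residual std, illustrative
        corr_length = 40.0                           # correlation length (nm)

        # C_ij = sigma^2 * exp(-|lambda_i - lambda_j| / L): residuals at nearby
        # wavelengths are allowed to deviate together rather than independently.
        dist = np.abs(wavelengths[:, None] - wavelengths[None, :])
        cov = sigma**2 * np.exp(-dist / corr_length)

        # In the retrieval, this covariance enters the cost function as
        # r^T C^{-1} r for the residual vector r, widening the AOD uncertainty.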

  11. Structure and selection in an autocatalytic binary polymer model

    DEFF Research Database (Denmark)

    Tanaka, Shinpei; Fellermann, Harold; Rasmussen, Steen

    2014-01-01

    a pool of monomers, highly ordered populations with particular sequence patterns are dynamically selected out of a vast number of possible states. The interplay between the selected microscopic sequence patterns and the macroscopic cooperative structures is examined both analytically and in simulation...

  12. Debates—Hypothesis testing in hydrology: Introduction

    Science.gov (United States)

    Blöschl, Günter

    2017-03-01

    This paper introduces the papers in the "Debates—Hypothesis testing in hydrology" series. The four articles in the series discuss whether and how the process of testing hypotheses leads to progress in hydrology. Repeated experiments with controlled boundary conditions are rarely feasible in hydrology. Research is therefore not easily aligned with the classical scientific method of testing hypotheses. Hypotheses in hydrology are often enshrined in computer models which are tested against observed data. Testability may be limited due to model complexity and data uncertainty. All four articles suggest that hypothesis testing has contributed to progress in hydrology and is needed in the future. However, the procedure is usually not as systematic as the philosophy of science suggests. A greater emphasis on a creative reasoning process on the basis of clues and explorative analyses is therefore needed.

  13. Performance Measurement Model for the Supplier Selection Based on AHP

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2015-10-01

    Full Text Available The performance of the supplier is a crucial factor for the success or failure of any company. Rational and effective decision making in terms of the supplier selection process can help the organization to optimize cost and quality functions. The nature of supplier selection processes is generally complex, especially when the company has a large variety of products and vendors. Over the years, several solutions and methods have emerged for addressing the supplier selection problem (SSP). Experience and studies have shown that there is no single best way of evaluating and selecting suppliers; rather, the process varies from one organization to another. The aim of this research is to demonstrate how a multiple attribute decision making approach can be effectively applied to the supplier selection process.
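
    For concreteness, a minimal AHP sketch with an assumed three-criterion pairwise comparison matrix: criteria weights come from the principal eigenvector, and Saaty's consistency ratio checks the coherence of the judgments.

        import numpy as np

        # Pairwise comparisons (assumed): cost vs quality vs delivery.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        # Criteria weights: normalized principal eigenvector of A.
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
        cr = ci / 0.58                         # random index RI = 0.58 for n = 3
        print(weights.round(3), round(cr, 3))  # CR < 0.1 is conventionally acceptable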

  14. Genome-wide selection by mixed model ridge regression and extensions based on geostatistical models.

    Science.gov (United States)

    Schulz-Streeck, Torben; Piepho, Hans-Peter

    2010-03-31

    The success of genome-wide selection (GS) approaches will depend crucially on the availability of efficient and easy-to-use computational tools. Therefore, approaches that can be implemented using mixed models hold particular promise and deserve detailed study. A particular class of mixed models suitable for GS is given by geostatistical mixed models, when genetic distance is treated analogously to spatial distance in geostatistics. We consider various spatial mixed models for use in GS. The analyses presented for the QTL-MAS 2009 dataset pay particular attention to the modelling of residual errors as well as of polygenic effects. It is shown that geostatistical models are viable alternatives to ridge regression, one of the common approaches to GS. Correlations between genome-wide estimated breeding values and true breeding values were between 0.879 and 0.889. In the example considered, we did not find a large effect of the residual error variance modelling, largely because error variances were very small. A variance components model reflecting the pedigree of the crosses did not provide an improved fit. We conclude that geostatistical models deserve further study as a tool for GS that is easily implemented in a mixed model package.
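
    A minimal ridge-regression sketch on synthetic marker data (with an arbitrary shrinkage parameter rather than one tied to estimated variance components) illustrates the baseline against which the geostatistical models are compared:

        import numpy as np

        rng = np.random.default_rng(1)
        n, p = 200, 1000
        X = rng.integers(0, 3, size=(n, p)).astype(float)  # SNP genotypes 0/1/2
        beta_true = rng.normal(0.0, 0.05, size=p)
        y = X @ beta_true + rng.normal(0.0, 1.0, size=n)   # phenotypes

        # Ridge solution beta = (X'X + lambda I)^{-1} X'y; lambda is illustrative.
        lam = 100.0
        beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

        gebv = X @ beta_hat  # genome-wide estimated breeding values
        print(np.corrcoef(gebv, X @ beta_true)[0, 1])  # correlation with true values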

  15. Mathematical models assuming selective recruitment fitted to data for driver mortality and seat belt use in Japan.

    Science.gov (United States)

    Nakahara, Shinji; Kawamura, Takashi; Ichikawa, Masao; Wakai, Susumu

    2006-01-01

    Previous research has indicated that unbelted drivers are at higher risk of involvement in fatal crashes than belted drivers, suggesting selective recruitment: high-risk drivers are unlikely to become belt users. However, how the risk of involvement in fatal crashes among unbelted drivers varies according to the level of seat belt use among general drivers has yet to be clearly quantified. We therefore developed mathematical models describing the risk of fatal crashes in relation to seat belt use among the general public, and explored how these models fitted changes in driver mortality and in observed seat belt use, using Japanese data. Mortality data between 1979 and 1994 were obtained from vital statistics, and mortality data for the daytime and nighttime between 1980 and 2001 and belt use data between 1979 and 2001 were obtained from the National Police Agency. Regardless of the data set analyzed, exponential models, which assume that high-risk drivers gradually become belt users in order of increasing risk as seat belt use among general motorists reaches high levels, showed the best fit. Our models provide an insight into behavioral changes among high-risk drivers and support the selective recruitment hypothesis.

  16. Selfing in Haploid Plants and Efficacy of Selection: Codon Usage Bias in the Model Moss Physcomitrella patens.

    Science.gov (United States)

    Szövényi, Péter; Ullrich, Kristian K; Rensing, Stefan A; Lang, Daniel; van Gessel, Nico; Stenøien, Hans K; Conti, Elena; Reski, Ralf

    2017-06-01

    A long-term reduction in effective population size will lead to a major shift in genome evolution. In particular, when effective population size is small, genetic drift becomes dominant over natural selection. The onset of self-fertilization is one evolutionary event that considerably reduces the effective size of populations. Theory predicts that this reduction should be more dramatic in organisms capable of haploid rather than diploid selfing. Although theoretically well-grounded, this assertion has received mixed experimental support. Here, we test this hypothesis by analyzing synonymous codon usage bias of genes in the model moss Physcomitrella patens, which frequently undergoes haploid selfing. In line with population genetic theory, we found that the effect of natural selection on synonymous codon usage bias is very weak. Our conclusion is supported by four independent lines of evidence: 1) a very weak or nonsignificant correlation between gene expression and codon usage bias, 2) no increased codon usage bias in more broadly expressed genes, 3) no evidence that codon usage bias would constrain synonymous and nonsynonymous divergence, and 4) a predominant role of genetic drift on synonymous codon usage predicted by a model-based analysis. These findings show striking similarity to those observed in AT-rich genomes with weak selection for optimal codon usage and GC content overall. Our finding is in contrast to a previous study reporting adaptive codon usage bias in the moss P. patens. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  17. Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    Science.gov (United States)

    Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.

    2015-01-01

    Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
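
    A compact sketch of the voxel-wise modelling recipe on synthetic data, substituting cross-validated ridge regression for the study's linear regression, shows the fit-then-score-on-withheld-data pattern (all sizes below are illustrative):

        import numpy as np
        from sklearn.linear_model import RidgeCV

        rng = np.random.default_rng(0)
        n_train, n_test, n_feat = 1000, 386, 50            # illustrative sizes
        X = rng.normal(size=(n_train + n_test, n_feat))    # stimulus features
        w = rng.normal(size=n_feat)
        y = X @ w + rng.normal(0.0, 1.0, size=n_train + n_test)  # one voxel's BOLD

        # Fit the encoding model on the training images...
        model = RidgeCV(alphas=np.logspace(-2, 4, 7))
        model.fit(X[:n_train], y[:n_train])

        # ...and estimate the variance it predicts in the withheld portion.
        print(model.score(X[n_train:], y[n_train:]))  # R^2 on held-out data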

  18. Fourier power, subjective distance and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    Directory of Open Access Journals (Sweden)

    Mark Daniel Lescroart

    2015-11-01

    Full Text Available Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1,386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue.

  19. Selecting representative climate models for climate change impact studies : An advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change

  20. Selecting representative climate models for climate change impact studies: an advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change

  1. Modeling and Solving the Liner Shipping Service Selection Problem

    DEFF Research Database (Denmark)

    Karsten, Christian Vad; Balakrishnan, Anant

    We address a tactical planning problem, the Liner Shipping Service Selection Problem (LSSSP), facing container shipping companies. Given estimated demand between various ports, the LSSSP entails selecting the best subset of non-simple cyclic sailing routes from a given pool of candidate routes...... requirements and the hop limits to reduce problem size, and describe techniques to accelerate the solution procedure. We present computational results for realistic problem instances from the benchmark suite LINER-LIB....

  2. An Integrated DEMATEL-QFD Model for Medical Supplier Selection

    OpenAIRE

    Mehtap Dursun; Zeynep Şener

    2014-01-01

    Supplier selection is considered as one of the most critical issues encountered by operations and purchasing managers to sharpen the company’s competitive advantage. In this paper, a novel fuzzy multi-criteria group decision making approach integrating quality function deployment (QFD) and decision making trial and evaluation laboratory (DEMATEL) method is proposed for supplier selection. The proposed methodology enables to consider the impacts of inner dependence among supplier assessment cr...

  3. Evaluation of uncertainties in selected environmental dispersion models

    International Nuclear Information System (INIS)

    Little, C.A.; Miller, C.W.

    1979-01-01

    Compliance with standards of radiation dose to the general public has necessitated the use of dispersion models to predict radionuclide concentrations in the environment due to releases from nuclear facilities. Because these models are only approximations of reality and because of inherent variations in the input parameters used in these models, their predictions are subject to uncertainty. Quantification of this uncertainty is necessary to assess the adequacy of these models for use in determining compliance with protection standards. This paper characterizes the capabilities of several dispersion models to accurately predict pollutant concentrations in environmental media. Three types of models are discussed: aquatic or surface water transport models, atmospheric transport models, and terrestrial and aquatic food chain models. Using data published primarily by model users, model predictions are compared to observations.

  4. Model of Selective and Non-Selective Management of Badgers (Meles meles) to Control Bovine Tuberculosis in Badgers and Cattle.

    Science.gov (United States)

    Smith, Graham C; Delahay, Richard J; McDonald, Robbie A; Budgey, Richard

    2016-01-01

    Bovine tuberculosis (bTB) causes substantial economic losses to cattle farmers and taxpayers in the British Isles. Disease management in cattle is complicated by the role of the European badger (Meles meles) as a host of the infection. Proactive, non-selective culling of badgers can reduce the incidence of disease in cattle but may also have negative effects in the area surrounding culls that have been associated with social perturbation of badger populations. The selective removal of infected badgers would, in principle, reduce the number culled, but the effects of selective culling on social perturbation and disease outcomes are unclear. We used an established model to simulate non-selective badger culling, non-selective badger vaccination and a selective trap and vaccinate or remove (TVR) approach to badger management in two distinct areas: South West England and Northern Ireland. TVR was simulated with and without social perturbation in effect. The lower badger density in Northern Ireland caused no qualitative change in the effect of management strategies on badgers, although the absolute number of infected badgers was lower in all cases. However, probably due to differing herd density in Northern Ireland, the simulated badger management strategies caused greater variation in subsequent cattle bTB incidence. Selective culling in the model reduced the number of badgers killed by about 83% but this only led to an overall benefit for cattle TB incidence if there was no social perturbation of badgers. We conclude that the likely benefit of selective culling will be dependent on the social responses of badgers to intervention but that other population factors including badger and cattle density had little effect on the relative benefits of selective culling compared to other methods, and that this may also be the case for disease management in other wild host populations.

  5. Effects of selected operational parameters on efficacy and selectivity of electromembrane extraction. Chlorophenols as model analytes

    Czech Academy of Sciences Publication Activity Database

    Šlampová, Andrea; Kubáň, Pavel; Boček, Petr

    2014-01-01

    Roč. 35, č. 17 (2014), s. 2429-2437 ISSN 0173-0835 R&D Projects: GA ČR(CZ) GA13-05762S Institutional support: RVO:68081715 Keywords : electromembrane extraction * chlorophenols * extraction selectivity Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 3.028, year: 2014

  6. National HIV prevalence estimates for sub-Saharan Africa: controlling selection bias with Heckman-type selection models

    Science.gov (United States)

    Hogan, Daniel R; Salomon, Joshua A; Canning, David; Hammitt, James K; Zaslavsky, Alan M; Bärnighausen, Till

    2012-01-01

    Objectives Population-based HIV testing surveys have become central to deriving estimates of national HIV prevalence in sub-Saharan Africa. However, limited participation in these surveys can lead to selection bias. We control for selection bias in national HIV prevalence estimates using a novel approach, which unlike conventional imputation can account for selection on unobserved factors. Methods For 12 Demographic and Health Surveys conducted from 2001 to 2009 (N=138 300), we predict HIV status among those missing a valid HIV test with Heckman-type selection models, which allow for correlation between infection status and participation in survey HIV testing. We compare these estimates with conventional ones and introduce a simulation procedure that incorporates regression model parameter uncertainty into confidence intervals. Results Selection model point estimates of national HIV prevalence were greater than unadjusted estimates for 10 of 12 surveys for men and 11 of 12 surveys for women, and were also greater than the majority of estimates obtained from conventional imputation, with significantly higher HIV prevalence estimates for men in Cote d'Ivoire 2005, Mali 2006 and Zambia 2007. Accounting for selective non-participation yielded 95% confidence intervals around HIV prevalence estimates that are wider than those obtained with conventional imputation by an average factor of 4.5. Conclusions Our analysis indicates that national HIV prevalence estimates for many countries in sub-Saharan Africa are more uncertain than previously thought, and may be underestimated in several cases, underscoring the need for increasing participation in HIV surveys. Heckman-type selection models should be included in the set of tools used for routine estimation of HIV prevalence. PMID:23172342
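
    To make the mechanics concrete, below is a minimal two-step Heckman-style sketch on simulated data. It illustrates the selection-correction idea only, not the maximum-likelihood selection models or survey data used in the paper; all variable names and parameter values are invented.

```python
# Hypothetical two-step Heckman correction on simulated data.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=(n, 2))            # drives participation (selection)
x = rng.normal(size=(n, 1))            # drives the outcome of interest
u, v = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n).T

participate = (z @ np.array([0.8, -0.5]) + u > 0)   # selection equation
y = 0.3 * x[:, 0] + v                                # outcome, observed only if selected

# Step 1: probit for participation, then the inverse Mills ratio
probit = sm.Probit(participate.astype(float), sm.add_constant(z)).fit(disp=0)
xb = probit.fittedvalues                             # linear index z'gamma
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: outcome regression on participants, IMR corrects the selection bias
mask = participate
X2 = sm.add_constant(np.column_stack([x[mask], imr[mask]]))
outcome = sm.OLS(y[mask], X2).fit()
print(outcome.params)   # nonzero IMR coefficient signals selection on unobservables
```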

  7. 78 FR 20148 - Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in...

    Science.gov (United States)

    2013-04-03

    ... mathematical modeling methods used in predicting the dispersion of heated effluent in natural water bodies. The... COMMISSION Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in... Mathematical Models Selected to Predict Heated Effluent Dispersion in Natural Water Bodies.'' The guide is...

  8. Testing competing forms of the Milankovitch hypothesis

    DEFF Research Database (Denmark)

    Kaufmann, Robert K.; Juselius, Katarina

    2016-01-01

    We test competing forms of the Milankovitch hypothesis by estimating the coefficients and diagnostic statistics for a cointegrated vector autoregressive model that includes 10 climate variables and four exogenous variables for solar insolation. The estimates are consistent with the physical...... ice volume and solar insolation. The estimated adjustment dynamics show that solar insolation affects an array of climate variables other than ice volume, each at a unique rate. This implies that previous efforts to test the strong form of the Milankovitch hypothesis by examining the relationship...... mechanisms postulated to drive glacial cycles. They show that the climate variables are driven partly by solar insolation, determining the timing and magnitude of glaciations and terminations, and partly by internal feedback dynamics, pushing the climate variables away from equilibrium. We argue...

  9. Exploring heterogeneous market hypothesis using realized volatility

    Science.gov (United States)

    Chin, Wen Cheong; Isa, Zaidi; Mohd Nor, Abu Hassan Shaari

    2013-04-01

    This study investigates the heterogeneous market hypothesis using high frequency data. The cascaded heterogeneous trading activities with different time durations are modelled by the heterogeneous autoregressive framework. The empirical study indicated the presence of long memory behaviour and predictability elements in the financial time series, which supports the heterogeneous market hypothesis. Besides the common sum-of-squares intraday realized volatility, we also advocate two power variation realized volatilities for forecast evaluation and risk measurement, in order to overcome the possible abrupt jumps during the credit crisis. Finally, the empirical results are used in determining the market risk using the value-at-risk approach. The findings of this study have implications for informational market efficiency analysis, portfolio strategies and risk management.
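
    The heterogeneous autoregressive framework the authors build on is, at its core, a regression of next-period realized volatility on daily, weekly, and monthly volatility components. A minimal sketch with simulated data follows; the 1/5/22-day horizons are the conventional HAR choices, not necessarily the paper's exact specification.

```python
# Minimal HAR-RV sketch: tomorrow's realized volatility regressed on
# daily, weekly and monthly averages. Data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rv = pd.Series(np.random.default_rng(1).lognormal(sigma=0.3, size=1000),
               name="rv")                       # stand-in for realized volatility
X = pd.DataFrame({
    "rv_d": rv,                                  # daily component
    "rv_w": rv.rolling(5).mean(),                # weekly component (5 days)
    "rv_m": rv.rolling(22).mean(),               # monthly component (22 days)
})
y = rv.shift(-1)                                 # next-day realized volatility
data = pd.concat([y.rename("y"), X], axis=1).dropna()
model = sm.OLS(data["y"], sm.add_constant(data[["rv_d", "rv_w", "rv_m"]])).fit()
print(model.params)
```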

  10. Natural Selection at Work: An Accelerated Evolutionary Computing Approach to Predictive Model Selection

    Science.gov (United States)

    Akman, Olcay; Hallam, Joshua W.

    2010-01-01

    We implement genetic algorithm based predictive model building as an alternative to the traditional stepwise regression. We then employ the Information Complexity Measure (ICOMP) as a measure of model fitness instead of the commonly used measure of R-square. Furthermore, we propose some modifications to the genetic algorithm to increase the overall efficiency. PMID:20661297

  11. Natural selection at work: an accelerated evolutionary computing approach to predictive model selection

    Directory of Open Access Journals (Sweden)

    Olcay Akman

    2010-07-01

    Full Text Available We implement genetic algorithm based predictive model building as an alternative to the traditional stepwise regression. We then employ the Information Complexity Measure (ICOMP as a measure of model fitness instead of the commonly used measure of R-square. Furthermore, we propose some modifications to the genetic algorithm to increase the overall efficiency.
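
    A toy version of the genetic-algorithm search over predictor subsets conveys the approach. The sketch below scores candidate subsets with AIC as a readily available stand-in for ICOMP, which has no standard library implementation; the population size, rates, and data are illustrative.

```python
# Toy genetic algorithm for predictor-subset selection; AIC stands in for ICOMP.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X[:, 0] - 2 * X[:, 3] + rng.normal(size=n)   # only predictors 0 and 3 matter

def fitness(mask):
    if not mask.any():
        return np.inf                             # empty model is inadmissible
    return sm.OLS(y, sm.add_constant(X[:, mask])).fit().aic

pop = rng.integers(0, 2, size=(30, p)).astype(bool)   # random initial subsets
for _ in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:10]]             # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, p)
        child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
        child ^= rng.random(p) < 0.05                  # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("selected predictors:", np.flatnonzero(best))
```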

  12. Selection bias in species distribution models: An econometric approach on forest trees based on structural modeling

    Science.gov (United States)

    Martin-StPaul, N. K.; Ay, J. S.; Guillemot, J.; Doyen, L.; Leadley, P.

    2014-12-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global changes on species. In human dominated ecosystems the presence of a given species is the result of both its ecological suitability and human footprint on nature such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forest trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the outputs of the SSDM with outputs of a classical SDM (i.e. Biomod ensemble modelling) in terms of bioclimatic response curves and potential distributions under current climate and climate change scenarios. The shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between SSDM and classical SDMs, with contrasted patterns according to species and spatial resolutions. The magnitude and directions of these differences were dependent on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong misestimation of the modelled actual and future probability of presence. Beyond this selection bias, the SSDM we propose represents

  13. Model Selection and Risk Estimation with Applications to Nonlinear Ordinary Differential Equation Systems

    DEFF Research Database (Denmark)

    Mikkelsen, Frederik Vissing

    Broadly speaking, this thesis is devoted to model selection applied to ordinary differential equations and risk estimation under model selection. A model selection framework was developed for modelling time course data by ordinary differential equations. The framework is accompanied by the R software...... effective computational tools for estimating unknown structures in dynamical systems, such as gene regulatory networks, which may be used to predict downstream effects of interventions in the system. A recommended algorithm based on the computational tools is presented and thoroughly tested in various...... simulation studies and applications. The second part of the thesis also concerns model selection, but focuses on risk estimation, i.e., estimating the error of mean estimators involving model selection. An extension of Stein's unbiased risk estimate (SURE), which applies to a class of estimators with model...

  14. Model selection criteria : how to evaluate order restrictions

    NARCIS (Netherlands)

    Kuiper, R.M.

    2012-01-01

    Researchers often have ideas about the ordering of model parameters. They frequently have one or more theories about the ordering of the group means, in analysis of variance (ANOVA) models, or about the ordering of coefficients corresponding to the predictors, in regression models. A researcher might

  15. The Selection of Turbulence Models for Prediction of Room Airflow

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    This paper discusses the use of different turbulence models and their advantages in given situations. As an example, it is shown that a simple zero-equation model can be used for the prediction of special situations as flow with a low level of turbulence. A zero-equation model with compensation...

  16. A Four-Step Model for Teaching Selection Interviewing Skills

    Science.gov (United States)

    Kleiman, Lawrence S.; Benek-Rivera, Joan

    2010-01-01

    The topic of selection interviewing lends itself well to experience-based teaching methods. Instructors often teach this topic by using a two-step process. The first step consists of lecturing students on the basic principles of effective interviewing. During the second step, students apply these principles by role-playing mock interviews with…

  17. Modelling the negative effects of landscape fragmentation on habitat selection

    NARCIS (Netherlands)

    Langevelde, van F.

    2015-01-01

    Landscape fragmentation constrains movement of animals between habitat patches. Fragmentation may, therefore, limit the possibilities to explore and select the best habitat patches, and some animals may have to cope with low-quality patches due to these movement constraints. If so, these individuals

  18. Selecting Human Error Types for Cognitive Modelling and Simulation

    NARCIS (Netherlands)

    Mioch, T.; Osterloh, J.P.; Javaux, D.

    2010-01-01

    This paper presents a method that has enabled us to make a selection of error types and error production mechanisms relevant to the HUMAN European project, and discusses the reasons underlying those choices. We claim that this method has the advantage that it is very exhaustive in determining the

  19. RUC at TREC 2014: Select Resources Using Topic Models

    Science.gov (United States)

    2014-11-01

    preprocess the data by parsing the pages (html, txt, doc, xls, ppt, pdf, xml files) into tokens, removing the stopwords listed in the Indri’s...Gravano. Classification-Aware Hidden-Web Text Database Selection. ACM Trans. Inf. Syst. Vol. 26, No. 2, Article 6, April 2008. [8] J. Seo and B. W

  20. The Living Dead: Transformative Experiences in Modelling Natural Selection

    Science.gov (United States)

    Petersen, Morten Rask

    2017-01-01

    This study considers how students change their coherent conceptual understanding of natural selection through a hands-on simulation. The results show that most students change their understanding. In addition, some students also underwent a transformative experience and used their new knowledge in a leisure time activity. These transformative…

  1. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    Directory of Open Access Journals (Sweden)

    Mark N Read

    2016-09-01

    Full Text Available The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto
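
    For intuition, a correlated random walk of the kind evaluated in the paper can be simulated in a few lines: turn angles drawn from a von Mises distribution give directional persistence, and gamma-distributed speeds give heterogeneity. The parameter values below are illustrative, not fitted to the two-photon data.

```python
# Sketch of a 2D correlated random walk with illustrative cell parameters.
import numpy as np

rng = np.random.default_rng(3)
n_steps, speed_mu, turn_kappa = 500, 5.0, 2.0    # hypothetical values

heading = rng.uniform(0, 2 * np.pi)
pos = np.zeros((n_steps + 1, 2))
for t in range(n_steps):
    heading += rng.vonmises(0.0, turn_kappa)     # small turns => persistence
    speed = rng.gamma(shape=4.0, scale=speed_mu / 4.0)  # heterogeneous speeds
    pos[t + 1] = pos[t] + speed * np.array([np.cos(heading), np.sin(heading)])

# Meandering index (net displacement over path length), one of the metrics
# the authors evaluate motility models against.
path_len = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))
print("meandering index:", np.linalg.norm(pos[-1] - pos[0]) / path_len)
```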

  2. Model selection for integrated pest management with stochasticity.

    Science.gov (United States)

    Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel

    2018-04-07

    In Song and Xiang (2006), an integrated pest management model with periodically varying climatic conditions was introduced. In order to address a wider range of environmental effects, the authors here have embarked upon a series of studies resulting in a more flexible modeling approach. In Akman et al. (2013), the impact of randomly changing environmental conditions is examined by incorporating stochasticity into the birth pulse of the prey species. In Akman et al. (2014), the authors introduce a class of models via a mixture of two birth-pulse terms and determined conditions for the global and local asymptotic stability of the pest eradication solution. With this work, the authors unify the stochastic and mixture model components to create further flexibility in modeling the impacts of random environmental changes on an integrated pest management system. In particular, we first determine the conditions under which solutions of our deterministic mixture model are permanent. We then analyze the stochastic model to find the optimal value of the mixing parameter that minimizes the variance in the efficacy of the pesticide. Additionally, we perform a sensitivity analysis to show that the corresponding pesticide efficacy determined by this optimization technique is indeed robust. Through numerical simulations we show that permanence can be preserved in our stochastic model. Our study of the stochastic version of the model indicates that our results on the deterministic model provide informative conclusions about the behavior of the stochastic model. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses the method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. However, in the present research paper, a modified Wald test statistic due to Engle, Robert [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses using the iterative NLLS estimator based on nonlinear studentized residuals has been proposed. In this research article an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained the methods of testing nonlinear hypotheses. This paper uses asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models with suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
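
    As an illustration of the general recipe, a Wald test of a nonlinear restriction on NLLS estimates can be built from the delta method. The model, restriction, and data below are invented for the sketch; this is not the specific modified statistic derived in the paper.

```python
# Wald test of a nonlinear restriction on NLLS estimates via the delta method.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

rng = np.random.default_rng(4)
x = np.linspace(0, 2, 200)
y = 2.0 * np.exp(0.5 * x) + rng.normal(scale=0.2, size=x.size)

f = lambda x, a, b: a * np.exp(b * x)
theta, cov = curve_fit(f, x, y, p0=[1.0, 1.0])   # iterative NLLS estimator

# H0: g(theta) = a * b - 1 = 0 (a nonlinear restriction)
g = theta[0] * theta[1] - 1.0
G = np.array([theta[1], theta[0]])               # gradient of g w.r.t. (a, b)
wald = g**2 / (G @ cov @ G)                      # W ~ chi2(1) under H0
print("Wald statistic:", wald, "p-value:", chi2.sf(wald, df=1))
```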

  4. Extra dimensions hypothesis in high energy physics

    Directory of Open Access Journals (Sweden)

    Volobuev Igor

    2017-01-01

    Full Text Available We discuss the history of the extra dimensions hypothesis and the physics and phenomenology of models with large extra dimensions with an emphasis on the Randall-Sundrum (RS) model with two branes. We argue that the Standard Model extension based on the RS model with two branes is phenomenologically acceptable only if the inter-brane distance is stabilized. Within such an extension of the Standard Model, we study the influence of the infinite Kaluza-Klein (KK) towers of the bulk fields on collider processes. In particular, we discuss the modification of the scalar sector of the theory, the Higgs-radion mixing due to the coupling of the Higgs boson to the radion and its KK tower, and the experimental restrictions on the mass of the radion-dominated states.

  5. Selected Aspects of Computer Modeling of Reinforced Concrete Structures

    Directory of Open Access Journals (Sweden)

    Szczecina M.

    2016-03-01

    Full Text Available The paper presents some important aspects concerning the material constants of concrete and the stages of modeling reinforced concrete structures. The problems taken into account are: the choice of a proper material model for concrete, establishing the compressive and tensile behavior of concrete, and establishing the values of the dilation angle, fracture energy and relaxation time for concrete. Proper values of the material constants are fixed in simple compression and tension tests. The effectiveness and correctness of the applied model are checked on the example of reinforced concrete frame corners under an opening bending moment. Calculations are performed in Abaqus software using the Concrete Damaged Plasticity model of concrete.

  6. Bronchodilatory and anti-inflammatory properties of inhaled selective phosphodiesterase inhibitors in a guinea pig model of allergic asthma

    NARCIS (Netherlands)

    Santing, R.E; de Boer, J; Rohof, A.A B; van der Zee, N.M; Zaagsma, Hans

    2001-01-01

    In a guinea pig model of allergic asthma, we investigated the effects of the selective phosphodiesterase inhibitors rolipram (phosphodiesterase 4-selective), Org 9935 (phosphodiesterase 3-selective) and Org 20241 (dual phosphodiesterase 4/phosphodiesterase 3-selective), administered by aerosol

  7. A model selection support system for numerical simulations of nuclear thermal-hydraulics

    International Nuclear Information System (INIS)

    Gofuku, Akio; Shimizu, Kenji; Sugano, Keiji; Yoshikawa, Hidekazu; Wakabayashi, Jiro

    1990-01-01

    In order to efficiently execute a dynamic simulation of a large-scale engineering system such as a nuclear power plant, it is necessary to develop an intelligent simulation support system for all phases of the simulation. This study is concerned with intelligent support for the program development phase and develops an adequate model selection support method, applying AI (Artificial Intelligence) techniques so that a simulation is executed consistently with its purpose and conditions. A prototype expert system to support model selection for numerical simulations of nuclear thermal-hydraulics in the case of a cold leg small break loss-of-coolant accident of a PWR plant is now under development on a personal computer. The steps to support the selection of both the fluid model and the constitutive equations for the drift flux model have been developed. Several cases of model selection were carried out and reasonable model selection results were obtained. (author)

  8. Optimal selection of Orbital Replacement Unit on-orbit spares - A Space Station system availability model

    Science.gov (United States)

    Schwaab, Douglas G.

    1991-01-01

    A mathematical programming model is presented to optimize the selection of Orbital Replacement Unit on-orbit spares for the Space Station. The model maximizes system availability under the constraints of logistics resupply-cargo weight and volume allocations.

  9. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  10. Optimizing warehouse logistics operations through site selection models : Istanbul, Turkey

    OpenAIRE

    Erdemir, Ugur

    2003-01-01

    Approved for public release; distribution is unlimited This thesis makes a cost benefit analysis of relocating the outdated and earthquake damaged supply distribution center of the Turkish Navy. Given the dynamic environment surrounding the military operations, logistic sustainability requirements, rapid information technology developments, and budget-constrained Turkish DoD acquisition environment, the site selection of a supply distribution center is critical to the future operations and...

  11. Consumer health information seeking as hypothesis testing.

    Science.gov (United States)

    Keselman, Alla; Browne, Allen C; Kaufman, David R

    2008-01-01

    Despite the proliferation of consumer health sites, lay individuals often experience difficulty finding health information online. The present study attempts to understand users' information seeking difficulties by drawing on a hypothesis testing explanatory framework. It also addresses the role of user competencies and their interaction with internet resources. Twenty participants were interviewed about their understanding of a hypothetical scenario about a family member suffering from stable angina and then searched MedlinePlus consumer health information portal for information on the problem presented in the scenario. Participants' understanding of heart disease was analyzed via semantic analysis. Thematic coding was used to describe information seeking trajectories in terms of three key strategies: verification of the primary hypothesis, narrowing search within the general hypothesis area and bottom-up search. Compared to an expert model, participants' understanding of heart disease involved different key concepts, which were also differently grouped and defined. This understanding provided the framework for search-guiding hypotheses and results interpretation. Incorrect or imprecise domain knowledge led individuals to search for information on irrelevant sites, often seeking out data to confirm their incorrect initial hypotheses. Online search skills enhanced search efficiency, but did not eliminate these difficulties. Regardless of their web experience and general search skills, lay individuals may experience difficulty with health information searches. These difficulties may be related to formulating and evaluating hypotheses that are rooted in their domain knowledge. Informatics can provide support at the levels of health information portals, individual websites, and consumer education tools.

  12. River water quality model no. 1 (RWQM1): III. Biochemical submodel selection

    DEFF Research Database (Denmark)

    Vanrolleghem, P.; Borchardt, D.; Henze, Mogens

    2001-01-01

    The new River Water Quality Model no.1 introduced in the two accompanying papers by Shanahan et al. and Reichert et al. is comprehensive. Shanahan et al. introduced a six-step decision procedure to select the necessary model features for a certain application. This paper specifically addresses one...... of these steps, i.e. the selection of submodels of the comprehensive biochemical conversion model introduced in Reichert et al. Specific conditions for inclusion of one or the other conversion process or model component are introduced, as are some general rules that can support the selection. Examples...... of simplified models are presented....

  13. Antiaging therapy: a prospective hypothesis

    Directory of Open Access Journals (Sweden)

    Shahidi Bonjar MR

    2015-01-01

    Full Text Available Mohammad Rashid Shahidi Bonjar,1 Leyla Shahidi Bonjar2 1School of Dentistry, Kerman University of Medical Sciences, Kerman Iran; 2Department of Pharmacology, College of Pharmacy, Kerman University of Medical Sciences, Kerman, Iran Abstract: This hypothesis proposes a new prospective approach to slow the aging process in older humans. The hypothesis could lead to developing new treatments for age-related illnesses and help humans to live longer. This hypothesis has no previous documentation in scientific media and has no protocol. Scientists have presented evidence that systemic aging is influenced by peculiar molecules in the blood. Researchers at Albert Einstein College of Medicine, New York, and Harvard University in Cambridge discovered elevated titer of aging-related molecules (ARMs in blood, which trigger cascade of aging process in mice; they also indicated that the process can be reduced or even reversed. By inhibiting the production of ARMs, they could reduce age-related cognitive and physical declines. The present hypothesis offers a new approach to translate these findings into medical treatment: extracorporeal adjustment of ARMs would lead to slower rates of aging. A prospective “antiaging blood filtration column” (AABFC is a nanotechnological device that would fulfill the central role in this approach. An AABFC would set a near-youth homeostatic titer of ARMs in the blood. In this regard, the AABFC immobilizes ARMs from the blood while blood passes through the column. The AABFC harbors antibodies against ARMs. ARM antibodies would be conjugated irreversibly to ARMs on contact surfaces of the reaction platforms inside the AABFC till near-youth homeostasis is attained. The treatment is performed with the aid of a blood-circulating pump. Similar to a renal dialysis machine, blood would circulate from the body to the AABFC and from there back to the body in a closed circuit until ARMs were sufficiently depleted from the blood. The

  14. A Molecular–Structure Hypothesis

    Directory of Open Access Journals (Sweden)

    Jan C. A. Boeyens

    2010-11-01

    Full Text Available The self-similar symmetry that occurs between atomic nuclei, biological growth structures, the solar system, globular clusters and spiral galaxies suggests that a similar pattern should characterize atomic and molecular structures. This possibility is explored in terms of the current molecular structure-hypothesis and its extension into four-dimensional space-time. It is concluded that a quantum molecule only has structure in four dimensions and that classical (Newtonian structure, which occurs in three dimensions, cannot be simulated by quantum-chemical computation.

  15. Default Bayes Factors for Model Selection in Regression

    Science.gov (United States)

    Rouder, Jeffrey N.; Morey, Richard D.

    2012-01-01

    In this article, we present a Bayes factor solution for inference in multiple regression. Bayes factors are principled measures of the relative evidence from data for various models or positions, including models that embed null hypotheses. In this regard, they may be used to state positive evidence for a lack of an effect, which is not possible…

  16. The analysis of the capacity of the selected measures of decision-making models in companies

    OpenAIRE

    Helena Kościelniak; Beata Skowron-Grabowska; Sylwia Łęgowik-Świącik; Małgorzata Łęgowik-Małolepsza

    2015-01-01

    The paper aims at analyzing the information capacity of selected instruments for the assessment of decision-making models in the analyzed companies. The paper presents the idea and concepts of decision-making models and discusses selected instruments for the assessment of decision-making models in enterprises. The final part of the paper quantifies decision-making models in the investigated cement industry companies. To mee...

  17. Varying Coefficient Panel Data Model in the Presence of Endogenous Selectivity and Fixed Effects

    OpenAIRE

    Malikov, Emir; Kumbhakar, Subal C.; Sun, Yiguo

    2013-01-01

    This paper considers a flexible panel data sample selection model in which (i) the outcome equation is permitted to take a semiparametric, varying coefficient form to capture potential parameter heterogeneity in the relationship of interest, (ii) both the outcome and (parametric) selection equations contain unobserved fixed effects and (iii) selection is generalized to a polychotomous case. We propose a two-stage estimator. Given consistent parameter estimates from the selection equation obta...

  18. Variable selection models for genomic selection using whole-genome sequence data and singular value decomposition.

    Science.gov (United States)

    Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen

    2017-12-27

    Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 − π). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP
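
    The SVD shortcut can be sketched for the simpler ridge/SNP-BLUP case: with X = U diag(d) Vᵀ, shrunken marker effects are obtained without iteration. The BayesC-specific posterior-probability step from the paper is omitted here, and the dimensions and shrinkage value are illustrative.

```python
# Direct (non-iterative) marker-effect estimation via SVD, ridge/SNP-BLUP style.
import numpy as np

rng = np.random.default_rng(5)
n, m = 300, 2000                      # individuals, markers (m >> n is typical)
X = rng.integers(0, 3, size=(n, m)).astype(float)   # 0/1/2 genotype codes
X -= X.mean(axis=0)                                  # centre genotypes
beta_true = np.zeros(m)
beta_true[rng.choice(m, 20, replace=False)] = rng.normal(size=20)
y = X @ beta_true + rng.normal(size=n)

lam = 100.0                                          # shrinkage parameter
U, d, Vt = np.linalg.svd(X, full_matrices=False)     # X = U diag(d) Vt
shrink = d / (d**2 + lam)
beta_hat = Vt.T @ (shrink * (U.T @ y))               # ridge solution in one pass
print("corr(beta_hat, beta_true):", np.corrcoef(beta_hat, beta_true)[0, 1])
```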

  19. Identification of landscape features influencing gene flow: How useful are habitat selection models?

    Science.gov (United States)

    Gretchen H. Roffler; Michael K. Schwartz; Kristine Pilgrim; Sandra L. Talbot; George K. Sage; Layne G. Adams; Gordon Luikart

    2016-01-01

    Understanding how dispersal patterns are influenced by landscape heterogeneity is critical for modeling species connectivity. Resource selection function (RSF) models are increasingly used in landscape genetics approaches. However, because the ecological factors that drive habitat selection may be different from those influencing dispersal and gene flow, it is...

  20. Is PMI the Hypothesis or the Null Hypothesis?

    Science.gov (United States)

    Tarone, Aaron M; Sanford, Michelle R

    2017-09-01

    Over the past several decades, there have been several strident exchanges regarding whether forensic entomologists estimate the postmortem interval (PMI), minimum PMI, or something else. During that time, there has been a proliferation of terminology reflecting this concern regarding "what we do." This has been a frustrating conversation for some in the community because much of this debate appears to be centered on what assumptions are acknowledged directly and which are embedded within a list of assumptions (or ignored altogether) in the literature and in case reports. An additional component of the conversation centers on a concern that moving away from the use of certain terminology like PMI acknowledges limitations and problems that would make the application of entomology appear less useful in court-a problem for lawyers, but one that should not be problematic for scientists in the forensic entomology community, as uncertainty is part of science that should and can be presented effectively in the courtroom (e.g., population genetic concepts in forensics). Unfortunately, a consequence of the way this conversation is conducted is that even as all involved in the debate acknowledge the concerns of their colleagues, parties continue to talk past one another advocating their preferred terminology. Progress will not be made until the community recognizes that all of the terms under consideration take the form of null hypothesis statements and that thinking about "what we do" as a null hypothesis has useful legal and scientific ramifications that transcend arguments over the usage of preferred terminology. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Biostatistics series module 2: Overview of hypothesis testing

    Directory of Open Access Journals (Sweden)

    Avijit Hazra

    2016-01-01

    Full Text Available Hypothesis testing (or statistical inference is one of the major applications of biostatistics. Much of medical research begins with a research question that can be framed as a hypothesis. Inferential statistics begins with a null hypothesis that reflects the conservative position of no change or no difference in comparison to baseline or between groups. Usually, the researcher has reason to believe that there is some effect or some difference which is the alternative hypothesis. The researcher therefore proceeds to study samples and measure outcomes in the hope of generating evidence strong enough for the statistician to be able to reject the null hypothesis. The concept of the P value is almost universally used in hypothesis testing. It denotes the probability of obtaining by chance a result at least as extreme as that observed, even when the null hypothesis is true and no real difference exists. Usually, if P is < 0.05 the null hypothesis is rejected and sample results are deemed statistically significant. With the increasing availability of computers and access to specialized statistical software, the drudgery involved in statistical calculations is now a thing of the past, once the learning curve of the software has been traversed. The life sciences researcher is therefore free to devote oneself to optimally designing the study, carefully selecting the hypothesis tests to be applied, and taking care in conducting the study well. Unfortunately, selecting the right test seems difficult initially. Thinking of the research hypothesis as addressing one of five generic research questions helps in selection of the right hypothesis test. In addition, it is important to be clear about the nature of the variables (e.g., numerical vs. categorical; parametric vs. nonparametric and the number of groups or data sets being compared (e.g., two or more than two at a time. The same research question may be explored by more than one type of hypothesis test
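
    The workflow described above, from null hypothesis to P value, reduces in the simplest two-group case to a few lines of code; the simulated blood-pressure data below are purely illustrative.

```python
# Minimal two-sample t-test: null hypothesis of no difference between groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
control = rng.normal(loc=120, scale=10, size=40)     # e.g. blood pressure
treated = rng.normal(loc=113, scale=10, size=40)

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("reject the null hypothesis of no difference")
```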

  2. Optimal covariance selection for estimation using graphical models

    OpenAIRE

    Vichik, Sergey; Oshman, Yaakov

    2011-01-01

    We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...

  3. Fuzzy Multicriteria Model for Selection of Vibration Technology

    Directory of Open Access Journals (Sweden)

    María Carmen Carnero

    2016-01-01

    Full Text Available The benefits of applying the vibration analysis program are well known and have been so for decades. A large number of contributions have been produced discussing new diagnostic, signal treatment, technical parameter analysis, and prognosis techniques. However, to obtain the expected benefits from a vibration analysis program, it is necessary to choose the instrumentation which guarantees the best results. Despite its importance, in the literature, there are no models to assist in taking this decision. This research describes an objective model using Fuzzy Analytic Hierarchy Process (FAHP to make a choice of the most suitable technology among portable vibration analysers. The aim is to create an easy-to-use model for processing, manufacturing, services, and research organizations, to guarantee adequate decision-making in the choice of vibration analysis technology. The model described recognises that judgements are often based on ambiguous, imprecise, or inadequate information that cannot provide precise values. The model incorporates judgements from several decision-makers who are experts in the field of vibration analysis, maintenance, and electronic devices. The model has been applied to a Health Care Organization.

  4. Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection

    Science.gov (United States)

    Harwati

    2017-06-01

    Supplier selection is a decision with many criteria. Supplier selection models usually involve more than five main criteria and more than 10 sub-criteria; in fact, many models include more than 20 criteria. Involving too many criteria sometimes makes supplier selection models difficult to apply in companies. This research focuses on designing a supplier selection model that is easy and simple to apply in the company. The Analytical Hierarchy Process (AHP) is used to weight the criteria. The analysis shows that four easy and simple criteria can be used to select suppliers: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2) and service (weight 0.1). A real case simulation shows that the simple model provides the same decision as a more complex model.
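
    With only four weighted criteria, the resulting model is a plain weighted sum, as the sketch below shows; the supplier names and normalised ratings are hypothetical, while the weights are those reported above.

```python
# Weighted-sum supplier score using the paper's four AHP criteria weights.
weights = {"price": 0.4, "shipment": 0.3, "quality": 0.2, "service": 0.1}

suppliers = {   # ratings normalised to [0, 1], higher is better (hypothetical)
    "Supplier A": {"price": 0.9, "shipment": 0.6, "quality": 0.7, "service": 0.8},
    "Supplier B": {"price": 0.7, "shipment": 0.9, "quality": 0.8, "service": 0.5},
}

scores = {name: sum(weights[c] * r[c] for c in weights)
          for name, r in suppliers.items()}
print(max(scores, key=scores.get), scores)
```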

  5. The conscious access hypothesis: Explaining the consciousness.

    Science.gov (United States)

    Prakash, Ravi

    2008-01-01

    The phenomenon of conscious awareness or consciousness is complicated but fascinating. Although this concept has intrigued mankind since antiquity, the exploration of consciousness from scientific perspectives is not very old. Among the myriad theories regarding the nature, functions and mechanism of consciousness, cognitive theories have of late received wider acceptance. One of the most exciting hypotheses in recent times has been the "conscious access hypothesis" based on the "global workspace model of consciousness". It underscores an important property of consciousness, the global access of information in the cerebral cortex. The present article reviews the "conscious access hypothesis" in terms of its theoretical underpinnings as well as the experimental support it has received.

  6. Bridging the Gap: From Model Surfaces to Nanoparticle Analogs for Selective Oxidation and Steam Reforming of Methanol and Selective Hydrogenation Catalysis

    Science.gov (United States)

    Boucher, Matthew B.

    Most industrial catalysts are very complex, comprising of non-uniform materials with varying structures, impurities, and interaction between the active metal and supporting substrate. A large portion of the ongoing research in heterogeneous catalysis focuses on understanding structure-function relationships in catalytic materials. In parallel, there is a large area of surface science research focused on studying model catalytic systems for which structural parameters can be tuned and measured with high precision. It is commonly argued, however, that these systems are oversimplified, and that observations made in model systems do not translate to robust catalysts operating in practical environments; this discontinuity is often referred to as a "gap." The focus of this thesis is to explore the mutual benefits of surface science and catalysis, or "bridge the gap," by studying two catalytic systems in both ultra-high vacuum (UHV) and near ambient-environments. The first reaction is the catalytic steam reforming of methanol (SRM) to hydrogen and carbon dioxide. The SRM reaction is a promising route for on-demand hydrogen production. For this catalytic system, the central hypothesis in this thesis is that a balance between redox capability and weak binding of reaction intermediates is necessary for high SRM activity and selectivity to carbon dioxide. As such, a new catalyst for the SRM reaction is developed which incorporates very small amounts of gold (liquid-phase, stirred-tank batch reactor under a hydrogen head pressure of approximately 7 bar. Palladium alloyed into the surface of otherwise inactive copper nanoparticles shows a marked improvement in selectivity when compared to monometallic palladium catalysts with the same metal loading. This effect is attributed hydrogen spillover onto the copper surface. In summary, the development of new, highly active and selective catalysts for the methanol steam reforming reaction and for the partial hydrogenation of alkynes

  7. Selecting a climate model subset to optimise key ensemble properties

    Directory of Open Access Journals (Sweden)

    N. Herger

    2018-02-01

    Full Text Available End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.

  8. Selecting a climate model subset to optimise key ensemble properties

    Science.gov (United States)

    Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
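
    The core optimisation can be demonstrated with a brute-force toy version: score every candidate subset by a cost that penalises mean bias against observations and loss of ensemble spread. The cost function below is a simplified stand-in for the paper's criteria, and all data are synthetic.

```python
# Toy ensemble subset selection: keep mean performance, maintain spread.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n_models, n_grid = 10, 50
ensemble = rng.normal(size=(n_models, n_grid)) + rng.normal(size=n_grid)
obs = rng.normal(size=n_grid)                       # synthetic "observations"

def cost(idx):
    sub = ensemble[list(idx)]
    bias = np.mean((sub.mean(axis=0) - obs) ** 2)   # subset-mean performance
    spread_gap = abs(sub.std(axis=0).mean() - ensemble.std(axis=0).mean())
    return bias + spread_gap                        # simplified joint criterion

best = min(combinations(range(n_models), 5), key=cost)   # exhaustive for 10 models
print("selected model indices:", best)
```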

  9. Selected developments and applications of Leontief models in industrial ecology

    International Nuclear Information System (INIS)

    Stroemman, Anders Hammer

    2005-01-01

    Thesis Outline: This thesis investigates issues of environmental repercussions on processes of three spatial scales; a single process plant, a regional value chain and the global economy. The first paper investigates environmental repercussions caused by a single process plant using an open Leontief model with combined physical and monetary units in what is commonly referred to as a hybrid life cycle model. Physical capital requirements are treated as any other good. Resources and environmental stressors, thousands in total, are accounted for and assessed by aggregation using standard life cycle impact assessment methods. The second paper presents a methodology for establishing and combining input-output matrices and life-cycle inventories in a hybrid life cycle inventory. Information contained within different requirements matrices are combined and issues of double counting that arise are addressed and methods for eliminating these are developed and presented. The third paper is an extension of the first paper. Here the system analyzed is increased from a single plant and component in the production network to a series of nodes, constituting a value chain. The hybrid framework proposed in paper two is applied to analyze the use of natural gas, methanol and hydrogen as transportation fuels. The fourth paper presents the development of a World Trade Model with Bilateral Trade, an extension of the World Trade Model (Duchin, 2005). The model is based on comparative advantage and is formulated as a linear program. It endogenously determines the regional output of sectors and bilateral trade flows between regions. The model may be considered a Leontief substitution model where substitution of production is allowed between regions. The primal objective of the model requires the minimization of global factor costs. The fifth paper demonstrates how the World Trade Model with Bilateral Trade can be applied to address questions relevant for industrial ecology. The model is
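
    The open Leontief model underlying the first papers solves x = Ax + d for gross outputs x, after which environmental repercussions follow by applying per-unit stressor intensities. A worked two-sector example, with all coefficients invented for illustration:

```python
# Open Leontief quantity model: x = (I - A)^{-1} d, plus embodied emissions.
import numpy as np

A = np.array([[0.2, 0.3],     # inter-industry requirements per unit of output
              [0.1, 0.4]])
d = np.array([100.0, 50.0])   # final demand by sector

x = np.linalg.solve(np.eye(2) - A, d)   # gross output needed to meet demand
print("gross outputs:", x)

# Stressor intensities per unit output give total (direct + indirect)
# emissions embodied in final demand.
s = np.array([0.5, 1.2])      # e.g. kg CO2 per unit of sector output
print("total emissions:", s @ x)
```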

  10. Selection of References in Wind Turbine Model Predictive Control Design

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Hovgaard, Tobias

    2015-01-01

    a model predictive controller for a wind turbine. One of the important aspects of a tracking control problem is how to set up the optimal reference tracking problem, as it might be relevant to track, e.g., the three concurrent references: optimal pitch angle, optimal rotational speed, and optimal power.... The importance of the individual references differs depending, in particular, on the wind speed. In this paper we investigate the performance of a reference tracking model predictive controller with two different setups of the used optimal reference signals. The controllers are evaluated using an industrial high...

  11. Selection of antioxidants against ovarian oxidative stress in mouse model.

    Science.gov (United States)

    Li, Bojiang; Weng, Qiannan; Liu, Zequn; Shen, Ming; Zhang, Jiaqing; Wu, Wangjun; Liu, Honglin

    2017-12-01

    Oxidative stress (OS) plays an important role in the process of ovarian granulosa cell apoptosis and follicular atresia. The aim of this study was to select antioxidants against OS in ovarian tissue. First, we chose six antioxidants and analyzed the reactive oxygen species (ROS) level in ovarian tissue. The results showed that proanthocyanidins, gallic acid, curcumin, and carotene decreased the ROS level compared with the control group. We further demonstrated that both proanthocyanidins and gallic acid increase antioxidant enzyme activity. Moreover, no change in the ROS level was observed in the proanthocyanidins and gallic acid groups in brain, liver, spleen, and kidney tissues. Finally, we found that proanthocyanidins and gallic acid inhibit pro-apoptotic gene expression in granulosa cells. Taken together, proanthocyanidins and gallic acid may be the most acceptable and optimal antioxidants specifically against ovarian OS and may also be involved in the inhibition of granulosa cell apoptosis in the mouse ovary. © 2017 Wiley Periodicals, Inc.

  12. An Optimization Model For Strategy Decision Support to Select Kind of CPO’s Ship

    Science.gov (United States)

    Suaibah Nst, Siti; Nababan, Esther; Mawengkang, Herman

    2018-01-01

    The selection of marine transport for the distribution of crude palm oil (CPO) is one strategy that can be considered for reducing transport costs. The cost of transporting CPO from one area to a CPO factory located at the port of destination may affect the level of CPO prices and the number of demands. In order to maintain the availability of CPO, a strategy is required to minimize the cost of transport. In this study, the strategy is to select the kind of chartered ship: a barge or a chemical tanker. This study aims to determine an optimization model for strategy decision support in selecting the kind of CPO ship that minimizes transport costs. Because the selection of a ship involves randomness, a two-stage stochastic programming model was used to select the kind of ship. The model can help decision makers select either a barge or a chemical tanker to distribute CPO.
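
    The two-stage logic can be illustrated by comparing expected costs over demand scenarios, with spot shipment of any shortfall as the second-stage recourse. Every number and field name below is invented for illustration; a real two-stage stochastic program would optimise both stages jointly.

```python
# Expected-cost comparison between chartering a barge and a chemical tanker.
scenarios = [(0.3, 800), (0.5, 1000), (0.2, 1400)]   # (probability, CPO tonnes)

ships = {  # hypothetical charter cost, capacity, unit cost, spot penalty
    "barge":           {"charter": 5000, "capacity": 1000, "unit": 4.0, "spot": 9.0},
    "chemical tanker": {"charter": 8000, "capacity": 1500, "unit": 3.5, "spot": 9.0},
}

def expected_cost(ship):
    total = ships[ship]["charter"]                   # first-stage decision cost
    for prob, demand in scenarios:
        carried = min(demand, ships[ship]["capacity"])
        shortfall = demand - carried                 # second-stage recourse
        total += prob * (carried * ships[ship]["unit"]
                         + shortfall * ships[ship]["spot"])
    return total

print(min(ships, key=expected_cost), {s: expected_cost(s) for s in ships})
```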

  13. Broken selection rule in the quantum Rabi model

    NARCIS (Netherlands)

    Forn Diaz, P.; Gonzalez-Romero, E; Harmans, C.J.P.M.; Solano, E; Mooij, J.E.

    2016-01-01

    Understanding the interaction between light and matter is very relevant for fundamental studies of quantum electrodynamics and for the development of quantum technologies. The quantum Rabi model captures the physics of a single atom interacting with a single photon at all regimes of coupling

  14. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficult...

  15. The Applicability of Selected Evaluation Models to Evolving Investigative Designs.

    Science.gov (United States)

    Smith, Nick L.; Hauer, Diane M.

    1990-01-01

    Ten evaluation models are examined in terms of their applicability to investigative, emergent design programs: Stake's portrayal, Wolf's adversary, Patton's utilization, Guba's investigative journalism, Scriven's goal-free, Scriven's modus operandi, Eisner's connoisseurial, Stufflebeam's CIPP, Tyler's objective based, and Levin's cost…

  16. Modeling Selected Climatic Variables in Ibadan, Oyo State, Nigeria ...

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-09-01

    The aim of this study was to fit the modified generalized Burr density function to total rainfall and temperature data obtained from the meteorological unit in the Department of Environmental Modelling and Management of the Forestry Research Institute of Nigeria (FRIN) in Ibadan, Oyo State, Nigeria.

  17. Model Selection for Nondestructive Quantification of Fruit Growth in Pepper

    NARCIS (Netherlands)

    Wubs, A.M.; Ma, Y.T.; Heuvelink, E.; Hemerik, L.; Marcelis, L.F.M.

    2012-01-01

    Quantifying fruit growth can be desirable for several purposes (e.g., prediction of fruit yield and size, or for the use in crop simulation models). The goal of this article was to determine the best sigmoid function to describe fruit growth of pepper (Capsicum annuum) from nondestructive fruit

  18. The Optimal Portfolio Selection Model under g-Expectation

    Directory of Open Access Journals (Sweden)

    Li Li

    2014-01-01

    complicated and sophisticated, the optimal solution turns out to be surprisingly simple: the payoff of a portfolio of two binary claims. I also give the economic meaning of my model and a comparison with the one in the work of Jin and Zhou (2008).

  19. Selecting Tools to Model Integer and Binomial Multiplication

    Science.gov (United States)

    Pratt, Sarah Smitherman; Eddy, Colleen M.

    2017-01-01

    Mathematics teachers frequently provide concrete manipulatives to students during instruction; however, the rationale for using certain manipulatives in conjunction with concepts may not be explored. This article focuses on area models that are currently used in classrooms to provide concrete examples of integer and binomial multiplication. The…

  20. An individual-level selection model for the apparent altruism ...

    Indian Academy of Sciences (India)

    Amotz Zahavi

    2018-02-16

    ... remain solitary when the rest have completed aggregation. Their response to starvation (apparently) is not to become part of an aggregate, but instead to take a chance on a fresh source of food appearing quickly. Modelling shows that, given the right environmental conditions, this can work. (Tarnita et al.

  1. Process chain modeling and selection in an additive manufacturing context

    DEFF Research Database (Denmark)

    Thompson, Mary Kathryn; Stolfi, Alessandro; Mischkot, Michael

    2016-01-01

    This paper introduces a new two-dimensional approach to modeling manufacturing process chains. This approach is used to consider the role of additive manufacturing technologies in process chains for a part with micro scale features and no internal geometry. It is shown that additive manufacturing... evolving fields like additive manufacturing...

  2. Selected Constitutive Models for Simulating the Hygromechanical Response of Wood

    DEFF Research Database (Denmark)

    Frandsen, Henrik Lund

    The present thesis is a compilation of papers. Three of the papers, I, VI and VII, are published in this thesis only, i.e., an introductory paper and two so-called discussion papers. The papers II, III and V have been published in the international journal, Holzforschung. Paper IV is a conference paper presented at the 19th Nordic Seminar on Computational Mechanics, Lund, Sweden, 2006. Paper I: The theories for the phenomena leading to hygromechanical response of wood relate to the orthotropic cellular structure and the hydrophilic and hydrophobic polymers constituting the cells... of wood as a state in the sorption hysteresis space, which is independent of the condition of water vapor in the lumens. Two approaches are developed and tested by implementation into commercial software. Paper VI: The temperature dependencies of the hysteretic multi-Fickian moisture transport model... are discussed. The constitutive moisture transport models are coupled with a heat transport model yielding terms that describe the so-called Dufour and Soret effects, however, with multiple phases and hysteresis included. Paper VII: In this paper the modeling of transverse couplings in creep of wood...

  3. A BAYESIAN NONPARAMETRIC MIXTURE MODEL FOR SELECTING GENES AND GENE SUBNETWORKS.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Yu, Tianwei

    2014-06-01

    It is very challenging to select informative features from tens of thousands of measured features in high-throughput data analysis. Recently, several parametric/regression models have been developed utilizing the gene network information to select genes or pathways strongly associated with a clinical/biological outcome. Alternatively, in this paper, we propose a nonparametric Bayesian model for gene selection incorporating network information. In addition to identifying genes that have a strong association with a clinical outcome, our model can select genes with particular expressional behavior, in which case the regression models are not directly applicable. We show that our proposed model is equivalent to an infinite mixture model, for which we develop a posterior computation algorithm based on Markov chain Monte Carlo (MCMC) methods. We also propose two fast computing algorithms that approximate the posterior simulation with good accuracy but relatively low computational cost. We illustrate our methods on simulation studies and the analysis of Spellman yeast cell cycle microarray data.

  4. Evaluation and comparison of alternative fleet-level selective maintenance models

    International Nuclear Information System (INIS)

    Schneider, Kellie; Richard Cassady, C.

    2015-01-01

    Fleet-level selective maintenance refers to the process of identifying the subset of maintenance actions to perform on a fleet of repairable systems when the maintenance resources allocated to the fleet are insufficient for performing all desirable maintenance actions. The original fleet-level selective maintenance model is designed to maximize the probability that all missions in a future set are completed successfully. We extend this model in several ways. First, we consider a cost-based optimization model and show that a special case of this model maximizes the expected value of the number of successful missions in the future set. We also consider the situation in which one or more of the future missions may be canceled. These models and the original fleet-level selective maintenance optimization models are nonlinear. Therefore, we also consider an alternative model in which the objective function can be linearized. We show that the alternative model is a good approximation to the other models. - Highlights: • Investigate nonlinear fleet-level selective maintenance optimization models. • A cost based model is used to maximize the expected number of successful missions. • Another model is allowed to cancel missions if reliability is sufficiently low. • An alternative model has an objective function that can be linearized. • We show that the alternative model is a good approximation to the other models
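
    The linearized objective discussed above - the expected number of successful missions - is easy to illustrate, since the expectation of a sum of Bernoulli mission outcomes is just the sum of their success probabilities. Below is a brute-force sketch with hypothetical costs and probabilities; the paper itself handles much larger instances with mathematical programming.

    ```python
    # Brute-force sketch of fleet-level selective maintenance: pick the subset
    # of maintenance actions that maximizes the expected number of successful
    # missions subject to a budget. Numbers are illustrative only.
    from itertools import combinations

    BASE_P = [0.70, 0.80, 0.60]   # mission success prob. of each system if unmaintained
    # action: (cost, system index, success probability after maintenance)
    ACTIONS = [(4, 0, 0.95), (3, 1, 0.92), (5, 2, 0.90)]
    BUDGET = 7

    def expected_successes(chosen):
        p = list(BASE_P)
        for idx in chosen:
            cost, system, p_new = ACTIONS[idx]
            p[system] = p_new
        return sum(p)  # expectation of a sum of Bernoulli mission outcomes

    best, best_val = (), expected_successes(())
    for r in range(1, len(ACTIONS) + 1):
        for chosen in combinations(range(len(ACTIONS)), r):
            if sum(ACTIONS[i][0] for i in chosen) <= BUDGET:
                val = expected_successes(chosen)
                if val > best_val:
                    best, best_val = chosen, val

    print("maintain actions:", best,
          "expected successful missions:", round(best_val, 3))
    ```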

  5. The regulation of the air: a hypothesis

    Directory of Open Access Journals (Sweden)

    E. G. Nisbet

    2012-03-01

    We propose the hypothesis that natural selection, acting on the specificity or preference for CO2 over O2 of the enzyme rubisco (ribulose-1,5-bisphosphate carboxylase/oxygenase), has controlled the CO2:O2 ratio of the atmosphere since the evolution of photosynthesis and has also sustained the Earth's greenhouse-set surface temperature. Rubisco works in partnership with the nitrogen-fixing enzyme nitrogenase to control atmospheric pressure. Together, these two enzymes control global surface temperature and indirectly the pH and oxygenation of the ocean. Thus, the co-evolution of these two enzymes may have produced clement conditions on the Earth's surface, allowing life to be sustained.

  6. Model-supported selection of distribution coefficients for performance assessment

    International Nuclear Information System (INIS)

    Ochs, M.; Lothenbach, B.; Shibata, Hirokazu; Yui, Mikazu

    1999-01-01

    A thermodynamic speciation/sorption model is used to illustrate typical problems encountered in the extrapolation of batch-type Kd values to repository conditions. For different bentonite-groundwater systems, the composition of the corresponding equilibrium solutions and the surface speciation of the bentonite is calculated by treating simultaneously solution equilibria of soluble components of the bentonite as well as ion exchange and acid/base reactions at the bentonite surface. Kd values for Cs, Ra, and Ni are calculated by implementing the appropriate ion exchange and surface complexation equilibria in the bentonite model. Based on this approach, hypothetical batch experiments are contrasted with expected conditions in compacted backfill. For each of these scenarios, the variation of Kd values as a function of groundwater composition is illustrated for Cs, Ra, and Ni. The applicability of measured, batch-type Kd values to repository conditions is discussed. (author)

  7. Selected topics in phenomenology of the standard model

    International Nuclear Information System (INIS)

    Roberts, R.G.

    1992-01-01

    We begin with the structure of the proton, which is revealed through deep inelastic scattering of electrons/muons or neutrinos off nucleons. The quark parton model is described, which leads on to the interaction of quarks and gluons - quantum chromodynamics (QCD). From this, parton distributions can be extracted and then fed into the quark parton description of hadron-hadron collisions. In this way we analyse large-p_T jet production, prompt photon production and dilepton, W and Z production (Drell-Yan mechanism), ending with a study of heavy quark production. W and Z physics is then discussed. The various definitions at the tree level of sin^2 θ_W are listed and then the radiative corrections to these are briefly considered. The data from the European Large Electron-Positron storage ring (LEP) then allow limits to be set on the mass of the top quark and the Higgs via these corrections. Standard model predictions for the various Z widths are compared with the latest LEP data. Electroweak effects in e+e- scattering are discussed together with the extraction of the various vector and axial-vector couplings involved. We return to QCD when the production of jets in e+e- is studied. Both the LEP and lower energy data are able to give quantitative estimates of the strong coupling α_s, and the consistency of the various estimates and those from other QCD processes is discussed. The value of α_s(M_Z) actually plays an important role in setting the scale of the possible supersymmetry (SUSY) physics beyond the standard model. Finally, the subject of quark mixing is addressed. How the values of the various CKM matrix elements are derived is discussed, together with a very brief look at charge-parity (CP) violation and how the standard model is standing up to the latest measurements of ε'/ε. (Author)

  8. Barbie selected for QM1 as role models change

    OpenAIRE

    Eitelberg, Mark J.

    1991-01-01

    An article discussing the changing role models and attitudes of young women as reflected in the introduction of Army Barbie, Air Force Barbie and Navy Barbie dolls for children. The author's commentary discusses the differences in each service's approach to the dolls, and their importance as part of the culture, as much an American institution as a toy. The author notes the manufacturer's willingness to accept the attitude that the military is an acceptable career choice for young women...

  9. A Model for Service Life Control of Selected Device Systems

    Directory of Open Access Journals (Sweden)

    Zieja Mariusz

    2014-04-01

    This paper presents a way of determining the distribution of the time at which a diagnostic parameter, which determines the accuracy of maintaining the zero state, exceeds a limit state. For the calculations it was assumed that the diagnostic parameter is the deviation from a nominal value (zero state). The deviation changes as a result of destructive processes which occur during service. To estimate the rate of deviation growth in a probabilistic sense, a difference equation was used from which, after transformation, the Fokker-Planck differential equation was obtained [4, 11]. A particular solution of this equation is the density function of the deviation growth rate, which was used to determine the probability of exceeding the limit state. The so-determined probability was then used to determine the density function of the time of limit state exceedance by the increasing deviation. With the density function of the limit state exceedance time at hand, the service life of a system subject to maladjustment was determined. Finally, a numerical example based on operational data of selected aircraft [weapon] sights is presented. The elaborated method can also be applied to determining the residual life of shipboard devices whose technical state is determined on the basis of analysis of the values of diagnostic parameters.

  10. Probabilistic wind power forecasting with online model selection and warped gaussian process

    International Nuclear Information System (INIS)

    Kou, Peng; Liang, Deliang; Gao, Feng; Gao, Lin

    2014-01-01

    Highlights: • A new online ensemble model for probabilistic wind power forecasting. • Quantifying the non-Gaussian uncertainties in wind power. • Online model selection that tracks the time-varying characteristic of wind generation. • Dynamically altering the input features. • Recursive update of base models. - Abstract: Based on online model selection and the warped Gaussian process (WGP), this paper presents an ensemble model for probabilistic wind power forecasting. This model provides non-Gaussian predictive distributions, which quantify the non-Gaussian uncertainties associated with wind power. In order to follow the time-varying characteristics of wind generation, multiple time-dependent base forecasting models and an online model selection strategy are established, thus adaptively selecting the most probable base model for each prediction. WGP is employed as the base model, which handles the non-Gaussian uncertainties in wind power series. Furthermore, a regime switch strategy is designed to modify the input feature set dynamically, thereby enhancing the adaptiveness of the model. In an online learning framework, the base models should also be time adaptive. To achieve this, a recursive algorithm is introduced, thus permitting the online updating of WGP base models. The proposed model has been tested on actual data collected from both single and aggregated wind farms.
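
    The core idea of online model selection - re-evaluating the base models as data arrive and forecasting with the currently best one - can be illustrated without the warped-GP machinery. In the sketch below, two toy base models (persistence and a moving average) stand in for the WGP base models, and selection is by lowest recent squared error; all details are assumptions for illustration.

    ```python
    # Sketch of online model selection: at each step, forecast with the base
    # model that had the lowest squared error over a recent window.
    import numpy as np

    rng = np.random.default_rng(0)
    y = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.standard_normal(300)

    def persistence(hist):      return hist[-1]
    def moving_average(hist):   return np.mean(hist[-5:])

    models = [persistence, moving_average]
    window, errors, forecasts = 20, [[], []], []

    for t in range(5, len(y) - 1):
        hist = y[:t + 1]
        preds = [m(hist) for m in models]
        # select the base model with the best recent track record
        recent = [np.mean(e[-window:]) if e else 0.0 for e in errors]
        best = int(np.argmin(recent))
        forecasts.append(preds[best])
        # update each model's running error record
        for k, p in enumerate(preds):
            errors[k].append((p - y[t + 1]) ** 2)

    mse = np.mean((np.array(forecasts) - y[6:]) ** 2)
    print("mean squared error of the online ensemble:", round(mse, 4))
    ```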

  11. How do Economic Growth Asymmetry and Inflation Expectations Affect Fisher Hypothesis and Fama’s Proxy Hypothesis?

    Directory of Open Access Journals (Sweden)

    Yuan-Ming Lee

    2017-12-01

    Based on the threshold panel data model, this study employs the quarterly panel data of 38 countries between 1981 and 2014 to test whether economic growth asymmetry, expected inflation, and unexpected inflation affect the Fisher hypothesis and Fama’s proxy hypothesis. The empirical results show the following: (1) When the real economic growth rate is greater than the threshold (-0.009), the Fisher hypothesis is supported. (2) When the real economic growth rate is less than the threshold (-0.009), two scenarios hold true: before real variables are included, the Fisher hypothesis is rejected; and when real variables are included, real economic growth is negative, inflation is expected, and thus Fama’s hypothesis is supported.
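
    The regime-splitting logic of a threshold model can be sketched directly: observations are partitioned by whether growth exceeds the threshold, and the Fisher relation (a unit slope of the nominal rate on inflation) is checked within each regime. The data below are simulated, with only the threshold value (-0.009) taken from the abstract.

    ```python
    # Threshold-regression sketch of the Fisher hypothesis on simulated data.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    growth = rng.normal(0.02, 0.03, n)
    inflation = rng.normal(0.03, 0.02, n)
    # simulated "truth": Fisher holds (slope ~ 1) only in the high-growth regime
    nominal = np.where(growth > -0.009,
                       0.01 + 1.0 * inflation,
                       0.02 + 0.3 * inflation) + rng.normal(0, 0.005, n)

    THRESHOLD = -0.009
    for name, mask in [("growth > threshold", growth > THRESHOLD),
                       ("growth <= threshold", growth <= THRESHOLD)]:
        X = np.column_stack([np.ones(mask.sum()), inflation[mask]])
        beta, *_ = np.linalg.lstsq(X, nominal[mask], rcond=None)
        print(f"{name}: slope on inflation = {beta[1]:.2f} "
              f"(Fisher hypothesis ~ slope of 1)")
    ```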

  12. On extended liability in a model of adverse selection

    OpenAIRE

    Dieter Balkenborg

    2004-01-01

    We consider a model where a judgment-proof firm needs finance to realize a project. This project might cause an environmental hazard with a probability that is the private knowledge of the firm. Thus there is asymmetric information with respect to the environmental riskiness of the project. We consider the implications of a simple joint and strict liability rule on the lender and the firm where, in case of a damage, the lender is responsible for that part of the liability which the judgment-p...

  13. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.
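
    The paper's factorized information criterion is not available in standard libraries, so as a rough stand-in the sketch below shows the same kind of automatic model selection - choosing the number of clusters by an information criterion (here BIC) with a Gaussian mixture on synthetic node attributes.

    ```python
    # Automatic selection of the cluster number via BIC (a stand-in for the
    # factorized information criterion of the paper).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    # synthetic "attributed nodes": three attribute clusters in 2-D
    X = np.vstack([rng.normal(c, 0.5, size=(100, 2)) for c in (0, 3, 6)])

    bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
           for k in range(1, 7)}
    best_k = min(bic, key=bic.get)
    print("BIC per k:", {k: round(v, 1) for k, v in bic.items()})
    print("selected number of clusters:", best_k)
    ```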

  14. Bayesian Variable Selection in Multilevel Item Response Theory Models with Application in Genomics.

    Science.gov (United States)

    Fragoso, Tiago M; de Andrade, Mariza; Pereira, Alexandre C; Rosa, Guilherme J M; Soler, Júlia M P

    2016-04-01

    The goal of this paper is to present an implementation of stochastic search variable selection (SSVS) for a multilevel model from item response theory (IRT). As experimental settings get more complex and models are required to integrate multiple (and sometimes massive) sources of information, a model that can jointly summarize and select the most relevant characteristics can provide better interpretation and a deeper insight into the problem. A multilevel IRT model recently proposed in the literature for modeling multifactorial diseases is extended to perform variable selection in the presence of thousands of covariates using SSVS. We derive the conditional distributions required for such a task, as well as an acceptance-rejection step that allows for SSVS in high-dimensional settings using a Markov chain Monte Carlo algorithm. We validate the variable selection procedure through simulation studies, and illustrate its application on a study with genetic markers associated with the metabolic syndrome. © 2016 WILEY PERIODICALS, INC.
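
    A minimal version of SSVS for a plain linear model (not the paper's multilevel IRT model) conveys the mechanics: each coefficient gets a spike-and-slab prior, and a Gibbs sampler alternates between updating the coefficients and the binary inclusion indicators. All priors and data below are illustrative assumptions.

    ```python
    # Toy stochastic search variable selection (SSVS) for linear regression.
    import numpy as np
    from scipy.special import expit

    rng = np.random.default_rng(3)
    n, p, sigma2 = 200, 8, 1.0
    X = rng.standard_normal((n, p))
    beta_true = np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0])
    y = X @ beta_true + rng.normal(0, np.sqrt(sigma2), n)

    tau0, tau1, prior_inc = 0.01, 2.0, 0.5   # spike sd, slab sd, P(gamma_j = 1)
    gamma = np.ones(p, dtype=int)
    inclusion = np.zeros(p)
    n_iter, burn = 2000, 500

    for it in range(n_iter):
        # beta | gamma, y : conjugate Gaussian update
        D_inv = np.diag(1.0 / np.where(gamma == 1, tau1**2, tau0**2))
        Sigma = np.linalg.inv(X.T @ X / sigma2 + D_inv)
        mu = Sigma @ X.T @ y / sigma2
        beta = rng.multivariate_normal(mu, Sigma)
        # gamma_j | beta_j : Bernoulli via the (log) spike/slab density ratio
        log_odds = (np.log(prior_inc / (1 - prior_inc))
                    + (-0.5 * (beta / tau1) ** 2 - np.log(tau1))
                    - (-0.5 * (beta / tau0) ** 2 - np.log(tau0)))
        gamma = (rng.random(p) < expit(log_odds)).astype(int)
        if it >= burn:
            inclusion += gamma

    print("posterior inclusion probabilities:",
          np.round(inclusion / (n_iter - burn), 2))
    ```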

  15. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    International Nuclear Information System (INIS)

    Zhou, Z; Folkert, M; Wang, J

    2016-01-01

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method uses the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules; the solution with the highest utility is chosen as the optimal one. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters, each Pareto set containing 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
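
    The selection step can be illustrated apart from the radiomics model itself: given candidate (sensitivity, specificity) pairs, keep the non-dominated ones and rank them by a utility. The weighted sum below is a simple stand-in for the paper's evidential-reasoning rules, and all numbers are hypothetical.

    ```python
    # Pick one solution from a Pareto set: filter to non-dominated points,
    # then maximize a pre-set utility. Values are illustrative only.
    solutions = [(0.90, 0.55), (0.85, 0.70), (0.80, 0.78),
                 (0.70, 0.80), (0.60, 0.85), (0.75, 0.60)]

    def pareto(points):
        # keep points not dominated in both sensitivity and specificity
        return [p for p in points
                if not any(q[0] >= p[0] and q[1] >= p[1] and q != p
                           for q in points)]

    weights = (0.6, 0.4)  # pre-set preference for sensitivity vs specificity
    front = pareto(solutions)
    best = max(front, key=lambda s: weights[0] * s[0] + weights[1] * s[1])
    print("Pareto front:", front)
    print("selected solution (sens, spec):", best)
    ```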

  16. The Carnivore Connection Hypothesis: Revisited

    Directory of Open Access Journals (Sweden)

    Jennie C. Brand-Miller

    2012-01-01

    The “Carnivore Connection” hypothesizes that, during human evolution, a scarcity of dietary carbohydrate in diets with low plant:animal subsistence ratios led to insulin resistance providing a survival and reproductive advantage, with selection of genes for insulin resistance. The selection pressure was relaxed at the beginning of the Agricultural Revolution when large quantities of cereals first entered human diets. The “Carnivore Connection” explains the high prevalence of intrinsic insulin resistance and type 2 diabetes in populations that transition rapidly from traditional diets with a low glycemic load to the high-carbohydrate, high-glycemic-index diets that characterize modern diets. Selection pressure has been relaxed longest in European populations, explaining a lower prevalence of insulin resistance and type 2 diabetes, despite recent exposure to famine and food scarcity. Increasing obesity and habitual consumption of high-glycemic-load diets worsens insulin resistance and increases the risk of type 2 diabetes in all populations.

  17. Selective Cooperation in Early Childhood - How to Choose Models and Partners.

    Directory of Open Access Journals (Sweden)

    Jonas Hermes

    Cooperation is essential for human society, and children engage in cooperation from early on. It is unclear, however, how children select their partners for cooperation. We know that children choose selectively whom to learn from (e.g., preferring reliable over unreliable models) on a rational basis. The present study investigated whether children (and adults) also choose their cooperative partners selectively and what model characteristics they regard as important for cooperative partners and for informants about novel words. Three- and four-year-old children (N = 64) and adults (N = 14) saw contrasting pairs of models differing either in physical strength or in accuracy (in labeling known objects). Participants then performed different tasks (cooperative problem solving and word learning) requiring the choice of a partner or informant. Both children and adults chose their cooperative partners selectively. Moreover, they showed the same pattern of selective model choice, regarding a wide range of model characteristics as important for cooperation (preferring both the strong and the accurate model for strength-requiring cooperation tasks), but only prior knowledge as important for word learning (preferring the knowledgeable but not the strong model for word learning tasks). Young children's selective model choice thus reveals an early rational competence: They infer characteristics from past behavior and flexibly consider what characteristics are relevant for certain tasks.

  18. A hypothesis and a case-study projection of an influence of MJO modulation on boreal-summer tropical cyclogenesis in a warmer climate with a global non-hydrostatic model: a transition toward the central Pacific?

    Directory of Open Access Journals (Sweden)

    KAZUYOSHI eOOUCHI

    2014-02-01

    The eastward shift of the enhanced activity of tropical cyclones to the central Pacific is a robust projection result for a future warmer climate, and is shared by most of the state-of-the-art climate models. The shift has been argued to originate from the underlying El Niño-like sea-surface temperature (SST) forcing. This study explores the possibility that the change in the activity of the Madden-Julian Oscillation (MJO) can be an additional, if not alternative, contributor to the shift, using the dataset of Yamada et al. (2010) from a global non-hydrostatic 14-km grid mesh time-slice experiment for a boreal-summer case. Within the case-study framework, we develop the hypothesis that an eastward shift of the high-activity area of the MJO, which manifests itself as a significant intra-seasonal modulation of the enhanced precipitation, is associated with increased tropical cyclogenesis potential over the North central Pacific by regulating cyclonic relative vorticity and vertical shear. In contrast, the North Indian Ocean and maritime continent undergo relatively diminished genesis potential. An implication is that uncertainty in future tropical cyclogenesis in some parts of the Pacific and other ocean basins could be reduced if the projection of the MJO and its connection with the underlying SST environment can be better understood and constrained by the improvement of climate models.

  19. Metabolic syndrome--neurotrophic hypothesis.

    Science.gov (United States)

    Hristova, M; Aloe, L

    2006-01-01

    An increasing number of researchers of the metabolic syndrome assume that many mechanisms are involved in its complex pathophysiology, such as increased sympathetic activity, disorders of the hypothalamo-pituitary-adrenal axis, the action of chronic subclinical infections, proinflammatory cytokines, and the effect of adipocytokines or psychoemotional stress. An increasing body of scientific research in this field confirms the role of the neurotrophins and mastocytes in the pathogenesis of inflammatory and immune diseases. Recently it has been proved that neurotrophins and mastocytes have metabotrophic effects and take part in carbohydrate and lipid metabolism. In the early stage of the metabolic syndrome we established a statistically significant increase in the plasma levels of nerve growth factor. In the generalized stage the plasma levels of the neurotrophins were statistically decreased in comparison to those in healthy controls. We consider that the neurotrophin deficit is likely to play a significant pathogenic role in the development of the metabolic, anthropometric and vascular manifestations of the generalized stage of MetSyn. We suggest a hypothesis for the etiopathogenesis of the metabolic syndrome based on neuro-immuno-endocrine interactions. The specific pathogenic pathways of MetSyn development include: (1) increased tissue and plasma levels of the proinflammatory cytokines interleukin-1 (IL-1), interleukin-6 (IL-6) and tumor necrosis factor-alpha (TNF-alpha) caused by inflammatory and/or emotional distress; (2) increased plasma levels of the neurotrophin nerve growth factor (NGF) caused by the high IL-1, IL-6 and TNF-alpha levels; (3) high plasma levels of NGF which enhance activation of: the autonomous nervous system - vegetodystonia (disbalance of neurotransmitters); Neuropeptide Y (NPY) - enhanced feeding, obesity and increased leptin plasma levels; the hypothalamo-pituitary-adrenal axis - increased corticotropin-releasing hormone (CRH) and

  20. Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

    Science.gov (United States)

    Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H

    2014-12-30

    For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for clinical cancer registries with their complex event patterns. Copyright © 2014 John Wiley & Sons
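
    The coupling device - combining a covariate's score-test p-values across models with Fisher's method, where -2 Σ ln p_i follows a chi-squared distribution with 2k degrees of freedom under the null - can be sketched as follows; the p-values are made up for illustration.

    ```python
    # Combine a covariate's per-model p-values with Fisher's method, then
    # select covariates whose combined p-value passes a threshold.
    import math
    from scipy.stats import chi2

    def fisher_combined(pvals):
        stat = -2 * sum(math.log(p) for p in pvals)
        return chi2.sf(stat, df=2 * len(pvals))

    # hypothetical p-values across, e.g., three cause-specific Cox models
    covariate_pvals = {
        "age":       [0.01, 0.03, 0.20],
        "treatment": [0.40, 0.02, 0.04],
        "biomarker": [0.70, 0.55, 0.60],
    }
    for name, ps in covariate_pvals.items():
        p = fisher_combined(ps)
        print(f"{name}: combined p = {p:.4f}",
              "-> selected" if p < 0.05 else "")
    ```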

  1. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used, or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Streams Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
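
    A minimal sketch of the backward elimination loop described above, using scikit-learn on synthetic data: at each step the forest is refit, the out-of-bag (OOB) accuracy is recorded, and the least important variable is dropped. (As the abstract cautions, OOB estimates taken inside such a loop can be optimistically biased.)

    ```python
    # Backward elimination for a random forest, tracked by OOB accuracy.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=500, n_features=20,
                               n_informative=8, random_state=0)
    features = list(range(X.shape[1]))

    while len(features) > 5:
        rf = RandomForestClassifier(n_estimators=200, oob_score=True,
                                    random_state=0).fit(X[:, features], y)
        print(f"{len(features):2d} features: OOB accuracy = {rf.oob_score_:.3f}")
        # drop the least important remaining feature
        features.pop(int(np.argmin(rf.feature_importances_)))
    ```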

  2. Impacts of selected dietary polyphenols on caramelization in model systems.

    Science.gov (United States)

    Zhang, Xinchen; Chen, Feng; Wang, Mingfu

    2013-12-15

    This study investigated the impacts of six dietary polyphenols (phloretin, naringenin, quercetin, epicatechin, chlorogenic acid and rosmarinic acid) on fructose caramelization in thermal model systems at either neutral or alkaline pH. These polyphenols were found to increase the browning intensity and antioxidant capacity of caramel. The chemical reactions in the system of sugar and polyphenol, which include the formation of polyphenol-sugar adducts, were found to be partially responsible for the formation of brown pigments and heat-induced antioxidants, based on instrumental analysis. In addition, rosmarinic acid was demonstrated to significantly inhibit the formation of 5-hydroxymethylfurfural (HMF). Thus this research added to the efforts of controlling caramelization by dietary polyphenols under thermal conditions, and provided some evidence for proposing dietary polyphenols as functional ingredients to modify caramel colour and bioactivity as well as to lower the amount of heat-induced contaminants such as HMF. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Selected topics in phenomenology of the standard model

    International Nuclear Information System (INIS)

    Roberts, R.G.

    1991-01-01

    These lectures cover some aspects of phenomenology of topics in high energy physics which advertise the success of the standard model in dealing with a wide variety of experimental data. First we begin with a look at deep inelastic scattering. This tells us about the structure of the nucleon, which is understood in terms of the SU(3) gauge theory of QCD, which then allows the information on quark and gluon distributions to be carried over to other 'hard' processes such as hadronic production of jets. Recent data on electroweak processes can estimate the value of sin^2 θ_W to a precision where the inclusion of radiative corrections allows bounds to be placed on the mass of the top quark. Electroweak effects arise in e+e- collisions, but we first present a review of the recent history of this topic within the context of QCD. We bring the subject up to date with a look at the physics at (or near) the Z pole, where the measurement of asymmetries can give more information. We look at the conventional description of quark mixing by the CKM matrix and see how the mixing parameters are systematically being extracted from a variety of reactions and decays. In turn, the values can be used to set bounds on the top quark mass. The matter of CP violation in weak interactions is addressed within the context of the standard model, recent data on ε'/ε being the source of current excitement. Finally, we look at the theoretical description and experimental efforts to search for the top quark. (author)

  4. Hypothesis Testing as an Act of Rationality

    Science.gov (United States)

    Nearing, Grey

    2017-04-01

    Statistical hypothesis testing is ad hoc in two ways. First, setting probabilistic rejection criteria is, as Neyman (1957) put it, an act of will rather than an act of rationality. Second, physical theories like conservation laws do not inherently admit probabilistic predictions, and so we must use what are called epistemic bridge principles to connect model predictions with the actual methods of hypothesis testing. In practice, these bridge principles are likelihood functions, error functions, or performance metrics. I propose that the reason we are faced with these problems is that we have historically failed to account for a fundamental component of basic logic - namely, the portion of logic that explains how epistemic states evolve in the presence of empirical data. This component of Cox's (1946) calculus of logic is called information theory (Knuth, 2005), and adding information theory to our hypothetico-deductive account of science yields straightforward solutions to both of the above problems. This also yields a straightforward method for dealing with Popper's (1963) problem of verisimilitude by facilitating a quantitative approach to measuring process isomorphism. In practice, this involves data assimilation. Finally, information theory allows us to reliably bound measures of epistemic uncertainty, thereby avoiding the problem of Bayesian incoherency under misspecified priors (Grünwald, 2006). I therefore propose solutions to four of the fundamental problems inherent in both hypothetico-deductive and/or Bayesian hypothesis testing. - Neyman (1957) Inductive Behavior as a Basic Concept of Philosophy of Science. - Cox (1946) Probability, Frequency and Reasonable Expectation. - Knuth (2005) Lattice Duality: The Origin of Probability and Entropy. - Grünwald (2006) Bayesian Inconsistency under Misspecification. - Popper (1963) Conjectures and Refutations: The Growth of Scientific Knowledge.

  5. gamboostLSS: An R Package for Model Building and Variable Selection in the GAMLSS Framework

    OpenAIRE

    Hofner, Benjamin; Mayr, Andreas; Schmid, Matthias

    2014-01-01

    Generalized additive models for location, scale and shape are a flexible class of regression models that allow multiple parameters of a distribution function, such as the mean and the standard deviation, to be modeled simultaneously. With the R package gamboostLSS, we provide a boosting method to fit these models. Variable selection and model choice are naturally available within this regularized regression framework. To introduce and illustrate the R package gamboostLSS and its infrastructure, we...

  6. Evaluating the influence of selected parameters on sensitivity of a numerical model of solidification

    OpenAIRE

    N. Sczygiol; R. Dyja

    2007-01-01

    The presented paper contains an evaluation of the influence of selected parameters on the sensitivity of a numerical model of solidification. The investigated model is based on the heat conduction equation with a heat source and is solved using the finite element method (FEM). The model is built with the use of the enthalpy formulation for solidification and an intermediate solid fraction growth model. The model sensitivity is studied with the use of the Morris method, which is one of the global sensitivity methods...

  7. The oxidative hypothesis of senescence

    Directory of Open Access Journals (Sweden)

    Gilca M

    2007-01-01

    The oxidative hypothesis of senescence, since its origin in 1956, has garnered significant evidence and growing support among scientists for the notion that free radicals play an important role in ageing, either as "damaging" molecules or as signaling molecules. Age-increasing oxidative injuries induced by free radicals, higher susceptibility to oxidative stress in short-lived organisms, genetic manipulations that alter both oxidative resistance and longevity, and the anti-ageing effect of caloric restriction and intermittent fasting are a few examples of accepted scientific facts that support the oxidative theory of senescence. Though not completely understood due to the complex "network" of redox regulatory systems, the implication of oxidative stress in the ageing process is now well documented. Moreover, it is compatible with other current ageing theories (e.g., those implicating the mitochondrial damage/mitochondrial-lysosomal axis, stress-induced premature senescence, biological "garbage" accumulation, etc.). This review is intended to summarize and critically discuss the redox mechanisms involved during the ageing process: sources of oxidant agents in ageing (mitochondrial: the electron transport chain and the nitric oxide synthase reaction; non-mitochondrial: the Fenton reaction, microsomal cytochrome P450 enzymes, peroxisomal β-oxidation and the respiratory burst of phagocytic cells), antioxidant changes in ageing (enzymatic: superoxide dismutase, glutathione reductase, glutathione peroxidase, catalase; non-enzymatic: glutathione, ascorbate, urate, bilirubin, melatonin, tocopherols, carotenoids, ubiquinol), alteration of oxidative damage repairing mechanisms, and the role of free radicals as signaling molecules in ageing.

  8. The venom optimization hypothesis revisited.

    Science.gov (United States)

    Morgenstern, David; King, Glenn F

    2013-03-01

    Animal venoms are complex chemical mixtures that typically contain hundreds of proteins and non-proteinaceous compounds, resulting in a potent weapon for prey immobilization and predator deterrence. However, because venoms are protein-rich, they come with a high metabolic price tag. The metabolic cost of venom is sufficiently high to result in secondary loss of venom whenever its use becomes non-essential to survival of the animal. The high metabolic cost of venom leads to the prediction that venomous animals may have evolved strategies for minimizing venom expenditure. Indeed, various behaviors have been identified that appear consistent with frugality of venom use. This has led to formulation of the "venom optimization hypothesis" (Wigger et al. (2002) Toxicon 40, 749-752), also known as "venom metering", which postulates that venom is metabolically expensive and therefore used frugally through behavioral control. Here, we review the available data concerning economy of venom use by animals with either ancient or more recently evolved venom systems. We conclude that the convergent nature of the evidence in multiple taxa strongly suggests the existence of evolutionary pressures favoring frugal use of venom. However, there remains an unresolved dichotomy between this economy of venom use and the lavish biochemical complexity of venom, which includes a high degree of functional redundancy. We discuss the evidence for biochemical optimization of venom as a means of resolving this conundrum. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Alien abduction: a medical hypothesis.

    Science.gov (United States)

    Forrest, David V

    2008-01-01

    In response to a new psychological study of persons who believe they have been abducted by space aliens that found that sleep paralysis, a history of being hypnotized, and preoccupation with the paranormal and extraterrestrial were predisposing experiences, I noted that many of the frequently reported particulars of the abduction experience bear more than a passing resemblance to medical-surgical procedures and propose that experience with these may also be contributory. There is the altered state of consciousness, uniformly colored figures with prominent eyes, in a high-tech room under a round bright saucerlike object; there is nakedness, pain and a loss of control while the body's boundaries are being probed; and yet the figures are thought benevolent. No medical-surgical history was apparently taken in the above mentioned study, but psychological laboratory work evaluated false memory formation. I discuss problems in assessing intraoperative awareness and ways in which the medical hypothesis could be elaborated and tested. If physicians are causing this syndrome in a percentage of patients, we should know about it; and persons who feel they have been abducted should be encouraged to inform their surgeons and anesthesiologists without challenging their beliefs.

  10. Model selection in Bayesian segmentation of multiple DNA alignments.

    Science.gov (United States)

    Oldmeadow, Christopher; Keith, Jonathan M

    2011-03-01

    The analysis of multiple sequence alignments is allowing researchers to glean valuable insights into evolution, as well as identify genomic regions that may be functional, or discover novel classes of functional elements. Understanding the distribution of conservation levels that constitutes the evolutionary landscape is crucial to distinguishing functional regions from non-functional. Recent evidence suggests that a binary classification of evolutionary rates is inappropriate for this purpose and finds only highly conserved functional elements. Given that the distribution of evolutionary rates is multi-modal, determining the number of modes is of paramount concern. Through simulation, we evaluate the performance of a number of information criterion approaches derived from MCMC simulations in determining the dimension of a model. We utilize a deviance information criterion (DIC) approximation that is more robust than the approximations from other information criteria, and show our information criteria approximations do not produce superfluous modes when estimating conservation distributions under a variety of circumstances. We analyse the distribution of conservation for a multiple alignment comprising four primate species and mouse, and repeat this on two additional multiple alignments of similar species. We find evidence of six distinct classes of evolutionary rates that appear to be robust to the species used. Source code and data are available at http://dl.dropbox.com/u/477240/changept.zip.
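
    For readers unfamiliar with the deviance information criterion underlying the mode-count selection, the textbook version is DIC = Dbar + pD with pD = Dbar - D(theta_bar); the paper uses a more robust approximation. Below is a toy computation for a normal-mean model, with all data simulated.

    ```python
    # Textbook DIC from (closed-form) posterior draws for a normal-mean model.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    y = rng.normal(1.0, 1.0, 50)

    # posterior of the mean under a flat prior, sigma known: N(ybar, 1/n)
    draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), 5000)

    def deviance(mu):
        return -2 * norm.logpdf(y, loc=mu, scale=1.0).sum()

    D = np.array([deviance(m) for m in draws])
    Dbar, Dhat = D.mean(), deviance(draws.mean())
    pD = Dbar - Dhat                     # effective number of parameters
    print(f"DIC = {Dbar + pD:.1f} (pD = {pD:.2f}, expected to be near 1)")
    ```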

  11. Multi-scale habitat selection modeling: A review and outlook

    Science.gov (United States)

    Kevin McGarigal; Ho Yi Wan; Kathy A. Zeller; Brad C. Timm; Samuel A. Cushman

    2016-01-01

    Scale is the lens that focuses ecological relationships. Organisms select habitat at multiple hierarchical levels and at different spatial and/or temporal scales within each level. Failure to properly address scale dependence can result in incorrect inferences in multi-scale habitat selection modeling studies.

  12. Augmented Self-Modeling as a Treatment for Children with Selective Mutism.

    Science.gov (United States)

    Kehle, Thomas J.; Madaus, Melissa R.; Baratta, Victoria S.; Bray, Melissa A.

    1998-01-01

    Describes the treatment of three children experiencing selective mutism. The procedure utilized incorporated self-modeling, mystery motivators, self-reinforcement, stimulus fading, spacing, and antidepressant medication. All three children evidenced a complete cessation of selective mutism and maintained their treatment gains at follow-up.…

  13. Towards a pro-health food-selection model for gatekeepers in ...

    African Journals Online (AJOL)

    The purpose of this study was to develop a pro-health food selection model for gatekeepers of Bulawayo high-density suburbs in Zimbabwe. Gatekeepers in five suburbs constituted the study population from which a sample of 250 subjects was randomly selected. Of the total respondents (N= 182), 167 had their own ...

  14. A robust multi-objective global supplier selection model under currency fluctuation and price discount

    Science.gov (United States)

    Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman

    2017-11-01

    A robust supplier selection problem is proposed in a scenario-based approach, where demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear program is developed; then, the robust counterpart of the proposed mixed integer linear program is presented using the recent extension in robust optimization theory. The decision variables are determined, respectively, by a two-stage stochastic planning model, by a robust stochastic optimization planning model which integrates the worst-case scenario into the modeling approach, and finally by an equivalent deterministic planning model. An experimental study is carried out to compare the performance of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties we should account for them in advance in our planning. In our case study, different suppliers were selected due to these uncertainties, and since supplier selection is a strategic decision, it is crucial to consider these uncertainties in the planning approach.

  15. Fuzzy decision-making: a new method in model selection via various validity criteria

    International Nuclear Information System (INIS)

    Shakouri Ganjavi, H.; Nikravesh, K.

    2001-01-01

    Modeling is considered the first step in scientific investigations. Several alternative models may be candidates for expressing a phenomenon. Scientists use various criteria to select one model from among the competing models. Based on the solution of a Fuzzy Decision-Making problem, this paper proposes a new method of model selection. The method enables the scientist to apply all desired validity criteria systematically, by defining a proper Possibility Distribution Function for each criterion. Finally, minimization of a utility function composed of the Possibility Distribution Functions determines the best selection. The method is illustrated through a modeling example for the Average Daily Time Duration of Electrical Energy Consumption in Iran
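
    The mechanics can be sketched with hypothetical criteria: each validity criterion is encoded as a possibility-style function mapping a model's raw score into [0, 1], and models are ranked by an aggregate utility (here a conservative min aggregation stands in for the paper's utility minimization).

    ```python
    # Fuzzy-style model selection over two illustrative validity criteria.
    def poss_fit(r2):          # goodness of fit: better closer to 1
        return max(0.0, min(1.0, (r2 - 0.5) / 0.5))

    def poss_parsimony(k):     # fewer parameters preferred
        return max(0.0, 1.0 - 0.1 * k)

    models = {  # name: (R^2, number of parameters) -- hypothetical scores
        "AR(2)":      (0.88, 3),
        "AR(5)":      (0.91, 6),
        "polynomial": (0.95, 10),
    }

    def utility(r2, k):
        # conservative aggregation: a model is only as good as its worst criterion
        return min(poss_fit(r2), poss_parsimony(k))

    for name, (r2, k) in models.items():
        print(f"{name}: utility = {utility(r2, k):.2f}")
    print("selected model:", max(models, key=lambda m: utility(*models[m])))
    ```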

  16. Towards a pro-health food-selection model for gatekeepers in ...

    African Journals Online (AJOL)

    The MINITAB Release 10.2 computer package was used to extract dimensions underlying gatekeepers' food selection. Two models, both of which were not health oriented, were determined from the study findings. Family networks received the ...

  17. Unraveling the sub-processes of selective attention: insights from dynamic modeling and continuous behavior.

    Science.gov (United States)

    Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan

    2015-11-01

    Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.

  18. Decision support model for selecting and evaluating suppliers in the construction industry

    Directory of Open Access Journals (Sweden)

    Fernando Schramm

    2012-12-01

    A structured evaluation of the construction industry's suppliers, considering aspects which make their quality and credibility evident, can be a strategic tool to manage this specific supply chain. This study proposes a multi-criteria decision model for supplier selection in the construction industry, as well as an efficient evaluation procedure for the selected suppliers. The model is based on the SMARTER (Simple Multi-Attribute Rating Technique Exploiting Ranking) method, and its main contribution is a new approach to structuring the process of supplier selection, establishing explicit strategic policies on which the company management system relies to make the supplier selection. This model was applied to a civil construction company in Brazil, and the main results demonstrate the efficiency of the proposed model. This study allowed the development of an approach for the construction industry which was able to provide a better relationship among its managers, suppliers and partners.
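
    One concrete piece of SMARTER that is easy to show is its weighting step: criteria ranked by importance receive rank-order-centroid (ROC) weights w_i = (1/K) Σ_{j=i..K} 1/j, and alternatives are scored by a weighted sum. The criteria and scores below are illustrative, not those elicited in the study.

    ```python
    # SMARTER-style scoring with rank-order-centroid (ROC) weights.
    def roc_weights(k):
        return [sum(1.0 / j for j in range(i, k + 1)) / k
                for i in range(1, k + 1)]

    criteria = ["quality", "delivery reliability", "price"]  # most to least important
    weights = roc_weights(len(criteria))                     # ~[0.61, 0.28, 0.11]

    suppliers = {  # hypothetical 0-100 scores per criterion, in ranked order
        "supplier A": [90, 70, 60],
        "supplier B": [75, 85, 80],
        "supplier C": [60, 90, 95],
    }
    scores = {s: sum(w * v for w, v in zip(weights, vals))
              for s, vals in suppliers.items()}
    print({s: round(v, 1) for s, v in scores.items()})
    print("selected:", max(scores, key=scores.get))
    ```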

  19. Hypothesis Testing of Parameters for Ordinary Linear Circular Regression

    Directory of Open Access Journals (Sweden)

    Abdul Ghapor Hussin

    2006-07-01

    This paper presents hypothesis testing of the parameters of an ordinary linear circular regression model, assuming the circular random error is distributed as a von Mises distribution. The main interest is in testing the intercept and slope parameters of the regression line. As an illustration, this hypothesis testing is used in analyzing wind and wave direction data recorded by two different techniques: an HF radar system and an anchored wave buoy.
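
    A sketch of the kind of test described: fit the circular regression by maximum likelihood under von Mises errors, then compare the full model against the null of zero slope with a likelihood-ratio statistic. The data are simulated stand-ins for the radar and buoy directions.

    ```python
    # Likelihood-ratio test for the slope of a circular regression with
    # von Mises errors, fitted by numerical maximum likelihood.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import i0
    from scipy.stats import chi2

    rng = np.random.default_rng(5)
    x = rng.uniform(0, 2 * np.pi, 200)
    y = np.mod(0.3 + 0.9 * x + rng.vonmises(0.0, 5.0, 200), 2 * np.pi)

    def negloglik(params, slope_fixed=None):
        if slope_fixed is None:
            a, b, log_kappa = params
        else:
            (a, log_kappa), b = params, slope_fixed
        kappa = np.exp(log_kappa)          # keep concentration positive
        resid = y - (a + b * x)            # cos() handles angle wrapping
        return -np.sum(kappa * np.cos(resid) - np.log(2 * np.pi * i0(kappa)))

    full = minimize(negloglik, x0=[0.0, 1.0, 0.0], method="Nelder-Mead")
    null = minimize(negloglik, x0=[0.0, 0.0], args=(0.0,), method="Nelder-Mead")
    lr = 2 * (null.fun - full.fun)
    print(f"LR statistic = {lr:.1f}, p = {chi2.sf(lr, df=1):.3g}")
    ```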

  20. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.

    2006-01-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
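
    A compact sketch of AAKR with one of the traditional selection schemes (min-max, which retains the training vectors holding each variable's extremes): the query is estimated as a Gaussian-kernel-weighted average of memory vectors. Data and bandwidth are arbitrary choices for illustration.

    ```python
    # Auto-associative kernel regression (AAKR) with min-max vector selection.
    import numpy as np

    rng = np.random.default_rng(6)
    train = rng.normal(size=(500, 3))        # historical (memory) vectors

    def min_max_select(X):
        # keep vectors containing the min or max of any variable
        keep = set()
        for j in range(X.shape[1]):
            keep.add(int(np.argmin(X[:, j])))
            keep.add(int(np.argmax(X[:, j])))
        return X[sorted(keep)]

    def aakr_predict(memory, query, bandwidth=1.0):
        d2 = np.sum((memory - query) ** 2, axis=1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))   # Gaussian kernel weights
        return w @ memory / w.sum()              # weighted average of memory

    prototypes = min_max_select(train)
    query = np.array([0.5, -0.2, 1.0])
    print("prototype vectors kept:", len(prototypes))
    print("AAKR estimate (full memory):   ",
          np.round(aakr_predict(train, query), 3))
    print("AAKR estimate (min-max subset):",
          np.round(aakr_predict(prototypes, query), 3))
    ```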

  1. An Integrated Hypothesis on the Domestication of Bactris gasipaes.

    Science.gov (United States)

    Galluzzi, Gea; Dufour, Dominique; Thomas, Evert; van Zonneveld, Maarten; Escobar Salamanca, Andrés Felipe; Giraldo Toro, Andrés; Rivera, Andrés; Salazar Duque, Hector; Suárez Baron, Harold; Gallego, Gerardo; Scheldeman, Xavier; Gonzalez Mejia, Alonso

    2015-01-01

    Peach palm (Bactris gasipaes Kunth) has had a central place in the livelihoods of people in the Americas since pre-Columbian times, notably for its edible fruits and multi-purpose wood. The botanical taxon includes both domesticated and wild varieties. Domesticated var. gasipaes is believed to derive from one or more of the three wild types of var. chichagui identified today, although the exact dynamics and location of the domestication are still uncertain. Drawing on a combination of molecular and phenotypic diversity data, modeling of past climate suitability and existing literature, we present an integrated hypothesis about peach palm's domestication. We support a single initial domestication event in south western Amazonia, giving rise to var. chichagui type 3, the putative incipient domesticate. We argue that subsequent dispersal by humans across western Amazonia, and possibly into Central America, allowed for secondary domestication events through hybridization with resident wild populations, and differential human selection pressures, resulting in the diversity of present-day landraces. The high phenotypic diversity in the Ecuadorian and northern Peruvian Amazon suggests that human selection of different traits was particularly intense there. While acknowledging the need for further data collection, we believe that our results contribute new insights and tools to understand domestication and dispersal patterns of this important native staple, as well as to plan for its conservation.

  2. An Integrated Hypothesis on the Domestication of Bactris gasipaes.

    Directory of Open Access Journals (Sweden)

    Gea Galluzzi

    Peach palm (Bactris gasipaes Kunth) has had a central place in the livelihoods of people in the Americas since pre-Columbian times, notably for its edible fruits and multi-purpose wood. The botanical taxon includes both domesticated and wild varieties. Domesticated var. gasipaes is believed to derive from one or more of the three wild types of var. chichagui identified today, although the exact dynamics and location of the domestication are still uncertain. Drawing on a combination of molecular and phenotypic diversity data, modeling of past climate suitability and existing literature, we present an integrated hypothesis about peach palm's domestication. We support a single initial domestication event in south western Amazonia, giving rise to var. chichagui type 3, the putative incipient domesticate. We argue that subsequent dispersal by humans across western Amazonia, and possibly into Central America, allowed for secondary domestication events through hybridization with resident wild populations, and differential human selection pressures, resulting in the diversity of present-day landraces. The high phenotypic diversity in the Ecuadorian and northern Peruvian Amazon suggests that human selection of different traits was particularly intense there. While acknowledging the need for further data collection, we believe that our results contribute new insights and tools to understand domestication and dispersal patterns of this important native staple, as well as to plan for its conservation.

  3. The estrogen hypothesis of schizophrenia implicates glucose metabolism

    DEFF Research Database (Denmark)

    Olsen, Line; Hansen, Thomas; Jakobsen, Klaus D

    2008-01-01

    implicated by the candidate genes resulting from the estrogen selection. We identified ten candidate genes using this approach that are all active in glucose metabolism and particularly in the glycolysis. Thus, we tested the hypothesis that variants of the glycolytic genes are associated with schizophrenia...

  4. The "Discouraged-Business-Major" Hypothesis: Policy Implications

    Science.gov (United States)

    Marangos, John

    2012-01-01

    This paper uses a relatively large dataset of the stated academic major preferences of economics majors at a relatively large, not highly selective, public university in the USA to identify the "discouraged-business-majors" (DBMs). The DBM hypothesis addresses the phenomenon where students who are screened out of the business curriculum often…

  5. Model selection for dynamical systems via sparse regression and information criteria.

    Science.gov (United States)

    Mangan, N M; Kutz, J N; Brunton, S L; Proctor, J L

    2017-08-01

    We develop an algorithm for model selection which allows for the consideration of a combinatorially large number of candidate models governing a dynamical system. The innovation circumvents a disadvantage of standard model selection, which typically limits the number of candidate models considered due to the intractability of computing information criteria. Using a recently developed sparse identification of nonlinear dynamics algorithm, the sub-selection of candidate models near the Pareto frontier allows feasible computation of Akaike information criterion (AIC) or Bayesian information criterion scores for the remaining candidate models. The information criteria hierarchically rank the candidate models, enabling the automatic and principled selection of the model with the strongest support in relation to the time-series data. Specifically, we show that AIC scores place each candidate model in the strong-support, weak-support, or no-support category. The method correctly recovers several canonical dynamical systems, including a susceptible-exposed-infectious-recovered disease model, Burgers' equation, and the Lorenz equations, identifying the correct dynamical system as the only candidate model with strong support.
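
    As a hedged illustration of the ranking step described above (not the authors' SINDy implementation), the sketch below fits polynomial candidate models of increasing complexity to synthetic data, scores each with the Gaussian-error AIC, and bins them into support categories using conventional delta-AIC cutoffs; the data, the candidate library, and the thresholds are all assumptions made for the example.

```python
import numpy as np

def aic(rss, n, k):
    # Gaussian-error AIC up to an additive constant: n*ln(RSS/n) + 2k
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 200)
x = 1.5 * t - 0.8 * t**2 + rng.normal(0, 0.05, t.size)  # synthetic measurements

# Candidate models of increasing complexity (here: polynomial libraries)
candidates = {deg: np.vander(t, deg + 1, increasing=True) for deg in (1, 2, 3, 5)}

scores = {}
for deg, X in candidates.items():
    beta, rss, *_ = np.linalg.lstsq(X, x, rcond=None)
    rss = float(rss[0]) if rss.size else float(np.sum((x - X @ beta) ** 2))
    scores[deg] = aic(rss, t.size, X.shape[1])

best = min(scores.values())
for deg, s in sorted(scores.items(), key=lambda kv: kv[1]):
    delta = s - best
    support = "strong" if delta < 2 else "weak" if delta < 10 else "none"
    print(f"degree {deg}: AIC={s:.1f}, dAIC={delta:.1f}, support={support}")
```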

  6. Testing the niche variation hypothesis with a measure of body condition

    Science.gov (United States)

    Individual variation and fitness are cornerstones of evolution by natural selection. The niche variation hypothesis (NVH) posits that when interspecific competition is relaxed, intraspecific competition should drive niche expansion by selection favoring use of novel resources. Po...

  7. On market timing and portfolio selectivity: modifying the Henriksson-Merton model

    OpenAIRE

    Goś, Krzysztof

    2011-01-01

    This paper evaluates selected functionalities of the parametric Henriksson-Merton test, a tool designed for measuring market timing and portfolio selectivity capabilities. It also provides a solution to two significant disadvantages of the model: relatively indirect interpretation and vulnerability to parameter insignificance. The model has been put to the test on a group of Polish mutual funds over a period of 63 months (January 2004 – March 2009), providing unsatisfa...
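
    For orientation, a minimal sketch of the canonical (unmodified) Henriksson-Merton regression on invented fund data is shown below: alpha measures selectivity, and the coefficient on the option-like term measures market timing. This is only the textbook specification, not the paper's modified test.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 63  # monthly observations, mirroring the study's 63-month window
rm = rng.normal(0.005, 0.04, n)  # market excess returns (synthetic)
rp = 0.001 + 0.9 * rm + 0.3 * np.maximum(-rm, 0) + rng.normal(0, 0.01, n)

# Henriksson-Merton: Rp = alpha + beta*Rm + gamma*max(0, -Rm) + eps
X = np.column_stack([np.ones(n), rm, np.maximum(-rm, 0)])
alpha, beta, gamma = np.linalg.lstsq(X, rp, rcond=None)[0]
print(f"selectivity alpha={alpha:.4f}, beta={beta:.2f}, timing gamma={gamma:.2f}")
```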

  8. Altered expression of 3-betahydroxysterol delta-24-reductase/selective Alzheimer's disease indicator-1 gene in Huntington's disease models.

    Science.gov (United States)

    Samara, Athina; Galbiati, Mariarita; Luciani, Paola; Deledda, Cristiana; Messi, Elio; Peri, Alessandro; Maggi, Roberto

    2014-08-01

    3-betahydroxysterol delta-24-reductase (DHCR24), also called selective Alzheimer's disease indicator-1, is a crucial enzyme in cholesterol biosynthesis with neuroprotective properties that is downregulated in brain areas affected by Alzheimer's disease. In the present study, we investigated modifications of DHCR24 expression in models of Huntington's disease (HD), a neurodegenerative disorder caused by a polyglutamine expansion in the huntingtin (Htt) protein that induces degeneration of the cerebral cortex and striatum as well as lateral hypothalamic abnormality. Basal expression of DHCR24 and its modulation after oxidative stress were evaluated in rat striatal precursor cells (ST14A) transfected with wild-type Htt or mutant Htt (mHtt) and in brain tissue of an HD mouse model (R6/2). The results showed that DHCR24 transcript levels were decreased in ST14A cells expressing mHtt and in the brain of symptomatic R6/2 mice, but were significantly increased in ST14A cells overexpressing wild-type Htt. In addition, we demonstrated that, in the striatal precursors, the decrease of DHCR24 expression in response to oxidative stress was modified according to the presence of Htt or of its mutant form. Preliminary results indicated a modification of DHCR24 expression in post-mortem brain samples of HD patients. In conclusion, these results support the hypothesis of a possible role of DHCR24 in HD.

  9. Systems-level computational modeling demonstrates fuel selection switching in high capacity running and low capacity running rats

    Science.gov (United States)

    Qi, Nathan R.

    2018-01-01

    High-capacity running (HCR) and low-capacity running (LCR) rats have been bred to represent two extremes of running endurance and have recently demonstrated disparities in fuel usage during transient aerobic exercise. HCR rats can maintain fatty acid (FA) utilization throughout the course of transient aerobic exercise, whereas LCR rats rely predominantly on glucose utilization. We hypothesized that the difference between HCR and LCR fuel utilization could be explained by a difference in mitochondrial density. To test this hypothesis and to investigate mechanisms of fuel selection, we used a constraint-based kinetic analysis of whole-body metabolism to analyze transient exercise data from these rats. Our model analysis used a thermodynamically constrained kinetic framework that accounts for glycolysis, the TCA cycle, and mitochondrial FA transport and oxidation. The model can effectively match the observed relative rates of oxidation of glucose versus FA as a function of ATP demand. In searching for the minimal differences required to explain metabolic function in HCR versus LCR rats, it was determined that the whole-body metabolic phenotype of LCR, compared to HCR, could be explained by a ~50% reduction in total mitochondrial activity with an additional 5-fold reduction in mitochondrial FA transport activity. Finally, we postulate that, over sustained periods of exercise, LCR rats can partly overcome the initial deficit in FA catabolic activity by upregulating FA transport and/or oxidation processes. PMID:29474500

  10. Stock Selection for Portfolios Using Expected Utility-Entropy Decision Model

    Directory of Open Access Journals (Sweden)

    Jiping Yang

    2017-09-01

    Full Text Available Yang and Qiu proposed and then recently improved an expected utility-entropy (EU-E) measure of risk and decision model. When segregation holds, Luce et al. derived an expected utility term plus a constant multiple of the Shannon entropy as the representation of risky choices, further demonstrating the reasonability of the EU-E decision model. In this paper, we apply the EU-E decision model to selecting the set of stocks to be included in portfolios. We first select 7 and 10 stocks from the 30 component stocks of the Dow Jones Industrial Average index, and then derive and compare the efficient portfolios in the mean-variance framework. The conclusions imply that efficient portfolios composed of 7 (10) stocks selected using the EU-E model with intermediate intervals of the tradeoff coefficient are more efficient than those composed of the sets of stocks selected using the expected utility model. Furthermore, the efficient portfolio of 7 (10) stocks selected by the EU-E decision model has almost the same efficient frontier as that of the sample of all stocks. This suggests the necessity of incorporating both the expected utility and Shannon entropy together when making risky decisions, further demonstrating the importance of Shannon entropy as a measure of uncertainty, as well as the applicability of the EU-E model as a decision-making model.
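
    The exact functional form of the EU-E measure is defined in Yang and Qiu's papers; the sketch below is only a schematic stand-in that trades an expected-utility term off against the Shannon entropy of each stock's empirical return distribution through an assumed coefficient lambda and then picks the top seven stocks. The utility function, the discretization, and the data are all assumptions.

```python
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def eu_e_score(returns, lam=0.5, n_bins=10):
    # Expected utility under a simple log utility on gross returns
    eu = np.mean(np.log1p(returns))
    # Entropy of the discretized empirical return distribution
    hist, _ = np.histogram(returns, bins=n_bins)
    h = shannon_entropy(hist / hist.sum())
    # Illustrative tradeoff: reward utility, penalize uncertainty (entropy)
    return (1 - lam) * eu - lam * h

rng = np.random.default_rng(2)
stocks = {f"S{i}": rng.normal(0.0005 * i, 0.01 + 0.002 * i, 1000)
          for i in range(1, 31)}  # 30 synthetic component stocks
ranked = sorted(stocks, key=lambda s: eu_e_score(stocks[s], lam=0.5), reverse=True)
print("selected 7:", ranked[:7])
```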

  11. Effects of hypothesis and assigned task on question selection strategies.

    NARCIS (Netherlands)

    Meertens, R.W.; Koomen, W.; Delpeut, A.P.; Hager, G.A.

    1980-01-01

    70 undergraduates participated in an experiment in which they were provided with an extrovert profile (1-sided task condition) or an extrovert profile together with an introvert profile (2-sided task condition). Ss received information about a male target person, who was described either as an

  12. Ecological Hypothesis of Dentin and Root Caries.

    Science.gov (United States)

    Takahashi, Nobuhiro; Nyvad, Bente

    2016-01-01

    Recent advances regarding the caries process indicate that ecological phenomena induced by bacterial acid production tilt the de- and remineralization balance of the dental hard tissues towards demineralization through bacterial acid-induced adaptation and selection within the microbiota - from the dynamic stability stage to the aciduric stage via the acidogenic stage [Takahashi and Nyvad, 2008]. Dentin and root caries can also be partly explained by this hypothesis; however, the fact that these tissues contain a considerable amount of organic material suggests that protein degradation is involved in caries formation. In this review, we compiled relevant histological, biochemical, and microbiological information about dentin/root caries and refined the hypothesis by adding degradation of the organic matrix (the proteolytic stage) to the abovementioned stages. Bacterial acidification not only induces demineralization and exposure of the organic matrix in dentin/root surfaces but also activation of dentin-embedded and salivary matrix metalloproteinases and cathepsins. These phenomena initiate degradation of the demineralized organic matrix in dentin/root surfaces. While a bacterial involvement has never been confirmed in the initial degradation of organic material, the detection of proteolytic/amino acid-degrading bacteria and bacterial metabolites in dentin and root caries suggests a bacterial digestion and metabolism of partly degraded matrix. Moreover, bacterial metabolites might induce pulpitis as an inflammatory/immunomodulatory factor. Root and dentin surfaces are always at risk of becoming demineralized in the oral cavity, and exposed organic materials can be degraded by host-derived proteases contained in saliva and dentin itself. New approaches to the prevention and treatment of root/dentin caries are required. © 2016 S. Karger AG, Basel.

  13. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is a computationally feasible sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence needed for prior model selection. Within the nested sampling algorithm, a constrained sampling step is performed; for this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems
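
    For intuition about the evidence computation, a minimal nested-sampling loop on a toy one-dimensional problem is sketched below; the constrained replacement step uses brute-force rejection sampling from the prior, where the paper instead substitutes HMC with stochastic-ensemble gradients. The likelihood, prior, and loop sizes are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_like(theta):
    # Toy likelihood: Gaussian centered at 0.5 with sd 0.1
    return -0.5 * ((theta - 0.5) / 0.1) ** 2

# Uniform prior on [0, 1]; N live points
N, iters = 100, 600
live = rng.uniform(0, 1, N)
live_ll = log_like(live)

logZ, X_prev = -np.inf, 1.0
for i in range(1, iters + 1):
    worst = np.argmin(live_ll)
    L_min = live_ll[worst]
    X_i = np.exp(-i / N)                 # expected prior-mass shrinkage
    logw = np.log(X_prev - X_i) + L_min  # weight of the discarded shell
    logZ = np.logaddexp(logZ, logw)
    X_prev = X_i
    # Constrained replacement: brute-force rejection from the prior here;
    # the paper uses HMC with ensemble gradient estimates instead
    while True:
        cand = rng.uniform(0, 1)
        if log_like(cand) > L_min:
            break
    live[worst], live_ll[worst] = cand, log_like(cand)

# The small remaining live-point mass is ignored for brevity
print(f"log-evidence estimate: {logZ:.3f}")
```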

  14. A concurrent optimization model for supplier selection with fuzzy quality loss

    Energy Technology Data Exchange (ETDEWEB)

    Rosyidi, C.; Murtisari, R.; Jauhari, W.

    2017-07-01

    The purpose of this research is to develop a concurrent supplier selection model to minimize the purchasing cost and fuzzy quality loss, considering process capability and the assembled product specification. Design/methodology/approach: This research integrates fuzzy quality loss in the model to concurrently solve the decision making in the detailed design and manufacturing stages. Findings: The resulting model can be used to concurrently select the optimal supplier and determine the tolerance of the components. The model balances the purchasing cost and fuzzy quality loss. Originality/value: An assembled product consists of many components which must be purchased from suppliers. Fuzzy quality loss is integrated in the supplier selection model to allow for vagueness in the final assembly by grouping assemblies into several grades according to the resulting assembly tolerance.
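
    As a loose illustration of the balance the model strikes (not the authors' formulation), the sketch below enumerates supplier combinations for a two-component assembly, rejects combinations whose worst-case tolerance stack-up violates the assembled-product specification, and minimizes purchasing cost plus a crisp Taguchi-style quadratic quality loss standing in for the fuzzy loss; all data are invented.

```python
from itertools import product

# Candidate suppliers per component: (name, unit price, component tolerance)
suppliers = {
    "comp1": [("A", 2.0, 0.03), ("B", 1.5, 0.05), ("C", 1.1, 0.08)],
    "comp2": [("D", 3.0, 0.02), ("E", 2.2, 0.06)],
}
ASSY_TOL, K_LOSS = 0.10, 400.0  # assembly spec and loss coefficient (assumed)

best = None
for (n1, p1, t1), (n2, p2, t2) in product(suppliers["comp1"], suppliers["comp2"]):
    if t1 + t2 > ASSY_TOL:      # worst-case tolerance stack-up must meet the spec
        continue
    # Total cost = purchasing cost + quadratic quality loss on each tolerance
    cost = (p1 + p2) + K_LOSS * (t1**2 + t2**2)
    if best is None or cost < best[0]:
        best = (cost, n1, n2)
print(f"selected suppliers: {best[1]}+{best[2]}, total cost={best[0]:.2f}")
```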

  15. Bankruptcy prediction using SVM models with a new approach to combine features selection and parameter optimisation

    Science.gov (United States)

    Zhou, Ligang; Keung Lai, Kin; Yen, Jerome

    2014-03-01

    Due to the economic significance of bankruptcy prediction of companies for financial institutions, investors and governments, many quantitative methods have been used to develop effective prediction models. The support vector machine (SVM), a powerful classification method, has been used for this task; however, the performance of SVM is sensitive to model form, parameter setting and feature selection. In this study, a new approach based on direct search and feature-ranking technology is proposed to optimise feature selection and parameter setting for 1-norm and least-squares SVM models for bankruptcy prediction. This approach is also compared to SVM models with parameter optimisation and feature selection by the popular genetic algorithm technique. The experimental results on a data set with 2010 instances show that the proposed models are good alternatives for bankruptcy prediction.
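
    A rough scikit-learn analogue of coupling feature selection with parameter setting for the 1-norm SVM is sketched below on synthetic data sized like the study's data set; the l1 penalty zeroes out irrelevant feature weights while a grid search tunes C. This is a stand-in for, not a reproduction of, the authors' direct-search procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic stand-in for the 2010-instance bankruptcy data set
X, y = make_classification(n_samples=2010, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 1-norm SVM: the l1 penalty drives irrelevant feature weights to exactly
# zero, coupling feature selection with parameter setting in one search
pipe = Pipeline([("scale", StandardScaler()),
                 ("svm", LinearSVC(penalty="l1", dual=False, max_iter=5000))])
search = GridSearchCV(pipe, {"svm__C": [0.01, 0.1, 1, 10]}, cv=5)
search.fit(X_tr, y_tr)

w = search.best_estimator_.named_steps["svm"].coef_.ravel()
print(f"best C={search.best_params_['svm__C']}, "
      f"kept {np.sum(w != 0)}/{w.size} features, "
      f"test accuracy={search.score(X_te, y_te):.3f}")
```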

  16. A concurrent optimization model for supplier selection with fuzzy quality loss

    International Nuclear Information System (INIS)

    Rosyidi, C.; Murtisari, R.; Jauhari, W.

    2017-01-01

    The purpose of this research is to develop a concurrent supplier selection model to minimize the purchasing cost and fuzzy quality loss, considering process capability and the assembled product specification. Design/methodology/approach: This research integrates fuzzy quality loss in the model to concurrently solve the decision making in the detailed design and manufacturing stages. Findings: The resulting model can be used to concurrently select the optimal supplier and determine the tolerance of the components. The model balances the purchasing cost and fuzzy quality loss. Originality/value: An assembled product consists of many components which must be purchased from suppliers. Fuzzy quality loss is integrated in the supplier selection model to allow for vagueness in the final assembly by grouping assemblies into several grades according to the resulting assembly tolerance.

  17. Island-Model Genomic Selection for Long-Term Genetic Improvement of Autogamous Crops.

    Science.gov (United States)

    Yabe, Shiori; Yamasaki, Masanori; Ebana, Kaworu; Hayashi, Takeshi; Iwata, Hiroyoshi

    2016-01-01

    Acceleration of genetic improvement of autogamous crops such as wheat and rice is necessary to increase cereal production in response to the global food crisis. Population and pedigree methods of breeding, which are based on inbred line selection, are used commonly in the genetic improvement of autogamous crops. These methods, however, produce few novel combinations of genes in a breeding population. Recurrent selection promotes recombination among genes and produces novel combinations of genes in a breeding population, but it requires selection based on single-plant evaluation, which is inaccurate. Genomic selection (GS), which can predict the genetic potential of individuals based on their marker genotypes, might provide highly reliable single-plant evaluation and might be effective in recurrent selection. To evaluate the efficiency of recurrent selection with GS, we conducted simulations using real marker genotype data of rice cultivars. Additionally, we introduced the concept of an "island model" inspired by evolutionary algorithms that might be useful to maintain genetic variation through the breeding process. We conducted GS simulations using real marker genotype data of rice cultivars to evaluate the efficiency of recurrent selection and the island model in an autogamous species. Results demonstrated the importance of producing novel combinations of genes through recurrent selection. An initial population derived from an admixture of multiple bi-parental crosses showed larger genetic gains than a population derived from a single bi-parental cross across whole cycles, suggesting the importance of genetic variation in an initial population. The island-model GS better maintained genetic improvement in later generations than the other GS methods, suggesting that the island-model GS can utilize genetic variation in breeding and can retain alleles with small effects in the breeding population. The island-model GS will become a new breeding method that enhances the potential of genomic
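
    A toy island-model selection loop in the spirit of the description above might look as follows: "genomic prediction" is caricatured as the true marker-effect sum blurred by noise, truncation selection acts on predictions within each island, and a few top individuals migrate between islands every few generations. Population sizes, effect distributions, and the migration scheme are invented; a real implementation would train a prediction model such as GBLUP on phenotypes.

```python
import numpy as np

rng = np.random.default_rng(4)
L, N, ISLANDS, GENS = 200, 50, 4, 30
effects = rng.normal(0, 1, L)                  # additive marker effects (known here)

def gv(pop):
    return pop @ effects                        # true genetic values

islands = [rng.integers(0, 2, (N, L)) for _ in range(ISLANDS)]

for g in range(GENS):
    for k in range(ISLANDS):
        pop = islands[k]
        # "Genomic" prediction: true value blurred by prediction error
        pred = gv(pop) + rng.normal(0, effects.std() * 3, N)
        parents = pop[np.argsort(pred)[-10:]]   # truncation selection on predictions
        # Recurrent selection: random crossing with free recombination per locus
        moms = parents[rng.integers(0, 10, N)]
        dads = parents[rng.integers(0, 10, N)]
        mask = rng.integers(0, 2, (N, L)).astype(bool)
        islands[k] = np.where(mask, moms, dads)
    if g % 5 == 4:                              # periodic migration between islands
        for k in range(ISLANDS):
            j = (k + 1) % ISLANDS
            islands[j][:2] = islands[k][np.argsort(gv(islands[k]))[-2:]]

print("mean genetic value per island:", [float(gv(p).mean()) for p in islands])
```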

  18. Island-Model Genomic Selection for Long-Term Genetic Improvement of Autogamous Crops.

    Directory of Open Access Journals (Sweden)

    Shiori Yabe

    Full Text Available Acceleration of genetic improvement of autogamous crops such as wheat and rice is necessary to increase cereal production in response to the global food crisis. Population and pedigree methods of breeding, which are based on inbred line selection, are used commonly in the genetic improvement of autogamous crops. These methods, however, produce few novel combinations of genes in a breeding population. Recurrent selection promotes recombination among genes and produces novel combinations of genes in a breeding population, but it requires selection based on single-plant evaluation, which is inaccurate. Genomic selection (GS), which can predict the genetic potential of individuals based on their marker genotypes, might provide highly reliable single-plant evaluation and might be effective in recurrent selection. To evaluate the efficiency of recurrent selection with GS, we conducted simulations using real marker genotype data of rice cultivars. Additionally, we introduced the concept of an "island model" inspired by evolutionary algorithms that might be useful to maintain genetic variation through the breeding process. We conducted GS simulations using real marker genotype data of rice cultivars to evaluate the efficiency of recurrent selection and the island model in an autogamous species. Results demonstrated the importance of producing novel combinations of genes through recurrent selection. An initial population derived from an admixture of multiple bi-parental crosses showed larger genetic gains than a population derived from a single bi-parental cross across whole cycles, suggesting the importance of genetic variation in an initial population. The island-model GS better maintained genetic improvement in later generations than the other GS methods, suggesting that the island-model GS can utilize genetic variation in breeding and can retain alleles with small effects in the breeding population. The island-model GS will become a new breeding method that enhances the

  19. Bayesian selection of misspecified models is overconfident and may cause spurious posterior probabilities for phylogenetic trees.

    Science.gov (United States)

    Yang, Ziheng; Zhu, Tianqi

    2018-02-20

    The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analyses of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest, since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that, when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win as the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor behind the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results for the application of Bayesian model selection to the evaluation of opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.
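
    The polarized behavior is easy to reproduce in a hedged toy simulation: below, data come from N(0,1) while the two candidate models are N(-0.5,1) and N(0.5,1), equally wrong by symmetry, so the log Bayes factor reduces to a zero-mean random walk and the posterior model probability drifts toward 0 or 1 as the sample grows.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(5)

def posterior_m1(x):
    # Equal prior odds; both models are unit-variance normals, so the log
    # Bayes factor log p(x|M1) - log p(x|M2) collapses to -sum(x)
    return float(expit(-np.sum(x)))

# True data come from N(0,1): the candidates N(-0.5,1) and N(0.5,1) are
# equally wrong, so the posterior wanders, then polarizes with growing n
for n in (10, 100, 1000, 10000, 100000):
    x = rng.normal(0.0, 1.0, n)
    print(f"n={n:>6}: P(M1 | data) = {posterior_m1(x):.4f}")
```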

  20. STUDY CONCERNING THE ELABORATION OF CERTAIN ORIENTATION MODELS AND THE INITIAL SELECTION FOR SPEED SKATING

    Directory of Open Access Journals (Sweden)

    Vaida Marius

    2009-12-01

    Full Text Available This study started from the premise that elaborating and applying orientation and initial-selection models for speed skating would yield the superior results that are needed, given the current evolution of high-performance sport in general and of speed skating in particular. The aim of the study was to identify an orientation model and a complete initial-selection model based on the aptitudes favorable to speed skating. Orientation and initial-selection models were developed on the basis of the research conducted and were tested experimentally; the study started from data on 120 subjects, with the full experiment carried out on 32 subjects divided into two groups, one selected using the proposed model and the other composed of randomly selected subjects. These models can serve as common working instruments for both the orientation process and initial selection, and they can be integrated into everyday practice, being easily used by the coaches in charge of athlete selection as well as by physical education teachers and schoolteachers who work with children of an early age.

  1. Odor preference learning and memory modify GluA1 phosphorylation and GluA1 distribution in the neonate rat olfactory bulb: testing the AMPA receptor hypothesis in an appetitive learning model.

    Science.gov (United States)

    Cui, Wen; Darby-King, Andrea; Grimes, Matthew T; Howland, John G; Wang, Yu Tian; McLean, John H; Harley, Carolyn W

    2011-01-01

    An increase in synaptic AMPA receptors is hypothesized to mediate learning and memory. AMPA receptor increases have been reported in aversive learning models, although it is not clear whether they persist with memory maintenance. Here we examine AMPA receptor changes in a cAMP/PKA/CREB-dependent appetitive learning model: odor preference learning in the neonate rat. Rat pups were given a single pairing of peppermint and 2 mg/kg isoproterenol, which produces a 24-h, but not a 48-h, peppermint preference in the 7-d-old rat pup. GluA1 PKA-dependent phosphorylation peaked 10 min after the 10-min training trial and returned to baseline within 90 min. At 24 h, GluA1 subunits did not change overall but were significantly increased in synaptoneurosomes, consistent with increased membrane insertion. Immunohistochemistry revealed a significant increase in GluA1 subunits in olfactory bulb glomeruli, the targets of olfactory nerve axons. Glomerular increases were seen at 3 and 24 h after odor exposure in trained pups, but not in control pups. GluA1 increases were not seen as early as 10 min after training and were no longer observed 48 h after training, when odor preference is no longer expressed behaviorally. Thus, the pattern of increased GluA1 membrane expression closely follows the memory timeline. Further, blocking GluA1 insertion using an interference peptide derived from the carboxyl tail of the GluA1 subunit inhibited 24-h odor preference memory, providing causal support for our hypothesis. PKA-mediated GluA1 phosphorylation and later GluA1 insertion could, conjointly, provide increased AMPA function to support both short-term and long-term appetitive memory.

  2. The linear hypothesis - an idea whose time has passed

    International Nuclear Information System (INIS)

    Tschaeche, A.N.

    1995-01-01

    The linear no-threshold hypothesis is the basis for radiation protection standards in the United States. In the words of the National Council on Radiation Protection and Measurements (NCRP), the hypothesis is: "In the interest of estimating effects in humans conservatively, it is not unreasonable to follow the assumption of a linear relationship between dose and effect in the low dose regions for which direct observational data are not available." The International Commission on Radiological Protection (ICRP) stated the hypothesis in a slightly different manner: "One such basic assumption ... is that ... there is ... a linear relationship without threshold between dose and the probability of an effect." The hypothesis was necessary 50 yr ago when it was first enunciated because the dose-effect curve for ionizing radiation for effects in humans was not known. The ICRP and NCRP needed a model to extrapolate high-dose effects to low-dose effects. So the linear no-threshold hypothesis was born. Certain details of the history of the development and use of the linear hypothesis are presented. In particular, use of the hypothesis by the U.S. regulatory agencies is examined. Over time, the sense of the hypothesis has been corrupted. The corruption of the hypothesis into the current paradigm of "a little radiation, no matter how small, can and will harm you" is presented. The reasons the corruption occurred are proposed. The effects of the corruption are enumerated, specifically, the use of the corruption by the antinuclear forces in the United States and some of the huge costs to U.S. taxpayers due to the corruption. An alternative basis for radiation protection standards to assure public safety, based on the weight of scientific evidence on radiation health effects, is proposed

  3. Introducing the refined gravity hypothesis of extreme sexual size dimorphism

    Directory of Open Access Journals (Sweden)

    Corcobado Guadalupe

    2010-08-01

    Full Text Available Background: Explanations for the evolution of female-biased, extreme Sexual Size Dimorphism (SSD), which has puzzled researchers since Darwin, are still controversial. Here we propose an extension of the Gravity Hypothesis (i.e., the GH, which postulates a climbing advantage for small males) that in conjunction with the fecundity hypothesis appears to have the most general power to explain the evolution of SSD in spiders so far. In this "Bridging GH" we propose that bridging locomotion (i.e., walking upside-down under own-made silk bridges) may be behind the evolution of extreme SSD. A biomechanical model shows that there is a physical constraint for large spiders to bridge. This should lead to a trade-off between other traits and dispersal in which bridging would favor smaller sizes and other selective forces (e.g., fecundity selection in females) would favor larger sizes. If bridging allows faster dispersal, small males would have a selective advantage by enjoying more mating opportunities. We predicted that both large males and females would show a lower propensity to bridge, and that SSD would be negatively correlated with sexual dimorphism in bridging propensity. To test these hypotheses we experimentally induced bridging in males and females of 13 species of spiders belonging to the two clades in which bridging locomotion has evolved independently and in which most of the cases of extreme SSD in spiders are found. Results: We found that 1) as the degree of SSD increased and females became larger, females tended to bridge less relative to males, and that 2) smaller males and females show a higher propensity to bridge. Conclusions: Physical constraints make bridging inefficient for large spiders. Thus, in species where bridging is a very common mode of locomotion, small males, by being more efficient at bridging, will be competitively superior and enjoy more mating opportunities. This "Bridging GH" helps to solve the controversial question of

  4. Introducing the refined gravity hypothesis of extreme sexual size dimorphism.

    Science.gov (United States)

    Corcobado, Guadalupe; Rodríguez-Gironés, Miguel A; De Mas, Eva; Moya-Laraño, Jordi

    2010-08-03

    Explanations for the evolution of female-biased, extreme Sexual Size Dimorphism (SSD), which has puzzled researchers since Darwin, are still controversial. Here we propose an extension of the Gravity Hypothesis (i.e., the GH, which postulates a climbing advantage for small males) that in conjunction with the fecundity hypothesis appears to have the most general power to explain the evolution of SSD in spiders so far. In this "Bridging GH" we propose that bridging locomotion (i.e., walking upside-down under own-made silk bridges) may be behind the evolution of extreme SSD. A biomechanical model shows that there is a physical constraint for large spiders to bridge. This should lead to a trade-off between other traits and dispersal in which bridging would favor smaller sizes and other selective forces (e.g. fecundity selection in females) would favor larger sizes. If bridging allows faster dispersal, small males would have a selective advantage by enjoying more mating opportunities. We predicted that both large males and females would show a lower propensity to bridge, and that SSD would be negatively correlated with sexual dimorphism in bridging propensity. To test these hypotheses we experimentally induced bridging in males and females of 13 species of spiders belonging to the two clades in which bridging locomotion has evolved independently and in which most of the cases of extreme SSD in spiders are found. We found that 1) as the degree of SSD increased and females became larger, females tended to bridge less relative to males, and that 2) smaller males and females show a higher propensity to bridge. Physical constraints make bridging inefficient for large spiders. Thus, in species where bridging is a very common mode of locomotion, small males, by being more efficient at bridging, will be competitively superior and enjoy more mating opportunities. This "Bridging GH" helps to solve the controversial question of what keeps males small and also contributes to

  5. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    Science.gov (United States)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multiple stepwise regressions is proposed for the General expression of the Nonlinear AutoRegressive (GNAR) model, which converts the model order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to measure how both the newly introduced and the originally included variables improve the model's fit, and these determine which model variables are retained or eliminated. The optimal model is then obtained through data-fitting measures or significance tests. Simulation and classic time-series data experiments show that the proposed method is simple, reliable and applicable to practical engineering.
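
    A sketch of the general idea, assuming BIC as the retention criterion rather than the paper's stepwise significance statistics: lagged and nonlinear candidate terms of a synthetic series are added greedily for as long as the criterion improves.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 500
x = np.zeros(T)
for t in range(2, T):  # synthetic nonlinear AR(2) process
    x[t] = 0.5 * x[t-1] - 0.3 * x[t-2] + 0.2 * x[t-1]**2 + rng.normal(0, 0.1)

# Candidate terms: linear lags plus nonlinear products/squares
y = x[3:]
cand = {
    "x[t-1]": x[2:-1], "x[t-2]": x[1:-2], "x[t-3]": x[:-3],
    "x[t-1]^2": x[2:-1]**2, "x[t-2]^2": x[1:-2]**2,
    "x[t-1]*x[t-2]": x[2:-1] * x[1:-2],
}

def bic(rss, n, k):
    return n * np.log(rss / n) + k * np.log(n)

chosen, n = [], y.size
best_bic = bic(np.sum((y - y.mean())**2), n, 1)
while True:
    trials = []
    for name in cand:
        if name in chosen:
            continue
        X = np.column_stack([np.ones(n)] + [cand[c] for c in chosen + [name]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta)**2)
        trials.append((bic(rss, n, X.shape[1]), name))
    if not trials:
        break
    score, name = min(trials)
    if score >= best_bic:  # stop once no candidate improves the criterion
        break
    best_bic, chosen = score, chosen + [name]

print("selected terms:", chosen)
```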

  6. Modeling of Clostridium tyrobutyricum for Butyric Acid Selectivity in Continuous Fermentation

    Directory of Open Access Journals (Sweden)

    Jianjun Du

    2014-04-01

    Full Text Available A mathematical model was developed to describe batch and continuous fermentation of glucose to organic acids with Clostridium tyrobutyricum. A modified Monod equation was used to describe cell growth, and a Luedeking-Piret equation was used to describe the production of butyric and acetic acids. Using the batch fermentation equations, models predicting butyric acid selectivity for continuous fermentation were also developed. The model showed that butyric acid production was a strong function of cell mass, while acetic acid production was a function of cell growth rate. Further, it was found that at high acetic acid concentrations, acetic acid was metabolized to butyric acid and that this conversion could be modeled. In batch fermentation, high butyric acid selectivity occurred at high initial cell or glucose concentrations. In continuous fermentation, decreased dilution rate improved selectivity; at a dilution rate of 0.028 h−1, the selectivity reached 95.8%. The model and experimental data showed that at total cell recycle, the butyric acid selectivity could reach 97.3%. This model could be used to optimize butyric acid production using C. tyrobutyricum in a continuous fermentation scheme. This is the first study that mathematically describes batch, steady state, and dynamic behavior of C. tyrobutyricum for butyric acid production.
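
    For concreteness, a minimal version of such a model, with invented (not fitted) parameters, integrates Monod growth with Luedeking-Piret production terms in which the growth-associated coefficient dominates for acetate and the biomass-associated coefficient dominates for butyrate, matching the qualitative finding above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only, not fitted values from the paper
mu_max, Ks, Yxs = 0.35, 2.0, 0.15    # 1/h, g/L, gX/gS
a_but, b_but = 0.5, 0.12             # butyrate: mainly biomass-associated (beta)
a_ace, b_ace = 1.2, 0.01             # acetate: mainly growth-associated (alpha)

def rhs(t, y):
    X, S, Pb, Pa = y
    mu = mu_max * S / (Ks + S) if S > 0 else 0.0   # Monod growth rate
    dX = mu * X
    dS = -dX / Yxs
    dPb = a_but * dX + b_but * X                   # Luedeking-Piret kinetics
    dPa = a_ace * dX + b_ace * X
    return [dX, dS, dPb, dPa]

sol = solve_ivp(rhs, (0, 40), [0.1, 50.0, 0.0, 0.0], max_step=0.1)
X, S, Pb, Pa = sol.y[:, -1]
print(f"final: X={X:.2f}, S={S:.2f}, butyrate={Pb:.2f}, acetate={Pa:.2f} g/L")
print(f"butyric acid selectivity: {Pb / (Pb + Pa):.2%}")
```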

  7. An Improved Test Selection Optimization Model Based on Fault Ambiguity Group Isolation and Chaotic Discrete PSO

    Directory of Open Access Journals (Sweden)

    Xiaofeng Lv

    2018-01-01

    Full Text Available Sensor data-based test selection optimization is the basis for designing a test plan, which ensures that the system is tested under constraints on conventional indexes such as the fault detection rate (FDR) and the fault isolation rate (FIR). From the perspective of equipment maintenance support, fault-isolation ambiguity has a significant effect on the result of test selection. In this paper, an improved test selection optimization model is proposed that considers the ambiguity degree of fault isolation. In the new model, the fault-test dependency matrix is adopted to model the correlation between system faults and test groups. The objective function of the proposed model minimizes the test cost under FDR and FIR constraints. An improved chaotic discrete particle swarm optimization (PSO) algorithm is adopted to solve the improved test selection optimization model. The new test selection optimization model is more consistent with real, complicated engineering systems. Experimental results verify the effectiveness of the proposed method.
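
    A compact stand-in for the approach, with invented data and a logistic map playing the role of the chaotic component, is sketched below: a binary PSO searches over fault-test selections while FDR and FIR shortfalls enter the objective as penalties.

```python
import numpy as np

rng = np.random.default_rng(7)
F, T = 20, 15
D = (rng.random((F, T)) < 0.3).astype(int)   # fault-test dependency matrix
cost = rng.uniform(1, 5, T)

def evaluate(sel):
    mask = sel.astype(bool)
    if not mask.any():
        return 1e9
    sig = D[:, mask]
    fdr = np.mean(sig.any(axis=1))                       # detected faults
    _, counts = np.unique(sig, axis=0, return_counts=True)
    fir = np.sum(counts == 1) / F                        # uniquely isolated faults
    penalty = 1e3 * (max(0, 0.95 - fdr) + max(0, 0.80 - fir))
    return cost[mask].sum() + penalty

# Discrete PSO; "chaotic" initialization approximated by a logistic map
P, iters = 30, 200
z = rng.random((P, T))
for _ in range(20):
    z = 4.0 * z * (1.0 - z)
pos = (z > 0.5).astype(float)
vel = rng.normal(0, 1, (P, T))
pbest, pbest_f = pos.copy(), np.array([evaluate(p) for p in pos])
g = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((P, T)), rng.random((P, T))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = (rng.random((P, T)) < 1 / (1 + np.exp(-vel))).astype(float)
    f = np.array([evaluate(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    g = pbest[np.argmin(pbest_f)].copy()

print(f"selected tests: {np.flatnonzero(g)}, objective={evaluate(g):.2f}")
```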

  8. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
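
    With modern tooling, the comparison itself takes a few lines; the sketch below uses scikit-learn's EM-fitted GaussianMixture and the stock AIC (not the paper's EM-adjusted variant) to discriminate a single Gaussian from a two-component mixture loosely resembling Middleton's Class A impulsive noise.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
# Impulsive noise on a Gaussian background, loosely Class-A-like
x = np.concatenate([rng.normal(0, 1, 900), rng.normal(0, 5, 100)]).reshape(-1, 1)

for k in (1, 2):   # k=1: single Gaussian; k=2: mixture fitted by EM
    gm = GaussianMixture(n_components=k, random_state=0).fit(x)
    print(f"components={k}: AIC={gm.aic(x):.1f}")
```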

  9. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white-noise-loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal

  10. Selecting ELL Textbooks: A Content Analysis of Language-Teaching Models

    Science.gov (United States)

    LaBelle, Jeffrey T.

    2011-01-01

    Many middle school teachers lack adequate criteria to critically select materials that represent a variety of L2 teaching models. This study analyzes the illustrated and written content of 33 ELL textbooks to determine the range of L2 teaching models represented. The researchers asked to what extent middle school ELL texts depict frequency and…

  11. Parameter subset selection for the dynamic calibration of activated sludge models (ASMs): experience versus systems analysis

    DEFF Research Database (Denmark)

    Ruano, MV; Ribes, J; de Pauw, DJW

    2007-01-01

    In this work we address the issue of parameter subset selection within the scope of activated sludge model calibration. To this end, we evaluate two approaches: (i) systems analysis and (ii) an experience-based approach. The evaluation has been carried out using a dynamic model (ASM2d) calibrated
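
    One common systems-analysis route for this problem, sketched below on a synthetic sensitivity matrix, ranks parameters by Brun et al.'s importance measure and screens subsets with the collinearity index; whether this matches the paper's exact procedure is an assumption.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(10)

# Toy sensitivity matrix: rows = observations, columns = model parameters.
# In practice each column holds scaled output sensitivities dy/dtheta_j
# computed from the activated sludge model (e.g., by finite differences).
S = rng.normal(0, 1, (200, 6))
S[:, 5] = 0.9 * S[:, 0] + 0.1 * rng.normal(0, 1, 200)  # a near-redundant parameter

delta = np.sqrt(np.mean(S**2, axis=0))    # importance ranking (delta-msqr)
print("importance order:", np.argsort(delta)[::-1])

def collinearity(idx):
    # Brun et al. (2001): gamma = 1 / sqrt(min eigenvalue of normalized S^T S)
    Sn = S[:, idx] / np.linalg.norm(S[:, idx], axis=0)
    return 1.0 / np.sqrt(np.linalg.eigvalsh(Sn.T @ Sn).min())

# Keep subsets whose collinearity index stays below the usual threshold ~15
ok = [c for c in combinations(range(6), 3) if collinearity(list(c)) < 15]
print(f"{len(ok)} identifiable 3-parameter subsets, e.g. {ok[0]}")
```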

  12. Discounting model selection with area-based measures: A case for numerical integration.

    Science.gov (United States)

    Gilroy, Shawn P; Hantula, Donald A

    2018-03-01

    A novel method for analyzing delay discounting data is proposed. This newer metric, a model-based Area Under the Curve (AUC) combining approximate Bayesian model selection and numerical integration, was compared to the point-based AUC methods developed by Myerson, Green, and Warusawitharana (2001) and extended by Borges, Kuang, Milhorn, and Yi (2016). Using data from computer simulation and a published study, comparisons of these methods indicated that a model-based form of AUC offered a more consistent and statistically robust measurement of area than point-based methods alone. Beyond providing a form of AUC directly from a discounting model, numerical integration permitted a general calculation in cases where the Effective Delay 50 (ED50) measure could not be calculated. This allowed discounting model selection to proceed in conditions where data are traditionally more challenging to model and measure, a situation where point-based AUC methods are often enlisted. Results from simulation and existing data indicated that numerical integration methods extended both the area-based interpretation of delay discounting and the discounting model selection approach. Limitations of point-based AUC as a first-line analysis of discounting and additional extensions of discounting model selection are also discussed. © 2018 Society for the Experimental Analysis of Behavior.
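
    A hedged sketch of the two quantities being compared: fit Mazur's one-parameter hyperbolic model to invented indifference points, integrate the fitted curve numerically for a model-based AUC, and contrast it with the trapezoidal point-based AUC (the normalization here assumes values are proportions of the delayed amount and integrates only over the observed delay range).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)   # days (invented)
values = np.array([0.95, 0.85, 0.60, 0.45, 0.30, 0.20])    # indifference points

def hyperbolic(d, k):
    # Mazur's one-parameter hyperbolic discounting model V = 1 / (1 + k*D)
    return 1.0 / (1.0 + k * d)

(k_hat,), _ = curve_fit(hyperbolic, delays, values, p0=[0.01])

# Model-based AUC: numerically integrate the fitted curve, normalize by range
area, _ = quad(hyperbolic, 0, delays.max(), args=(k_hat,))
auc_model = area / delays.max()

# Point-based AUC: trapezoids over the raw points (after Myerson et al., 2001)
auc_points = np.sum((values[1:] + values[:-1]) / 2 * np.diff(delays)) / delays.max()
print(f"k={k_hat:.4f}, model-based AUC={auc_model:.3f}, "
      f"point-based AUC={auc_points:.3f}")
```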

  13. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  14. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klauw, B.; Koning, R.H.

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  15. Training Self-Regulated Learning Skills with Video Modeling Examples: Do Task-Selection Skills Transfer?

    Science.gov (United States)

    Raaijmakers, Steven F.; Baars, Martine; Schaap, Lydia; Paas, Fred; van Merriënboer, Jeroen; van Gog, Tamara

    2018-01-01

    Self-assessment and task-selection skills are crucial in self-regulated learning situations in which students can choose their own tasks. Prior research suggested that training with video modeling examples, in which another person (the model) demonstrates and explains the cyclical process of problem-solving task performance, self-assessment, and…

  16. A hidden Markov model for investigating recent positive selection through haplotype structure.

    Science.gov (United States)

    Chen, Hua; Hey, Jody; Slatkin, Montgomery

    2015-02-01

    Recent positive selection can increase the frequency of an advantageous mutant rapidly enough that a relatively long ancestral haplotype remains intact around it. We present a hidden Markov model (HMM) to identify such haplotype structures. With the HMM-identified haplotype structures, a population genetic model for the extent of ancestral haplotypes is then adopted to infer the selection intensity and the allele age. Simulations show that this method can detect selection under a wide range of conditions and has higher power than the existing frequency-spectrum-based method. In addition, it provides good estimates of the selection coefficients and allele ages for strong selection. The method analyzes large data sets in a reasonable amount of running time. This method is applied to HapMap III data for a genome scan, and identifies a list of candidate regions putatively under recent positive selection. It is also applied to several genes known to be under recent positive selection, including the LCT, KITLG and TYRP1 genes in Northern Europeans, and OCA2 in East Asians, to estimate their allele ages and selection coefficients. Copyright © 2014 Elsevier Inc. All rights reserved.
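
    A stripped-down version of the decoding step, with invented parameters, is shown below: the Viterbi algorithm runs over a two-state HMM in which each site either matches the putative ancestral core haplotype or does not.

```python
import numpy as np

# Two hidden states along a chromosome: 1 = inside the extended ancestral
# haplotype, 0 = background. Emissions: 1 if the site matches the putative
# core haplotype, 0 otherwise. All parameters are illustrative.
obs = np.array([1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0])
log_trans = np.log(np.array([[0.95, 0.05],
                             [0.10, 0.90]]))
log_emit = np.log(np.array([[0.5, 0.5],     # background: match by chance
                            [0.1, 0.9]]))   # ancestral: matches are likely
log_start = np.log(np.array([0.5, 0.5]))

n, S = obs.size, 2
V = np.full((n, S), -np.inf)
ptr = np.zeros((n, S), dtype=int)
V[0] = log_start + log_emit[:, obs[0]]
for t in range(1, n):
    for s in range(S):
        scores = V[t - 1] + log_trans[:, s]
        ptr[t, s] = np.argmax(scores)
        V[t, s] = scores.max() + log_emit[s, obs[t]]

path = np.zeros(n, dtype=int)               # backtrack the best state path
path[-1] = np.argmax(V[-1])
for t in range(n - 2, -1, -1):
    path[t] = ptr[t + 1, path[t + 1]]
print("decoded states:", path)
```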

  17. A finite volume alternate direction implicit approach to modeling selective laser melting

    DEFF Research Database (Denmark)

    Hattel, Jesper Henri; Mohanty, Sankhya

    2013-01-01

    Over the last decade, several studies have attempted to develop thermal models for analyzing the selective laser melting process with a vision to predict thermal stresses, microstructures and resulting mechanical properties of manufactured products. While a holistic model addressing all involved...... is proposed for modeling single-layer and few-layer selective laser melting processes. The ADI technique is implemented and applied for two cases involving constant material properties and non-linear material behavior. The ADI FV method consumes less time while having comparable accuracy with respect to 3D
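
    To make the ADI idea concrete, the sketch below applies Peaceman-Rachford ADI with tridiagonal (Thomas-type) solves to a 2D transient conduction problem with a toy laser-like source; the geometry, properties, boundary handling, and source term are simplifications, not the paper's model.

```python
import numpy as np
from scipy.linalg import solve_banded

# Peaceman-Rachford ADI for 2D transient heat conduction with constant
# properties: a toy stand-in for one thermal step of an SLM model
nx = ny = 51
dx = dy = 1e-4             # grid spacing, m
alpha, dt = 5e-6, 1e-4     # thermal diffusivity m^2/s, time step s
rx, ry = alpha * dt / (2 * dx**2), alpha * dt / (2 * dy**2)

T = np.full((nx, ny), 300.0)     # ambient temperature, K (Dirichlet edges)
src = np.zeros_like(T)
src[nx // 2, ny // 2] = 5e4      # laser spot as a volumetric source, K/s (toy)

def tridiag(r, m):
    # Banded (1,1) matrix for the implicit sweep: (1 + 2r) on the diagonal
    ab = np.zeros((3, m))
    ab[0, 1:], ab[1, :], ab[2, :-1] = -r, 1 + 2 * r, -r
    return ab

ab_x, ab_y = tridiag(rx, nx - 2), tridiag(ry, ny - 2)
for _ in range(100):
    half = T.copy()
    # Half-step 1: implicit in x, explicit in y
    for j in range(1, ny - 1):
        rhs = (T[1:-1, j] + ry * (T[1:-1, j+1] - 2 * T[1:-1, j] + T[1:-1, j-1])
               + 0.5 * dt * src[1:-1, j])
        rhs[0] += rx * T[0, j]
        rhs[-1] += rx * T[-1, j]
        half[1:-1, j] = solve_banded((1, 1), ab_x, rhs)
    # Half-step 2: implicit in y, explicit in x
    for i in range(1, nx - 1):
        rhs = (half[i, 1:-1]
               + rx * (half[i+1, 1:-1] - 2 * half[i, 1:-1] + half[i-1, 1:-1])
               + 0.5 * dt * src[i, 1:-1])
        rhs[0] += ry * T[i, 0]
        rhs[-1] += ry * T[i, -1]
        T[i, 1:-1] = solve_banded((1, 1), ab_y, rhs)

print(f"peak temperature after {100 * dt:.3f} s: {T.max():.1f} K")
```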

  18. SOME USES OF MODELS OF QUANTITATIVE GENETIC SELECTION IN SOCIAL SCIENCE.

    Science.gov (United States)

    Weight, Michael D; Harpending, Henry

    2017-01-01

    The theory of selection of quantitative traits is widely used in evolutionary biology, agriculture and other related fields. The fundamental model known as the breeder's equation is simple, robust over short time scales, and it is often possible to estimate plausible parameters. In this paper it is suggested that the results of this model provide useful yardsticks for the description of social traits and the evaluation of transmission models. The differences on a standard personality test between samples of Old Order Amish and Indiana rural young men from the same county and the decline of homicide in Medieval Europe are used as illustrative examples of the overall approach. It is shown that the decline of homicide is unremarkable under a threshold model while the differences between rural Amish and non-Amish young men are too large to be a plausible outcome of simple genetic selection in which assortative mating by affiliation is equivalent to truncation selection.
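
    The yardstick itself is one line of algebra, R = h^2 * S; the worked example below, with illustrative numbers, computes the per-generation response under truncation selection.

```python
from scipy.stats import norm

# Breeder's equation: response R = h^2 * S, with selection differential
# S = i * sigma_P under truncation selection (all numbers illustrative)
h2, sigma_p = 0.4, 10.0    # heritability, phenotypic standard deviation
p = 0.2                    # proportion of the population selected to breed

z = norm.ppf(1 - p)        # truncation point on the standard normal
i = norm.pdf(z) / p        # selection intensity
S = i * sigma_p
R = h2 * S
print(f"intensity i={i:.2f}, differential S={S:.1f}, "
      f"response per generation R={R:.1f}")
```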

  19. Mutation-selection dynamics and error threshold in an evolutionary model for Turing machines.

    Science.gov (United States)

    Musso, Fabio; Feverati, Giovanni

    2012-01-01

    We investigate the mutation-selection dynamics for an evolutionary computation model based on Turing machines. The use of Turing machines allows for very simple mechanisms of code growth and code activation/inactivation through point mutations. To any value of the point mutation probability corresponds a maximum amount of active code that can be maintained by selection, and the Turing machines that reach it are said to be at the error threshold. Simulations with our model show that the Turing machine population evolves toward the error threshold. Mathematical descriptions of the model point out that this behaviour is due more to the mutation-selection dynamics than to the intrinsic nature of the Turing machines. This indicates that the result is much more general than the model considered here and could play a role in biological evolution as well. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
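
    A minimal quasispecies-style simulation, analogous in spirit though it uses bit strings rather than Turing machines, reproduces the error-threshold behavior: above a critical point-mutation probability the fit "master" type can no longer be maintained by selection.

```python
import numpy as np

rng = np.random.default_rng(9)
L, N, sigma = 20, 1000, 4.0      # genome length, population size, master fitness

def master_freq(mu, gens=300):
    pop = np.zeros((N, L), dtype=int)              # start at the master sequence
    for _ in range(gens):
        fitness = np.where((pop == 0).all(axis=1), sigma, 1.0)
        parents = rng.choice(N, size=N, p=fitness / fitness.sum())
        pop = pop[parents]
        flips = rng.random((N, L)) < mu            # per-site point mutations
        pop = np.where(flips, 1 - pop, pop)
    return np.mean((pop == 0).all(axis=1))         # surviving master fraction

# Deterministic threshold for this single-peak landscape: mu_c ~ ln(sigma) / L
print(f"predicted threshold mu_c ~ {np.log(sigma) / L:.3f}")
for mu in (0.01, 0.05, 0.07, 0.10):
    print(f"mu={mu:.2f}: master frequency = {master_freq(mu):.3f}")
```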

  20. A Multi-objective model for selection of projects to finance new enterprise SMEs in Colombia

    Directory of Open Access Journals (Sweden)

    J.R. Coronado-Hernández

    2011-10-01

    Full Text Available Purpose: This paper presents a multi-objective programming model for the selection of projects to finance new enterprise SMEs in Colombia with objectivity and transparency in every call. Approach: The model has four social objectives, subject to budget constraints and to the requirements of each call. The resolution procedure for the model is based on principles of goal programming. Findings: Projects are selected according to their impact within the country. Research limitations: The selection of projects is restricted by a legal framework, the terms of reference and the budget of each call. Practical implications: The projects must be viable according to the characteristics of each call. Originality/value: The suggested model offers an alternative for entities that need to evaluate co-financing projects for the managerial development of SMEs with more objectivity and transparency in the assignment of resources.