WorldWideScience

Sample records for parsimonious structural model

  1. Parsimonious relevance models

    NARCIS (Netherlands)

    Meij, E.; Weerkamp, W.; Balog, K.; de Rijke, M.; Myaeng, S.-H.; Oard, D.W.; Sebastiani, F.; Chua, T.-S.; Leong, M.-K.

    2008-01-01

    We describe a method for applying parsimonious language models to re-estimate the term probabilities assigned by relevance models. We apply our method to six topic sets from test collections in five different genres. Our parsimonious relevance models (i) improve retrieval effectiveness in terms of

  2. A unifying model of genome evolution under parsimony.

    Science.gov (United States)

    Paten, Benedict; Zerbino, Daniel R; Hickey, Glenn; Haussler, David

    2014-06-19

    Parsimony and maximum likelihood methods of phylogenetic tree estimation and parsimony methods for genome rearrangements are central to the study of genome evolution, yet to date they have largely been pursued in isolation. We present a data structure called a history graph that offers a practical basis for the analysis of genome evolution. It conceptually simplifies the study of parsimonious evolutionary histories by representing both substitutions and double cut and join (DCJ) rearrangements in the presence of duplications. The problem of constructing parsimonious history graphs thus subsumes related maximum parsimony problems in the fields of phylogenetic reconstruction and genome rearrangement. We show that tractable functions can be used to define upper and lower bounds on the minimum number of substitutions and DCJ rearrangements needed to explain any history graph. These bounds become tight for a special type of unambiguous history graph called an ancestral variation graph (AVG), which constrains in its combinatorial structure the number of operations required. We finally demonstrate that for a given history graph G, a finite set of AVGs describes all parsimonious interpretations of G, and this set can be explored with a few sampling moves. This theoretical study describes a model in which the inference of genome rearrangements and phylogeny can be unified under parsimony.

  3. Parsimonious Language Models for Information Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Robertson, Stephen; Zaragoza, Hugo

    We systematically investigate a new approach to estimating the parameters of language models for information retrieval, called parsimonious language models. Parsimonious language models explicitly address the relation between levels of language models that are typically used for smoothing. As such,

  4. Quality Quandaries: Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important...... consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy....

  5. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that makes it possible to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  6. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network, and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as the Sankoff and Fitch algorithms, extend naturally to networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
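
    The abstract's remark that the Sankoff and Fitch algorithms extend naturally to networks is easier to follow against the tree version. Below is a minimal sketch of the Sankoff small-parsimony recursion on a binary tree under a user-supplied cost matrix; the Node class, unit costs, and toy tree are illustrative assumptions, not code from the paper.

```python
# Minimal Sankoff small-parsimony sketch on a binary tree (toy example).
import math

class Node:
    def __init__(self, state=None, left=None, right=None):
        self.state, self.left, self.right = state, left, right

def sankoff(node, states, cost):
    """Return {state: minimal substitution cost of the subtree rooted here}."""
    if node.left is None and node.right is None:          # leaf: observed state
        return {s: (0.0 if s == node.state else math.inf) for s in states}
    L = sankoff(node.left, states, cost)
    R = sankoff(node.right, states, cost)
    return {s: min(cost[(s, t)] + L[t] for t in states) +
               min(cost[(s, t)] + R[t] for t in states)
            for s in states}

states = "ACGT"
cost = {(a, b): 0 if a == b else 1 for a in states for b in states}  # unit costs
tree = Node(left=Node(state="A"),
            right=Node(left=Node(state="C"), right=Node(state="A")))
scores = sankoff(tree, states, cost)
print("optimal parsimony score:", min(scores.values()))   # -> 1 (one substitution)
```

    Extending this recursion to a network, as the abstract describes, requires a rule for the reticulate vertices, where the two incoming edges may disagree on the reconstructed state.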

  7. A simplified parsimonious higher order multivariate Markov chain model

    Science.gov (United States)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for SPHOMMCM is given. Numerical experiments show the effectiveness of SPHOMMCM.
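
    The abstract above is terse, so as assumed background, here is one prediction step of the first-order multivariate Markov chain framework (in the style of Ching et al.) that parsimonious higher-order variants such as SPHOMMCM restrict; the toy matrices and weights are invented for illustration.

```python
# One prediction step of a multivariate Markov chain (background sketch).
import numpy as np

def mmc_step(x, P, lam):
    """x[k]: state-probability vector of chain k at time t; P[j][k]: column-
    stochastic transition matrix from chain k to chain j; lam[j, k]: weights
    with each row summing to one. Returns predicted distributions at t+1."""
    s = len(x)
    return [sum(lam[j, k] * (P[j][k] @ x[k]) for k in range(s)) for j in range(s)]

# Two chains over three states (toy numbers).
P = [[np.full((3, 3), 1 / 3), np.eye(3)],
     [np.eye(3), np.full((3, 3), 1 / 3)]]
lam = np.array([[0.7, 0.3],
                [0.4, 0.6]])
x = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
print(mmc_step(x, P, lam))   # each output vector still sums to 1
```

    Parsimonious variants reduce the number of free parameters in `lam` and `P` (e.g., the tridiagonal restriction of the sibling paper below).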

  8. A tridiagonal parsimonious higher order multivariate Markov chain model

    Science.gov (United States)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, we present a tridiagonal parsimonious higher-order multivariate Markov chain model (TPHOMMCM). Moreover, an estimation method for the parameters in TPHOMMCM is given. Numerical experiments illustrate the effectiveness of TPHOMMCM.

  9. Dirichlet Process Parsimonious Mixtures for clustering

    OpenAIRE

    Chamroukhi, Faicel; Bartcus, Marius; Glotin, Hervé

    2015-01-01

    The parsimonious Gaussian mixture models, which exploit an eigenvalue decomposition of the group covariance matrices of the Gaussian mixture, have shown particular success in cluster analysis. Their estimation is in general performed by maximum likelihood estimation and has also been considered from a parametric Bayesian perspective. We propose new Dirichlet Process Parsimonious mixtures (DPPM) which represent a Bayesian nonparametric formulation of these parsimonious Gaussian mixtur...
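
    The eigenvalue decomposition the abstract refers to is, in the standard parsimonious Gaussian mixture family (Banfield and Raftery; Celeux and Govaert), a factoring of each cluster covariance; a sketch in LaTeX:

```latex
% Each component covariance is decomposed as
\Sigma_k = \lambda_k \, D_k \, A_k \, D_k^{\top},
% where \lambda_k > 0 sets the cluster volume, the orthogonal matrix D_k its
% orientation, and the diagonal matrix A_k (with \det A_k = 1) its shape.
% The parsimonious family arises by constraining \lambda_k, D_k, A_k to be
% shared across clusters, spherical, or diagonal.
```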

  10. A class representative model for Pure Parsimony Haplotyping under uncertain data.

    Directory of Open Access Journals (Sweden)

    Daniele Catanzaro

    The Pure Parsimony Haplotyping (PPH) problem is an NP-hard combinatorial optimization problem that consists of finding the minimum number of haplotypes necessary to explain a given set of genotypes. PPH has attracted growing attention in recent years due to its importance in the analysis of fine-scale genetic data. Its application fields range from mapping complex disease genes to inferring population histories, as well as drug design, functional genomics, and pharmacogenetics. In this article we investigate, for the first time, a recent version of PPH called the Pure Parsimony Haplotype problem under Uncertain Data (PPH-UD). This version mainly arises when the input genotypes are not accurate, i.e., when some single nucleotide polymorphisms are missing or affected by errors. We propose an exact approach to solving PPH-UD based on an extended version of the class representative model of Catanzaro et al. [1] for PPH, currently the state-of-the-art integer programming model for PPH. The model is efficient, accurate, compact, polynomial-sized, easy to implement, solvable with any solver for mixed integer programming, and usable in all those cases for which the parsimony criterion is well suited for haplotype estimation.
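
    For intuition about what PPH minimizes, here is a naive enumeration-based integer program, not the class representative model of the paper: candidate haplotype pairs are enumerated per genotype and a binary variable marks each haplotype used. All names and the toy instance are illustrative; the `pulp` solver library is assumed available.

```python
# Naive PPH integer program: minimise the number of distinct haplotypes.
# Genotype coding: 0/1 = homozygous site, 2 = heterozygous site.
import itertools
import pulp

def resolving_pairs(genotype):
    """Enumerate unordered (h1, h2) haplotype pairs that explain a genotype."""
    het = [i for i, g in enumerate(genotype) if g == 2]
    base = [g if g != 2 else 0 for g in genotype]
    pairs = set()
    for bits in itertools.product((0, 1), repeat=len(het)):
        h1, h2 = list(base), list(base)
        for pos, b in zip(het, bits):
            h1[pos], h2[pos] = b, 1 - b
        pairs.add(tuple(sorted((tuple(h1), tuple(h2)))))
    return sorted(pairs)

genotypes = [(2, 2, 0), (0, 2, 0), (2, 0, 0)]          # toy instance
pairs = {g: resolving_pairs(g) for g in genotypes}
haps = sorted({h for ps in pairs.values() for p in ps for h in p})

prob = pulp.LpProblem("PPH", pulp.LpMinimize)
x = {h: pulp.LpVariable(f"x_{i}", cat="Binary") for i, h in enumerate(haps)}
y = {(g, p): pulp.LpVariable(f"y_{i}_{j}", cat="Binary")
     for i, g in enumerate(genotypes) for j, p in enumerate(pairs[g])}

prob += pulp.lpSum(x.values())                          # haplotypes used
for g in genotypes:
    prob += pulp.lpSum(y[g, p] for p in pairs[g]) == 1  # resolve each genotype
    for p in pairs[g]:
        for h in set(p):
            prob += y[g, p] <= x[h]                     # chosen pair uses its haplotypes

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("minimum number of haplotypes:", int(pulp.value(prob.objective)))
```

    This formulation grows exponentially with the number of heterozygous sites, which is exactly why compact polynomial-sized models such as the one in the paper matter.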

  11. Data driven discrete-time parsimonious identification of a nonlinear state-space model for a weakly nonlinear system with short data record

    Science.gov (United States)

    Relan, Rishi; Tiels, Koen; Marconato, Anna; Dreesen, Philippe; Schoukens, Johan

    2018-05-01

    Many real world systems exhibit a quasi-linear or weakly nonlinear behavior during normal operation, and a hard saturation effect for high peaks of the input signal. In this paper, a methodology to identify a parsimonious discrete-time nonlinear state-space model (NLSS) for a nonlinear dynamical system from a relatively short data record is proposed. The capability of the NLSS model structure is demonstrated by introducing two different initialisation schemes, one of them using multivariate polynomials. In addition, a method using first-order information of the multivariate polynomials and tensor decomposition is employed to obtain a parsimonious decoupled representation of the set of multivariate real polynomials estimated during the identification of the NLSS model. Finally, the experimental verification of the model structure is done on the cascaded water tanks benchmark identification problem.

  12. Seeking parsimony in hydrology and water resources technology

    Science.gov (United States)

    Koutsoyiannis, D.

    2009-04-01

    The principle of parsimony, also known as the principle of simplicity, the principle of economy and Ockham's razor, advises scientists to prefer the simplest theory among those that fit the data equally well. In this, it is an epistemic principle, but it reflects an ontological characterization that the universe is ultimately parsimonious. Is this principle useful, and can it really be reconciled with, and implemented in, our modelling approaches of complex hydrological systems, whose elements and events are extraordinarily numerous, different and unique? The answer underlying the mainstream hydrological research of the last two decades seems to be negative. Hopes were invested in the power of computers that would enable faithful and detailed representation of the diverse system elements and the hydrological processes, based merely on "first principles" and resulting in "physically-based" models that tend to approach the complexity of the real world systems. Today the account of such research endeavour seems not to be positive, as it improved neither model predictive capacity nor comprehension of processes. A return to parsimonious modelling seems again to be the promising route. The experience from recent research and from comparisons of parsimonious and complicated models indicates that the former can facilitate insight and comprehension, improve accuracy and predictive capacity, and increase efficiency. In addition, and despite the aspiration that "physically based" models will have lower data requirements and even ultimately become "data-free", parsimonious models require fewer data to achieve the same accuracy as more complicated models. Naturally, the concepts that reconcile the simplicity of parsimonious models with the complexity of hydrological systems are probability theory and statistics. Probability theory provides the theoretical basis for moving from a microscopic to a macroscopic view of phenomena, by mapping sets of diverse elements and events of hydrological

  13. A Parsimonious Bootstrap Method to Model Natural Inflow Energy Series

    Directory of Open Access Journals (Sweden)

    Fernando Luiz Cyrino Oliveira

    2014-01-01

    The Brazilian energy generation and transmission system is quite peculiar in its dimension and characteristics. As such, it can be considered unique in the world. It is a high-dimension hydrothermal system with a huge share of hydro plants. Such strong dependency on hydrological regimes implies uncertainties related to energy planning, requiring adequate modeling of the hydrological time series. This is carried out via stochastic simulations of monthly inflow series using the family of Periodic Autoregressive models, PAR(p), one for each period (month) of the year. This paper shows the problems in fitting these models under the current system, particularly the identification of the autoregressive order "p" and the corresponding parameter estimation. A new approach is then proposed to set both the model order and the parameter estimates of the PAR(p) models, using a nonparametric computational technique known as the bootstrap. This technique allows the estimation of reliable confidence intervals for the model parameters. The results obtained using the Parsimonious Bootstrap Method of Moments (PBMOM) produced not only more parsimonious model orders but also adherent stochastic scenarios and, in the long range, lead to a better use of water resources in the energy operation planning.
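
    As a simplified stand-in for the PAR(p)/PBMOM procedure described above, the sketch below shows the core idea of a residual bootstrap for autoregressive parameters: fit, resample residuals, regenerate, refit, and read off percentile confidence intervals. The AR(2) series and all settings are invented for illustration.

```python
# Residual-bootstrap confidence intervals for AR(p) coefficients (sketch).
import numpy as np

rng = np.random.default_rng(0)

def fit_ar(y, p):
    """Least-squares AR(p) fit; returns coefficients and residuals."""
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef, y[p:] - X @ coef

# Synthetic AR(2) series: y_t = 0.6 y_{t-1} - 0.3 y_{t-2} + e_t.
true = np.array([0.6, -0.3])
y = np.zeros(500)
for t in range(2, 500):
    y[t] = true @ y[t - 2:t][::-1] + rng.normal()

coef, resid = fit_ar(y, 2)
boot = []
for _ in range(500):                         # bootstrap replicates
    e = rng.choice(resid, size=len(y))       # resample residuals
    yb = np.zeros(len(y))
    for t in range(2, len(y)):
        yb[t] = coef @ yb[t - 2:t][::-1] + e[t]
    boot.append(fit_ar(yb, 2)[0])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("95% CIs for AR coefficients:", list(zip(lo.round(3), hi.round(3))))
```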

  14. Quality Assurance Based on Descriptive and Parsimonious Appearance Models

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Eiríksson, Eyþór Rúnar; Kristensen, Rasmus Lyngby

    2015-01-01

    In this position paper, we discuss the potential benefits of using appearance models in additive manufacturing, metal casting, wind turbine blade production, and 3D content acquisition. Current state of the art in acquisition and rendering of appearance cannot easily be used for quality assurance...... in these areas. The common denominator is the need for descriptive and parsimonious appearance models. By ‘parsimonious’ we mean with few parameters so that a model is useful both for fast acquisition, robust fitting, and fast rendering of appearance. The word ‘descriptive’ refers to the fact that a model should......

  15. A parsimonious dynamic model for river water quality assessment.

    Science.gov (United States)

    Mannina, Giorgio; Viviani, Gaspare

    2010-01-01

    Water quality modelling is of crucial importance for the assessment of physical, chemical, and biological changes in water bodies. Mathematical approaches to water modelling have become more prevalent over recent years. Different model types, ranging from detailed physical models to simplified conceptual models, are available. A possible middle ground between detailed and simplified models is parsimonious models, which represent the simplest approach that fits the application. The appropriate modelling approach depends on the research goal as well as on the data available for correct model application. When data are inadequate, it is mandatory to focus on a simple river water quality model rather than a detailed one. The study presents a parsimonious river water quality model to evaluate the propagation of pollutants in natural rivers. The model is made up of two sub-models: a quantity one and a quality one. The model employs a river schematisation that considers different stretches according to the geometric characteristics and to the gradient of the river bed. Each stretch is represented with a conceptual model of a series of linear channels and reservoirs. The channels determine the delay in the pollution wave and the reservoirs cause its dispersion. To assess the river water quality, the model employs four state variables: DO, BOD, NH4, and NO. The model was applied to the Savena River (Italy), which is the focus of a European-financed project in which quantity and quality data were gathered. A sensitivity analysis of the model output to the model input or parameters was done based on the Generalised Likelihood Uncertainty Estimation methodology. The results demonstrate the suitability of such a model as a tool for river water quality management.
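
    The "linear channels and reservoirs" building block the abstract mentions can be sketched in a few lines: a channel is a pure delay and each linear reservoir obeys dS/dt = I - S/K with outflow Q = S/K, which disperses the pulse. The routing scheme and all parameter values below are assumptions for illustration.

```python
# Pure-delay channel followed by a cascade of linear reservoirs (sketch).
import numpy as np

def route(inflow, delay, ks, dt=1.0):
    """Delay the inflow by `delay` steps, then pass it through linear
    reservoirs with residence times `ks`; returns the routed outflow."""
    q = np.concatenate([np.zeros(delay), inflow])   # channel: pure delay
    for K in ks:                                    # each reservoir disperses
        S, out = 0.0, []
        for qin in q:
            S += dt * (qin - S / K)                 # dS/dt = I - S/K (explicit Euler)
            out.append(S / K)
        q = np.array(out)
    return q

pulse = np.zeros(50)
pulse[0] = 10.0                                     # unit pollution pulse
print(route(pulse, delay=3, ks=[4.0, 4.0]).round(2))  # delayed, smeared wave
```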

  16. The plunge in German electricity futures prices – Analysis using a parsimonious fundamental model

    International Nuclear Information System (INIS)

    Kallabis, Thomas; Pape, Christian; Weber, Christoph

    2016-01-01

    The German market has seen a plunge in wholesale electricity prices from 2007 until 2014, with base futures prices dropping by more than 40%. This is frequently attributed to the unexpectedly high increase in renewable power generation. Using a parsimonious fundamental model, we determine the respective impact of supply and demand shocks on electricity futures prices. The methodology is based on a piecewise linear approximation of the supply stack and time-varying price-inelastic demand. This parsimonious model is able to replicate electricity futures prices and discover non-linear dependencies in futures price formation. We show that emission prices have a higher impact on power prices than renewable penetration. Changes in renewables, demand and installed capacities turn out to be similarly important for explaining the decrease in operation margins of conventional power plants. We thus argue for the establishment of an independent authority to stabilize emission prices. Highlights:
    • We build a parsimonious fundamental model based on a piecewise linear bid stack.
    • We use the model to investigate impact factors for the plunge in German futures prices.
    • Largest impact by CO2 price developments, followed by demand and renewable feed-in.
    • Power plant operating profits are strongly affected by demand and renewables.
    • We argue that stabilizing CO2 emission prices could provide better market signals.
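
    The piecewise linear bid-stack idea reduces to a merit-order calculation: sort plants by marginal cost (fuel plus CO2 pass-through), stack their capacities, and let the marginal unit serving the residual, price-inelastic demand set the price. The plant data and pass-through factors below are invented toy numbers, not values from the paper.

```python
# Toy merit-order clearing with CO2 pass-through (illustrative sketch).
# (name, capacity in GW, fuel cost in EUR/MWh, tCO2 per MWh)
plants = [("lignite", 20, 15, 1.1),
          ("coal",    25, 25, 0.9),
          ("gas",     30, 45, 0.4)]

def clearing_price(demand_gw, renewables_gw, co2_price):
    residual = max(demand_gw - renewables_gw, 0.0)          # inelastic residual demand
    stack = sorted((fuel + co2_price * ef, cap) for _, cap, fuel, ef in plants)
    cum = 0.0
    for mc, cap in stack:                                   # walk up the supply stack
        cum += cap
        if cum >= residual:
            return mc                                       # marginal unit sets price
    return float("nan")                                     # demand exceeds capacity

for co2 in (5, 20):
    print(co2, "EUR/tCO2 ->", clearing_price(60, 15, co2), "EUR/MWh")
```

    Raising the CO2 price reorders and shifts the stack, which is the non-linear channel through which emission prices dominate renewable penetration in the paper's decomposition.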

  17. A simplified parsimonious higher order multivariate Markov chain model with new convergence condition

    Science.gov (United States)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, we present a simplified parsimonious higher-order multivariate Markov chain model with a new convergence condition (SPHOMMCM-NCC). Moreover, an estimation method for the parameters in SPHOMMCM-NCC is given. Numerical experiments illustrate the effectiveness of SPHOMMCM-NCC.

  18. Parsimonious Surface Wave Interferometry

    KAUST Repository

    Li, Jing

    2017-10-24

    To decrease the recording time of a 2D seismic survey from a few days to one hour or less, we present a parsimonious surface-wave interferometry method. Interferometry allows for the creation of a large number of virtual shot gathers from just two reciprocal shot gathers by crosscoherence of trace pairs, where the virtual surface waves can be inverted for the S-wave velocity model by wave-equation dispersion inversion (WD). Synthetic and field data tests suggest that parsimonious wave-equation dispersion inversion (PWD) gives S-velocity tomograms that are comparable to those obtained from a full survey with a shot at each receiver. The limitation of PWD is that the virtual data lose some information so that the resolution of the S-velocity tomogram can be modestly lower than that of the S-velocity tomogram inverted from a conventional survey.
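
    The crosscoherence of trace pairs, the operation that turns two reciprocal shot gathers into many virtual gathers, is a spectrally whitened cross-correlation. Below is a generic one-trace-pair sketch; the whitening constant `eps` and the toy wavelet are assumptions, not the paper's processing parameters.

```python
# Crosscoherence of two traces: whitened cross-correlation (sketch).
import numpy as np

def crosscoherence(a, b, eps=1e-3):
    """Return the crosscoherence of traces a and b (same length)."""
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    num = A * np.conj(B)
    mag = np.abs(A) * np.abs(B)
    return np.fft.irfft(num / (mag + eps * mag.max()), n=len(a))

t = np.arange(512) * 0.002                      # 2 ms sampling
wavelet = np.exp(-((t - 0.2) / 0.01) ** 2)      # pulse at 0.2 s
shifted = np.roll(wavelet, 50)                  # same pulse 100 ms later
virt = crosscoherence(shifted, wavelet)
print("virtual arrival at sample:", np.argmax(virt))   # ~50: the time shift survives
```

    The whitening removes the (shared) source wavelet spectrum, which is why the virtual surface waves retain the traveltime differences that wave-equation dispersion inversion needs.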

  19. Parsimonious Surface Wave Interferometry

    KAUST Repository

    Li, Jing; Hanafy, Sherif; Schuster, Gerard T.

    2017-01-01

    To decrease the recording time of a 2D seismic survey from a few days to one hour or less, we present a parsimonious surface-wave interferometry method. Interferometry allows for the creation of a large number of virtual shot gathers from just two reciprocal shot gathers by crosscoherence of trace pairs, where the virtual surface waves can be inverted for the S-wave velocity model by wave-equation dispersion inversion (WD). Synthetic and field data tests suggest that parsimonious wave-equation dispersion inversion (PWD) gives S-velocity tomograms that are comparable to those obtained from a full survey with a shot at each receiver. The limitation of PWD is that the virtual data lose some information so that the resolution of the S-velocity tomogram can be modestly lower than that of the S-velocity tomogram inverted from a conventional survey.

  20. SEAPODYM-LTL: a parsimonious zooplankton dynamic biomass model

    Science.gov (United States)

    Conchon, Anna; Lehodey, Patrick; Gehlen, Marion; Titaud, Olivier; Senina, Inna; Séférian, Roland

    2017-04-01

    Mesozooplankton organisms are of critical importance for the understanding of the early life history of most fish stocks, as well as the nutrient cycles in the ocean. Ongoing climate change and the need for improved approaches to the management of living marine resources have driven recent advances in zooplankton modelling. The classical modeling approach tends to describe the whole biogeochemical and plankton cycle with increasing complexity. We propose here a different and parsimonious zooplankton dynamic biomass model (SEAPODYM-LTL) that is cost-efficient and can be advantageously coupled with primary production estimated either from satellite-derived ocean color data or biogeochemical models. In addition, the adjoint code of the model is developed, allowing a robust optimization approach for estimating the few parameters of the model. In this study, we run the first optimization experiments using a global database of climatological zooplankton biomass data and we make a comparative analysis to assess the importance of resolution and primary production inputs on model fit to observations. We also compare SEAPODYM-LTL outputs to those produced by a more complex biogeochemical model (PISCES) but sharing the same physical forcings.

  1. A new mathematical modeling for pure parsimony haplotyping problem.

    Science.gov (United States)

    Feizabadi, R; Bagherian, M; Vaziri, H R; Salahi, M

    2016-11-01

    The pure parsimony haplotyping (PPH) problem is important in bioinformatics because rational haplotype inference plays an important role in the analysis of genetic data and in mapping complex genetic diseases such as Alzheimer's disease and heart disorders. Haplotypes and genotypes are m-length sequences. Although several integer programming models have already been presented for the PPH problem, its NP-hardness has rendered those models ineffective on real instances, especially instances with many heterozygous sites. In this paper, we assign a corresponding number to each haplotype and genotype and, based on those numbers, we set up a mixed integer programming model. Using numbers instead of sequences leads to lower complexity of the new model in comparison with previous models, in that it has neither constraints nor variables corresponding to heterozygous nucleotide sites. Experimental results confirm the efficiency of the new model in producing better solutions in comparison to two state-of-the-art haplotyping approaches. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Reconstructing phylogenetic networks using maximum parsimony.

    Science.gov (United States)

    Nakhleh, Luay; Jin, Guohua; Zhao, Fengmei; Mellor-Crummey, John

    2005-01-01

    Phylogenies - the evolutionary histories of groups of organisms - are one of the most widely used tools throughout the life sciences, as well as objects of research within systematics, evolutionary biology, epidemiology, etc. Almost every tool devised to date to reconstruct phylogenies produces trees; yet it is widely understood and accepted that trees oversimplify the evolutionary histories of many groups of organisms, most prominently bacteria (because of horizontal gene transfer) and plants (because of hybrid speciation). Various methods and criteria have been introduced for phylogenetic tree reconstruction. Parsimony is one of the most widely used and studied criteria, and various accurate and efficient heuristics for reconstructing trees based on parsimony have been devised. Jotun Hein suggested a straightforward extension of the parsimony criterion to phylogenetic networks. In this paper we formalize this concept, and provide the first experimental study of the quality of parsimony as a criterion for constructing and evaluating phylogenetic networks. Our results show that, when extended to phylogenetic networks, the parsimony criterion produces promising results. In a great majority of the cases in our experiments, the parsimony criterion accurately predicts the numbers and placements of non-tree events.

  3. Integrating Leadership Models Toward a More Comprehensive and Parsimonious Model

    Directory of Open Access Journals (Sweden)

    Miswanto Miswanti

    2016-06-01

    Through the leadership model offered by Locke et al. (1991), we can say that how good the vision of leaders in an organization is depends on how good the leaders' motives and traits, knowledge, skills, and abilities are. In turn, how well the vision is implemented by the leader depends on the leader's motives and traits, knowledge, skills, abilities, and vision. Strategic Leadership by Davies (1991) states that implementing the vision through strategic leadership carries a much more complete meaning than what Locke et al. describe in the fourth stage of leadership. Thus, the vision-implementation aspect of Locke et al. (1991) is incomplete compared with the implementation of the vision according to Davies (1991). With the considerations mentioned above, this article attempts to combine the leadership model of Locke et al. and the strategic leadership of Davies. With this modification, a more comprehensive and parsimonious model of leadership is expected.

  4. Beyond technology acceptance to effective technology use: a parsimonious and actionable model.

    Science.gov (United States)

    Holahan, Patricia J; Lesselroth, Blake J; Adams, Kathleen; Wang, Kai; Church, Victoria

    2015-05-01

    To develop and test a parsimonious and actionable model of effective technology use (ETU). Cross-sectional survey of primary care providers (n = 53) in a large integrated health care organization that recently implemented new medication reconciliation technology. Surveys assessed 5 technology-related perceptions (compatibility with work values, implementation climate, compatibility with work processes, perceived usefulness, and ease of use) and 1 outcome variable, ETU. ETU was measured as both consistency and quality of technology use. Compatibility with work values and implementation climate were found to have differential effects on consistency and quality of use. When implementation climate was strong, consistency of technology use was high. However, quality of technology use was high only when implementation climate was strong and values compatibility was high. This is an important finding and highlights the importance of users' workplace values as a key determinant of quality of use. To extend our effectiveness in implementing new health care information technology, we need parsimonious models that include actionable determinants of ETU and account for the differential effects of these determinants on the multiple dimensions of ETU. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. Maximum parsimony, substitution model, and probability phylogenetic trees.

    Science.gov (United States)

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM), and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time, omitting the unobservable substitutions that actually occurred in the evolutionary history. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees, and the trees reconstructed in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.

  6. Time-Dependent-Asymmetric-Linear-Parsimonious Ancestral State Reconstruction.

    Science.gov (United States)

    Didier, Gilles

    2017-10-01

    The time-dependent-asymmetric-linear parsimony is an ancestral state reconstruction method which extends the standard linear parsimony (a.k.a. Wagner parsimony) approach by taking into account both branch lengths and asymmetric evolutionary costs for reconstructing quantitative characters (asymmetric costs amount to assuming an evolutionary trend toward the direction with the lowest cost). A formal study of the influence of the asymmetry parameter shows that the time-dependent-asymmetric-linear parsimony infers states which are all taken among the known states, except for some degenerate cases corresponding to special values of the asymmetry parameter. This remarkable property holds in particular for the Wagner parsimony. This study leads to a polynomial algorithm which determines, and provides a compact representation of, the parametric reconstruction of a phylogenetic tree, that is for all the unknown nodes, the set of all the possible reconstructed states associated with the asymmetry parameters leading to them. The time-dependent-asymmetric-linear parsimony is finally illustrated with the parametric reconstruction of the body size of cetaceans.

  7. Parsimonious Refraction Interferometry and Tomography

    KAUST Repository

    Hanafy, Sherif

    2017-02-04

    We present parsimonious refraction interferometry and tomography where a densely populated refraction data set can be obtained from two reciprocal and several infill shot gathers. The assumptions are that the refraction arrivals are head waves, and that a pair of reciprocal shot gathers and several infill shot gathers are recorded over the line of interest. Refraction traveltimes from these shot gathers are picked and spawned into O(N²) virtual refraction traveltimes generated by N virtual sources, where N is the number of geophones in the 2D survey. The virtual traveltimes can be inverted to give the velocity tomogram. This enormous increase in the number of traveltime picks and associated rays, compared to the many fewer traveltimes from the reciprocal and infill shot gathers, allows for increased model resolution and a better condition number for the system of normal equations. A significant benefit is that the parsimonious survey and the associated traveltime picking are far less time consuming than those for a standard refraction survey with a dense distribution of sources.

  8. Assessing Internet addiction using the parsimonious Internet addiction components model - a preliminary study [forthcoming]

    OpenAIRE

    Kuss, DJ; Shorter, GW; Van Rooij, AJ; Griffiths, MD; Schoenmakers, T

    2014-01-01

    Internet usage has grown exponentially over the last decade. Research indicates that excessive Internet use can lead to symptoms associated with addiction. To date, assessment of potential Internet addiction has varied regarding populations studied and instruments used, making reliable prevalence estimations difficult. To overcome the present problems a preliminary study was conducted testing a parsimonious Internet addiction components model based on Griffiths’ addiction components (2005), i...

  9. Maximum parsimony on subsets of taxa.

    Science.gov (United States)

    Fischer, Mareike; Thatte, Bhalchandra D

    2009-09-21

    In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
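
    Fitch's algorithm, whose reconstruction accuracy the abstract studies, fits in a few lines: at each internal node take the intersection of the children's state sets if it is non-empty, otherwise the union, counting one substitution per union. The nested-tuple tree encoding below is an assumed toy representation.

```python
# Compact sketch of Fitch's maximum-parsimony ancestral state algorithm.
def fitch(tree):
    """Return (candidate ancestral states at the root, parsimony score)."""
    if isinstance(tree, str):               # leaf: a single observed state
        return {tree}, 0
    (l, cl), (r, cr) = fitch(tree[0]), fitch(tree[1])
    if l & r:                               # intersection: no extra change
        return l & r, cl + cr
    return l | r, cl + cr + 1               # union: one substitution

tree = ((("A", "A"), ("A", "C")), "C")
states, score = fitch(tree)
print(states, score)                        # -> {'A', 'C'} 2
```

    The paper's question can be phrased directly in these terms: whether running this recursion on a pruned tree (a subset of taxa) can return the true root state with higher probability than running it on the full tree.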

  10. Bayesian methods outperform parsimony but at the expense of precision in the estimation of phylogeny from discrete morphological data.

    Science.gov (United States)

    O'Reilly, Joseph E; Puttick, Mark N; Parry, Luke; Tanner, Alastair R; Tarver, James E; Fleming, James; Pisani, Davide; Donoghue, Philip C J

    2016-04-01

    Different analytical methods can yield competing interpretations of evolutionary history and, currently, there is no definitive method for phylogenetic reconstruction using morphological data. Parsimony has been the primary method for analysing morphological data, but there has been a resurgence of interest in the likelihood-based Mk-model. Here, we test the performance of the Bayesian implementation of the Mk-model relative to both equal and implied-weight implementations of parsimony. Using simulated morphological data, we demonstrate that the Mk-model outperforms equal-weights parsimony in terms of topological accuracy, and implied-weights performs the most poorly. However, the Mk-model produces phylogenies that have less resolution than parsimony methods. This difference in the accuracy and precision of parsimony and Bayesian approaches to topology estimation needs to be considered when selecting a method for phylogeny reconstruction. © 2016 The Authors.

  11. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

    In this study, a parsimonious scheme for a wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into the kernel extreme learning machine (KELM). In wavelet analysis, bases that are localized in time and frequency are used to represent various signals effectively. The wavelet kernel extreme learning machine (WELM) maximizes the capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm also incorporates significant wavelet kernel functions via iteration by virtue of the Householder matrix, thus producing a sparse solution that eases the computational burden and improves numerical stability. The experimental results achieved on a synthetic dataset and a gas furnace instance demonstrate that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
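
    As assumed background for the abstract, a kernel ELM reduces to a kernel ridge solve, beta = (I/C + K)^-1 T, with predictions k(x) beta. The sketch below pairs that with a commonly cited Morlet-style wavelet kernel; the kernel form, constants, and data are illustrative assumptions, not the PWKELM algorithm itself (which additionally selects kernel functions sparsely via Householder iterations).

```python
# Wavelet-kernel extreme learning machine, basic (non-parsimonious) sketch.
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    """Morlet-style wavelet kernel: product over dimensions of
    cos(1.75*d/a) * exp(-d^2 / (2 a^2)) for pairwise differences d."""
    D = X[:, None, :] - Y[None, :, :]
    return np.prod(np.cos(1.75 * D / a) * np.exp(-D**2 / (2 * a**2)), axis=2)

def kelm_fit(X, T, C=100.0, a=1.0):
    K = wavelet_kernel(X, X, a)
    beta = np.linalg.solve(K + np.eye(len(X)) / C, T)   # output weights
    return lambda Xnew: wavelet_kernel(Xnew, X, a) @ beta

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (100, 1))
T = np.sin(X[:, 0]) + 0.05 * rng.normal(size=100)
model = kelm_fit(X, T)
Xtest = np.linspace(-3, 3, 5)[:, None]
print(np.c_[np.sin(Xtest[:, 0]), model(Xtest)].round(2))  # truth vs fit
```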

  12. On the quirks of maximum parsimony and likelihood on phylogenetic networks.

    Science.gov (United States)

    Bryant, Christopher; Fischer, Mareike; Linz, Simone; Semple, Charles

    2017-03-21

    Maximum parsimony is one of the most frequently-discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become more and more apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer that are commonplace in many groups of organisms and result in mosaic patterns of relationships cannot be represented by a single phylogenetic tree. This is why phylogenetic networks, which can display such events, are attracting more and more interest in phylogenetic research. It is therefore necessary to extend concepts like maximum parsimony from phylogenetic trees to networks. Several suggestions for possible extensions can be found in recent literature, for instance the softwired and the hardwired parsimony concepts. In this paper, we analyze the so-called big parsimony problem under these two concepts, i.e. we investigate maximum parsimonious networks and analyze their properties. In particular, we show that finding a softwired maximum parsimony network is possible in polynomial time. We also show that the set of maximum parsimony networks for the hardwired definition always contains at least one phylogenetic tree. Lastly, we investigate some parallels of parsimony to different likelihood concepts on phylogenetic networks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Parsimonious refraction interferometry

    KAUST Repository

    Hanafy, Sherif

    2016-09-06

    We present parsimonious refraction interferometry where a densely populated refraction data set can be obtained from just two shot gathers. The assumptions are that the first arrivals are comprised of head waves and direct waves, and a pair of reciprocal shot gathers is recorded over the line of interest. The refraction traveltimes from these reciprocal shot gathers can be picked and decomposed into O(N²) refraction traveltimes generated by N virtual sources, where N is the number of geophones in the 2D survey. This enormous increase in the number of virtual traveltime picks and associated rays, compared to the 2N traveltimes from the two reciprocal shot gathers, allows for increased model resolution and better condition numbers in the normal equations. Also, a reciprocal survey is far less time consuming than a standard refraction survey with a dense distribution of sources.
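
    One way to see how two reciprocal shots can spawn O(N²) picks is the delay-time decomposition for head waves, t(S, x) ≈ τ_S + τ_x + |x − S| / v: summing the two reciprocal picks at a receiver pair and subtracting the shot-to-shot reciprocal time leaves a virtual receiver-to-receiver traveltime. The toy geometry and delay-time profile below are assumptions used only to verify the bookkeeping.

```python
# Virtual refraction traveltimes from two reciprocal shots (toy check).
import numpy as np

N, v, dx = 101, 2000.0, 10.0                 # geophones, refractor velocity, spacing
x = np.arange(N) * dx
tau = 0.02 + 0.005 * np.sin(x / 300.0)       # assumed delay times under receivers

tA = tau[0] + tau + x / v                    # head-wave picks, shot at left end
tB = tau[-1] + tau + (x[-1] - x) / v         # reciprocal shot at right end
T_recip = tau[0] + tau[-1] + x[-1] / v       # reciprocal (shot-to-shot) time

i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
t_virt = tA[j] + tB[i] - T_recip             # virtual pick: receiver i to receiver j

# Consistency: matches the direct delay-time prediction for every pair i < j.
pred = tau[i] + tau[j] + (x[j] - x[i]) / v
print(np.allclose(t_virt[i < j], pred[i < j]))   # True
```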

  14. Parsimonious refraction interferometry

    KAUST Repository

    Hanafy, Sherif; Schuster, Gerard T.

    2016-01-01

    We present parsimonious refraction interferometry where a densely populated refraction data set can be obtained from just two shot gathers. The assumptions are that the first arrivals are comprised of head waves and direct waves, and a pair of reciprocal shot gathers is recorded over the line of interest. The refraction traveltimes from these reciprocal shot gathers can be picked and decomposed into O(N²) refraction traveltimes generated by N virtual sources, where N is the number of geophones in the 2D survey. This enormous increase in the number of virtual traveltime picks and associated rays, compared to the 2N traveltimes from the two reciprocal shot gathers, allows for increased model resolution and better condition numbers in the normal equations. Also, a reciprocal survey is far less time consuming than a standard refraction survey with a dense distribution of sources.

  15. Bayesian, Maximum Parsimony and UPGMA Models for Inferring the Phylogenies of Antelopes Using Mitochondrial Markers

    OpenAIRE

    Khan, Haseeb A.; Arif, Ibrahim A.; Bahkali, Ali H.; Al Farhan, Ahmad H.; Al Homaidan, Ali A.

    2008-01-01

    This investigation aimed to compare the inference of antelope phylogenies resulting from the 16S rRNA, cytochrome-b (cyt-b) and d-loop segments of mitochondrial DNA using three different computational models: Bayesian (BA), maximum parsimony (MP) and unweighted pair group method with arithmetic mean (UPGMA). The respective nucleotide sequences of three Oryx species (Oryx leucoryx, Oryx dammah and Oryx gazella) and an out-group (Addax nasomaculatus) were aligned and subjected to B...
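
    Of the three methods compared, UPGMA is the simplest to demonstrate: it is exactly average-linkage hierarchical clustering on a pairwise distance matrix. The sketch below uses SciPy's `average` linkage; the distance values are invented placeholders, not the paper's mitochondrial distances.

```python
# UPGMA on a toy pairwise distance matrix via SciPy average linkage.
import numpy as np
from scipy.cluster.hierarchy import average
from scipy.spatial.distance import squareform

taxa = ["O. leucoryx", "O. dammah", "O. gazella", "Addax (outgroup)"]
D = np.array([[0.00, 0.10, 0.12, 0.30],
              [0.10, 0.00, 0.11, 0.29],
              [0.12, 0.11, 0.00, 0.31],
              [0.30, 0.29, 0.31, 0.00]])

Z = average(squareform(D))   # UPGMA linkage from the condensed distance matrix
print(Z)                     # merge order and heights define the UPGMA tree
```

    UPGMA assumes a molecular clock (ultrametric distances), which is one reason the paper contrasts it with the Bayesian and MP reconstructions.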

  16. Phylogenetic analysis using parsimony and likelihood methods.

    Science.gov (United States)

    Yang, Z

    1996-02-01

    The assumptions underlying the maximum-parsimony (MP) method of phylogenetic tree reconstruction were intuitively examined by studying the way the method works. Computer simulations were performed to corroborate the intuitive examination. Parsimony appears to involve very stringent assumptions concerning the process of sequence evolution, such as constancy of substitution rates between nucleotides, constancy of rates across nucleotide sites, and equal branch lengths in the tree. For practical data analysis, the requirement of equal branch lengths means similar substitution rates among lineages (the existence of an approximate molecular clock), relatively long interior branches, and also few species in the data. However, a small amount of evolution is neither a necessary nor a sufficient requirement of the method. The difficulties involved in the application of current statistical estimation theory to tree reconstruction were discussed, and it was suggested that the approach proposed by Felsenstein (1981, J. Mol. Evol. 17: 368-376) for topology estimation, as well as its many variations and extensions, differs fundamentally from the maximum likelihood estimation of a conventional statistical parameter. Evidence was presented showing that the Felsenstein approach does not share the asymptotic efficiency of the maximum likelihood estimator of a statistical parameter. Computer simulations were performed to study the probability that MP recovers the true tree under a hierarchy of models of nucleotide substitution; its performance relative to the likelihood method was especially noted. The results appeared to support the intuitive examination of the assumptions underlying MP. When a simple model of nucleotide substitution was assumed to generate data, the probability that MP recovers the true topology could be as high as, or even higher than, that for the likelihood method. When the assumed model became more complex and realistic, e.g., when substitution rates were

  17. Parsimonious Refraction Interferometry and Tomography

    KAUST Repository

    Hanafy, Sherif; Schuster, Gerard T.

    2017-01-01

    We present parsimonious refraction interferometry and tomography where a densely populated refraction data set can be obtained from two reciprocal and several infill shot gathers. The assumptions are that the refraction arrivals are head waves

  18. Direct maximum parsimony phylogeny reconstruction from genotype data

    OpenAIRE

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-01-01

    Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of ge...

  19. Efficient parsimony-based methods for phylogenetic network reconstruction.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2007-01-15

    Phylogenies - the evolutionary histories of groups of organisms - play a major role in representing relationships among biological entities. Although many biological processes can be effectively modeled as tree-like relationships, others, such as hybrid speciation and horizontal gene transfer (HGT), result in networks, rather than trees, of relationships. Hybrid speciation is a significant evolutionary mechanism in plants, fish and other groups of species. HGT plays a major role in bacterial genome diversification and is a significant mechanism by which bacteria develop resistance to antibiotics. Maximum parsimony is one of the most commonly used criteria for phylogenetic tree inference. Roughly speaking, inference based on this criterion seeks the tree that minimizes the amount of evolution. In 1990, Jotun Hein proposed using this criterion for inferring the evolution of sequences subject to recombination. Preliminary results on small synthetic datasets by Nakhleh et al. (2005) demonstrated the criterion's application to phylogenetic network reconstruction in general and HGT detection in particular. However, the naive algorithms used by the authors are inapplicable to large datasets due to their demanding computational requirements. Further, no rigorous theoretical analysis of computing the criterion was given, nor was it tested on biological data. In the present work we prove that the problem of scoring the parsimony of a phylogenetic network is NP-hard and provide an improved fixed-parameter tractable algorithm for it. Further, we devise efficient heuristics for parsimony-based reconstruction of phylogenetic networks. We test our methods on both synthetic and biological data (rbcL gene in bacteria) and obtain very promising results.

  20. Bootstrap-based Support of HGT Inferred by Maximum Parsimony

    Directory of Open Access Journals (Sweden)

    Nakhleh Luay

    2010-05-01

    Background Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. Results In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. Conclusions We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/, and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.

  1. Bootstrap-based support of HGT inferred by maximum parsimony.

    Science.gov (United States)

    Park, Hyun Jung; Jin, Guohua; Nakhleh, Luay

    2010-05-05

    Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.
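
    The bootstrap loop the abstract describes is generic: resample alignment columns with replacement, re-run the inference on each replicate, and tally how often each reticulation event is recovered. In the sketch below, `infer_reticulations` is a hypothetical stand-in for the MP network search, not an API of the NEPAL tool.

```python
# Nonparametric bootstrap support for inferred events (generic sketch).
import random
from collections import Counter

def bootstrap_support(alignment, infer_reticulations, n_reps=100, seed=0):
    """alignment: list of equal-length sequences.
    infer_reticulations: callable returning a set of inferred events.
    Returns {event: fraction of replicates in which it was inferred}."""
    rng = random.Random(seed)
    ncols = len(alignment[0])
    counts = Counter()
    for _ in range(n_reps):
        cols = [rng.randrange(ncols) for _ in range(ncols)]    # resample columns
        replicate = ["".join(seq[c] for c in cols) for seq in alignment]
        counts.update(infer_reticulations(replicate))          # tally events
    return {event: counts[event] / n_reps for event in counts}
```

    Events with support below a chosen cutoff are then discarded, which is the systematic replacement for the ad hoc threshold-on-improvement rule mentioned above.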

  2. Prediction of dissolved reactive phosphorus losses from small agricultural catchments: calibration and validation of a parsimonious model

    Directory of Open Access Journals (Sweden)

    C. Hahn

    2013-10-01

    Eutrophication of surface waters due to diffuse phosphorus (P) losses continues to be a severe water quality problem worldwide, causing the loss of ecosystem functions of the affected water bodies. Phosphorus in runoff often originates from only a small fraction of a catchment. Targeting mitigation measures at these critical source areas (CSAs) is expected to be most efficient and cost-effective, but requires suitable tools. Here we investigated the capability of the parsimonious Rainfall-Runoff-Phosphorus (RRP) model to identify CSAs in grassland-dominated catchments based on readily available soil and topographic data. After simultaneous calibration on runoff data from four small hilly catchments on the Swiss Plateau, the model was validated on a different catchment in the same region without further calibration. The RRP model adequately simulated the discharge and dissolved reactive P (DRP) export from the validation catchment. Sensitivity analysis showed that the model predictions were robust with respect to the classification of soils into "poorly drained" and "well drained", based on the available soil map. Comparing spatial hydrological model predictions with field data from the validation catchment provided further evidence that the assumptions underlying the model are valid and that the model adequately accounts for the dominant P export processes in the target region. Thus, the parsimonious RRP model is a valuable tool that can be used to determine CSAs. Despite the considerable predictive uncertainty regarding the spatial extent of CSAs, the RRP model can provide guidance for the implementation of mitigation measures. The model helps to identify those parts of a catchment where high DRP losses are expected or can be excluded with high confidence. Legacy P was predicted to be the dominant source of DRP losses and thus, in combination with hydrologically active areas, to pose a high risk to water quality.

  3. The dynamic effect of exchange-rate volatility on Turkish exports: Parsimonious error-correction model approach

    Directory of Open Access Journals (Sweden)

    Demirhan Erdal

    2015-01-01

    This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of a GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of the variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
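
    The volatility proxy used above, the conditional variance of a GARCH(1,1) model, can be obtained in a few lines with the `arch` package; the synthetic return series and settings below are assumptions for illustration, not the paper's Turkish export data.

```python
# GARCH(1,1) conditional volatility as an exchange-rate volatility proxy.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = 100 * 0.01 * rng.standard_t(df=8, size=500)   # toy monthly returns (%)

res = arch_model(returns, mean="Constant", vol="Garch", p=1, q=1).fit(disp="off")
volatility_proxy = res.conditional_volatility            # series used as a regressor
print(res.params.round(3))                               # omega, alpha[1], beta[1], mu
```

    The fitted series `volatility_proxy` would then enter the error-correction model as an explanatory variable alongside the other export determinants.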

  4. Regularized Estimation of Structural Instability in Factor Models: The US Macroeconomy and the Great Moderation

    DEFF Research Database (Denmark)

    Callot, Laurent; Kristensen, Johannes Tang

    This paper shows that the parsimoniously time-varying methodology of Callot and Kristensen (2015) can be applied to factor models. We apply this method to study macroeconomic instability in the US from 1959:1 to 2006:4 with a particular focus on the Great Moderation. Models with parsimoniously time...... that the parameters of both models exhibit a higher degree of instability in the period from 1970:1 to 1984:4 relative to the following 15 years. In our setting the Great Moderation appears as the gradual ending of a period of high structural instability that took place in the 1970s and early 1980s.

  5. Stochastic rainfall modeling in West Africa: Parsimonious approaches for domestic rainwater harvesting assessment

    Science.gov (United States)

    Cowden, Joshua R.; Watkins, David W., Jr.; Mihelcic, James R.

    2008-10-01

    Several parsimonious stochastic rainfall models are developed and compared for application to domestic rainwater harvesting (DRWH) assessment in West Africa. Worldwide, improved water access rates are lowest for Sub-Saharan Africa, including the West African region, and these low rates have important implications for the health and economy of the region. Domestic rainwater harvesting is proposed as a potential mechanism for water supply enhancement, especially for poor urban households in the region, which is essential for development planning and poverty alleviation initiatives. The stochastic rainfall models examined are Markov models and LARS-WG, selected due to their availability and ease of use for water planners in the developing world. A first-order Markov occurrence model with a mixed exponential amount model is selected as the best option among the unconditioned Markov models. However, there is no clear advantage in selecting Markov models over the LARS-WG model for DRWH in West Africa, with each model having distinct strengths and weaknesses. A multi-model approach is used in assessing DRWH in the region to illustrate the variability associated with the rainfall models. It is clear DRWH can be successfully used as a water enhancement mechanism in West Africa for certain times of the year. A 200 L drum storage capacity could potentially optimize these simple, small roof area systems for many locations in the region.
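
    The selected model class, a first-order Markov occurrence chain with mixed-exponential amounts, is simple enough to sketch directly; all parameter values below are invented placeholders, not the fitted West African values.

```python
# First-order Markov rainfall occurrence with mixed-exponential amounts.
import numpy as np

rng = np.random.default_rng(42)
P01, P11 = 0.25, 0.65            # P(wet | dry), P(wet | wet): occurrence chain
w, mu1, mu2 = 0.7, 3.0, 15.0     # mixed exponential: weight and component means (mm)

def simulate(ndays):
    wet, rain = False, []
    for _ in range(ndays):
        wet = rng.random() < (P11 if wet else P01)   # two-state Markov occurrence
        if wet:
            mu = mu1 if rng.random() < w else mu2    # pick mixture component
            rain.append(rng.exponential(mu))         # daily rainfall depth (mm)
        else:
            rain.append(0.0)
    return np.array(rain)

daily = simulate(365)
print(f"wet days: {(daily > 0).sum()}, annual total: {daily.sum():.0f} mm")
```

    Feeding such synthetic daily series through a roof-and-storage water balance is what allows statements like the 200 L drum sizing result quoted above.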

  6. Direct maximum parsimony phylogeny reconstruction from genotype data.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-12-05

    Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower-bound on the number of mutations that the genetic region has undergone.

  7. Direct maximum parsimony phylogeny reconstruction from genotype data

    Directory of Open Access Journals (Sweden)

    Ravi R

    2007-12-01

    Full Text Available Abstract. Background: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Hence, phylogenetic applications for autosomal data must rely on other methods that first computationally infer haplotypes from genotypes. Results: In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion: Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  8. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst-case bounds of three approaches to MP: the approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  9. Failed refutations: further comments on parsimony and likelihood methods and their relationship to Popper's degree of corroboration.

    Science.gov (United States)

    de Queiroz, Kevin; Poe, Steven

    2003-06-01

    Kluge's (2001, Syst. Biol. 50:322-330) continued arguments that phylogenetic methods based on the statistical principle of likelihood are incompatible with the philosophy of science described by Karl Popper are based on false premises related to Kluge's misrepresentations of Popper's philosophy. Contrary to Kluge's conjectures, likelihood methods are not inherently verificationist; they do not treat every instance of a hypothesis as confirmation of that hypothesis. The historical nature of phylogeny does not preclude phylogenetic hypotheses from being evaluated using the probability of evidence. The low absolute probabilities of hypotheses are irrelevant to the correct interpretation of Popper's concept termed degree of corroboration, which is defined entirely in terms of relative probabilities. Popper did not advocate minimizing background knowledge; in any case, the background knowledge of both parsimony and likelihood methods consists of the general assumption of descent with modification and additional assumptions that are deterministic, concerning which tree is considered most highly corroborated. Although parsimony methods do not assume (in the sense of entailing) that homoplasy is rare, they do assume (in the sense of requiring to obtain a correct phylogenetic inference) certain things about patterns of homoplasy. Both parsimony and likelihood methods assume (in the sense of implying by the manner in which they operate) various things about evolutionary processes, although violation of those assumptions does not always cause the methods to yield incorrect phylogenetic inferences. Test severity is increased by sampling additional relevant characters rather than by character reanalysis, although either interpretation is compatible with the use of phylogenetic likelihood methods. Neither parsimony nor likelihood methods assess test severity (critical evidence) when used to identify a most highly corroborated tree(s) based on a single method or model and a

  10. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: ’Are we actually dealing with a convolutive mixture?’. We try to answer this question for EEG data.

  11. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap, we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*; but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot.
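
    The resampling that MPBoot makes cheap is the standard nonparametric bootstrap over alignment columns. A hedged sketch follows; the tree search itself is stubbed out, since the record does not describe MPBoot's internals.

      import numpy as np

      rng = np.random.default_rng(0)

      # Draw alignment columns with replacement; each pseudo-alignment would be
      # fed to a maximum parsimony tree search, and a branch's support is the
      # fraction of replicate trees containing that bipartition.
      alignment = np.array([list("ACGTACGT"),
                            list("ACGTACGA"),
                            list("ACGAACGA"),
                            list("ACTAACTA")])      # rows = taxa, columns = sites
      n_sites = alignment.shape[1]

      def mp_tree_search(aln):
          """Stand-in for a real maximum parsimony search over `aln`."""
          return frozenset()                        # would return tree bipartitions

      support_counts = {}
      for _ in range(100):
          cols = rng.integers(0, n_sites, size=n_sites)
          for split in mp_tree_search(alignment[:, cols]):
              support_counts[split] = support_counts.get(split, 0) + 1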

  12. FPGA Hardware Acceleration of a Phylogenetic Tree Reconstruction with Maximum Parsimony Algorithm

    OpenAIRE

    BLOCK, Henry; MARUYAMA, Tsutomu

    2017-01-01

    In this paper, we present an FPGA hardware implementation for a phylogenetic tree reconstruction with a maximum parsimony algorithm. We base our approach on a particular stochastic local search algorithm that uses the Progressive Neighborhood and the Indirect Calculation of Tree Lengths method. This method is widely used for the acceleration of the phylogenetic tree reconstruction algorithm in software. In our implementation, we define a tree structure and accelerate the search by parallel an...

  13. Seeing the elephant: Parsimony, functionalism, and the emergent design of contempt and other sentiments.

    Science.gov (United States)

    Gervais, Matthew M; Fessler, Daniel M T

    2017-01-01

    The target article argues that contempt is a sentiment, and that sentiments are the deep structure of social affect. The 26 commentaries meet these claims with a range of exciting extensions and applications, as well as critiques. Most significantly, we reply that construction and emergence are necessary for, not incompatible with, evolved design, while parsimony requires explanatory adequacy and predictive accuracy, not mere simplicity.

  14. Fast Construction of Near Parsimonious Hybridization Networks for Multiple Phylogenetic Trees.

    Science.gov (United States)

    Mirzaei, Sajad; Wu, Yufeng

    2016-01-01

    Hybridization networks represent plausible evolutionary histories of species that are affected by reticulate evolutionary processes. An established computational problem on hybridization networks is constructing the most parsimonious hybridization network such that each of the given phylogenetic trees (called gene trees) is "displayed" in the network. There have been several previous approaches, including an exact method and several heuristics, for this NP-hard problem. However, the exact method is only applicable to a limited range of data, and heuristic methods can be less accurate and sometimes slow. In this paper, we develop a new algorithm for constructing near parsimonious networks for multiple binary gene trees. This method is more efficient for large numbers of gene trees than previous heuristics. This new method also produces more parsimonious results than a previous method on many simulated datasets as well as on a real biological dataset. We also show that our method produces topologically more accurate networks for many datasets.

  15. Urban micro-scale flood risk estimation with parsimonious hydraulic modelling and census data

    Directory of Open Access Journals (Sweden)

    C. Arrighi

    2013-05-01

    Full Text Available The adoption of 2007/60/EC Directive requires European countries to implement flood hazard and flood risk maps by the end of 2013. Flood risk is the product of flood hazard, vulnerability and exposure, all three to be estimated with comparable level of accuracy. The route to flood risk assessment is consequently much more than hydraulic modelling of inundation, that is, hazard mapping. While hazard maps have already been implemented in many countries, quantitative damage and risk maps are still at a preliminary level. A parsimonious quasi-2-D hydraulic model is here adopted, having many advantages in terms of easy set-up. It is here evaluated as being accurate in flood depth estimation in urban areas with a high-resolution and up-to-date Digital Surface Model (DSM. The accuracy, estimated by comparison with marble-plate records of a historic flood in the city of Florence, is characterized in the downtown's most flooded area by a bias of only a few centimetres and a determination coefficient of 0.73. The average risk is found to be about 14 € m^-2 yr^-1, corresponding to about 8.3% of residents' income. The spatial distribution of estimated risk highlights a complex interaction between the flood pattern and the building characteristics. As a final example application, the estimated risk values have been used to compare different retrofitting measures. Proceeding through the risk estimation steps, a new micro-scale potential damage assessment method is proposed. This is based on the georeferenced census system as the optimal compromise between spatial detail and open availability of socio-economic data. The results of flood risk assessment at the census section scale resolve most of the risk spatial variability, and they can be easily aggregated to whatever upper scale is needed given that they are geographically defined as contiguous polygons. Damage is calculated through stage–damage curves, starting from census data on building type and
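
    For context on the risk figure above (a hedged sketch, not the paper's computation): expected annual damage is typically the integral of event damage over annual exceedance probability. All stage-damage numbers below are invented.

      import numpy as np

      # Expected annual damage (EAD) = integral of damage over the annual
      # exceedance probability p = 1/T.  Illustrative numbers only.
      return_periods = np.array([500.0, 200.0, 100.0, 30.0])   # years
      damages = np.array([180.0, 130.0, 90.0, 40.0])           # EUR per m^2

      p = 1.0 / return_periods                   # ascending probabilities
      ead = np.sum(0.5 * (damages[1:] + damages[:-1]) * np.diff(p))  # trapezoid
      print(f"EAD ~= {ead:.2f} EUR m^-2 yr^-1")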

  16. Modeling the isotopic evolution of snowpack and snowmelt: Testing a spatially distributed parsimonious approach.

    Science.gov (United States)

    Ala-Aho, Pertti; Tetzlaff, Doerthe; McNamara, James P; Laudon, Hjalmar; Kormos, Patrick; Soulsby, Chris

    2017-07-01

    Use of stable water isotopes has become increasingly popular in quantifying water flow paths and travel times in hydrological systems using tracer-aided modeling. In snow-influenced catchments, snowmelt produces a traceable isotopic signal, which differs from original snowfall isotopic composition because of isotopic fractionation in the snowpack. These fractionation processes in snow are relatively well understood, but representing their spatiotemporal variability in tracer-aided studies remains a challenge. We present a novel, parsimonious modeling method to account for the snowpack isotope fractionation and estimate isotope ratios in snowmelt water in a fully spatially distributed manner. Our model introduces two calibration parameters that alone account for the isotopic fractionation caused by sublimation from interception and ground snow storage, and snowmelt fractionation progressively enriching the snowmelt runoff. The isotope routines are linked to a generic process-based snow interception-accumulation-melt model facilitating simulation of spatially distributed snowmelt runoff. We use a synthetic modeling experiment to demonstrate the functionality of the model algorithms in different landscape locations and under different canopy characteristics. We also provide a proof-of-concept model test and successfully reproduce isotopic ratios in snowmelt runoff sampled with snowmelt lysimeters in two long-term experimental catchments with contrasting winter conditions. To our knowledge, the method is the first such tool to allow estimation of the spatially distributed nature of isotopic fractionation in snowpacks and the resulting isotope ratios in snowmelt runoff. The method can thus provide a useful tool for tracer-aided modeling to better understand the integrated nature of flow, mixing, and transport processes in snow-influenced catchments.
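
    The record does not give the model equations, but the enrichment behavior it describes can be illustrated with classical Rayleigh fractionation, in which the remaining snow becomes isotopically heavier as mass is removed. The fractionation factor below is assumed, not calibrated.

      import numpy as np

      # Rayleigh sketch: as a fraction (1 - f) of the snowpack is removed by
      # sublimation or melt, the remaining snow's ratio evolves as
      # R = R0 * f**(alpha - 1).  Illustrative alpha and delta values.
      alpha = 0.990                    # removed phase depleted in heavy isotope
      f = np.linspace(1.0, 0.2, 9)     # fraction of snow storage remaining
      delta0 = -120.0                  # initial d2H of snowfall (permil)

      R0 = delta0 / 1000.0 + 1.0       # permil -> ratio relative to reference
      delta = (R0 * f ** (alpha - 1.0) - 1.0) * 1000.0
      print(np.round(delta, 1))        # remaining snow gets progressively enriched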

  17. Assessing internet addiction using the parsimonious internet addiction components model—A preliminary study.

    OpenAIRE

    Kuss, D.J.; Shorter, G.W.; Rooij, A.J. van; Griffiths, M.D.; Schoenmakers, T.M.

    2014-01-01

    Internet usage has grown exponentially over the last decade. Research indicates that excessive Internet use can lead to symptoms associated with addiction. To date, assessment of potential Internet addiction has varied regarding populations studied and instruments used, making reliable prevalence estimations difficult. To overcome these problems, a preliminary study was conducted testing a parsimonious Internet addiction components model based on Griffiths’ addiction components (Journal ...

  18. Philosophy and phylogenetic inference: a comparison of likelihood and parsimony methods in the context of Karl Popper's writings on corroboration.

    Science.gov (United States)

    de Queiroz, K; Poe, S

    2001-06-01

    Advocates of cladistic parsimony methods have invoked the philosophy of Karl Popper in an attempt to argue for the superiority of those methods over phylogenetic methods based on Ronald Fisher's statistical principle of likelihood. We argue that the concept of likelihood in general, and its application to problems of phylogenetic inference in particular, are highly compatible with Popper's philosophy. Examination of Popper's writings reveals that his concept of corroboration is, in fact, based on likelihood. Moreover, because probabilistic assumptions are necessary for calculating the probabilities that define Popper's corroboration, likelihood methods of phylogenetic inference--with their explicit probabilistic basis--are easily reconciled with his concept. In contrast, cladistic parsimony methods, at least as described by certain advocates of those methods, are less easily reconciled with Popper's concept of corroboration. If those methods are interpreted as lacking probabilistic assumptions, then they are incompatible with corroboration. Conversely, if parsimony methods are to be considered compatible with corroboration, then they must be interpreted as carrying implicit probabilistic assumptions. Thus, the non-probabilistic interpretation of cladistic parsimony favored by some advocates of those methods is contradicted by an attempt by the same authors to justify parsimony methods in terms of Popper's concept of corroboration. In addition to being compatible with Popperian corroboration, the likelihood approach to phylogenetic inference permits researchers to test the assumptions of their analytical methods (models) in a way that is consistent with Popper's ideas about the provisional nature of background knowledge.

  19. Principle of Parsimony, Fake Science, and Scales

    Science.gov (United States)

    Yeh, T. C. J.; Wan, L.; Wang, X. S.

    2017-12-01

    Considering the difficulty of predicting the exact motions of water molecules, and the scale of our interests (the bulk behavior of many molecules), Fick's law (the diffusion concept) was created to predict solute diffusion in space and time. G.I. Taylor (1921) demonstrated that the random motion of molecules reaches the Fickian regime in less than a second if our sampling scale is large enough to reach the ergodic condition. Fick's law is widely accepted for describing molecular diffusion as such. This fits the definition of the parsimony principle at the scale of our concern. Similarly, the advection-dispersion or convection-dispersion equation (ADE or CDE) has been found quite satisfactory for analysis of concentration breakthroughs of solute transport in uniformly packed soil columns. This is attributed to the fact that the solute is often released over the entire cross-section of the column, which samples many pore-scale heterogeneities and meets the ergodicity assumption. Further, the uniformly packed column contains a large number of stationary pore-size heterogeneities. The solute thus reaches the Fickian regime after traveling a short distance along the column. Moreover, breakthrough curves are concentrations integrated over the column cross-section (the scale of our interest), and they meet the ergodicity assumption embedded in the ADE and CDE. To the contrary, scales of heterogeneity in most groundwater pollution problems evolve as contaminants travel. They are much larger than the scale of our observations and our interests, so that the ergodic and Fickian conditions are difficult to meet. Upscaling Fick's law for solute dispersion, and deriving universal rules of dispersion for field- or basin-scale pollution migration, is merely a misuse of the parsimony principle and leads to a fake science (i.e., the development of theories for predicting processes that cannot be observed). The appropriate principle of parsimony for these situations dictates mapping of large
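
    Taylor's point can be illustrated numerically: the displacement variance of an ensemble of independent random walkers grows linearly in time, which is the Fickian regime, and the slope defines an effective diffusion coefficient. A small sketch with invented step statistics:

      import numpy as np

      rng = np.random.default_rng(1)

      # Displacement variance grows as var = 2*D*t for an ensemble of walkers.
      n_walkers, n_steps, dt, step_std = 10_000, 400, 1.0, 0.5

      steps = rng.normal(0.0, step_std, size=(n_walkers, n_steps))
      positions = steps.cumsum(axis=1)

      t = dt * np.arange(1, n_steps + 1)
      variance = positions.var(axis=0)
      D_est = variance[-1] / (2.0 * t[-1])
      print(f"estimated D = {D_est:.4f}, theory = {step_std**2 / (2 * dt):.4f}")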

  20. Parsimonious wave-equation travel-time inversion for refraction waves

    KAUST Repository

    Fu, Lei; Hanafy, Sherif M.; Schuster, Gerard T.

    2017-01-01

    We present a parsimonious wave-equation travel-time inversion technique for refraction waves. A dense virtual refraction dataset can be generated from just two reciprocal shot gathers for the sources at the endpoints of the survey line, with N

  1. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...

  2. Parsimonious Ways to Use Vision for Navigation

    Directory of Open Access Journals (Sweden)

    Paul Graham

    2012-05-01

    Full Text Available The use of visual information for navigation appears to be a universal strategy for sighted animals, amongst which one particular group of expert navigators is the ants. The broad interest in studies of ant navigation is due in part to their small brains: biomimetic engineers expect to be impressed by elegant control solutions, and psychologists might hope for a description of the minimal cognitive requirements for complex spatial behaviours. In this spirit, we have been taking an interdisciplinary approach to the visually guided navigation of ants in their natural habitat. Behavioural experiments and natural image statistics show that visual navigation need not depend on the remembering or recognition of objects. Further modelling work suggests how simple behavioural routines might enable navigation using familiarity detection rather than explicit recall, and we present a proof of concept that visual navigation using familiarity can be achieved without specifying when or what to learn, nor separating routes into sequences of waypoints. We suggest that our current model represents the only detailed and complete model of insect route guidance to date. What's more, we believe the suggested mechanisms represent useful parsimonious hypotheses for visually guided navigation in larger-brained animals.
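
    A toy version of familiarity-based heading selection (a sketch of the general idea, not the authors' model): store views experienced along a training route, then follow whichever candidate heading yields the most familiar current view. Random vectors stand in for images.

      import numpy as np

      rng = np.random.default_rng(2)

      # Views stored along a training route (50 views, 64 "pixels" each).
      stored_views = rng.normal(size=(50, 64))

      def familiarity(view):
          # Higher is more familiar: negative distance to the nearest stored view.
          return -np.min(np.linalg.norm(stored_views - view, axis=1))

      def best_heading(views_by_heading):
          scores = [familiarity(v) for v in views_by_heading]
          return int(np.argmax(scores))

      candidate_views = rng.normal(size=(36, 64))   # one view per 10 degrees
      print("chosen heading index:", best_heading(candidate_views))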

  3. Parsimonious Structural Equation Models for Repeated Measures Data, with Application to the Study of Consumer Preferences

    Science.gov (United States)

    Elrod, Terry; Haubl, Gerald; Tipps, Steven W.

    2012-01-01

    Recent research reflects a growing awareness of the value of using structural equation models to analyze repeated measures data. However, such data, particularly in the presence of covariates, often lead to models that either fit the data poorly, are exceedingly general and hard to interpret, or are specified in a manner that is highly data…

  4. Parsimonious wave-equation travel-time inversion for refraction waves

    KAUST Repository

    Fu, Lei

    2017-02-14

    We present a parsimonious wave-equation travel-time inversion technique for refraction waves. A dense virtual refraction dataset can be generated from just two reciprocal shot gathers for the sources at the endpoints of the survey line, with N geophones evenly deployed along the line. These two reciprocal shots contain approximately 2N refraction travel times, which can be spawned into O(N^2) refraction travel times by an interferometric transformation. Then, these virtual refraction travel times are used with a source wavelet to create N virtual refraction shot gathers, which are the input data for wave-equation travel-time inversion. Numerical results show that the parsimonious wave-equation travel-time tomogram has about the same accuracy as the tomogram computed by standard wave-equation travel-time inversion. The most significant benefit is that a reciprocal survey is far less time consuming than the standard refraction survey where a source is excited at each geophone location.
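
    The record does not spell out the interferometric transformation; under standard head-wave/delay-time assumptions (an assumption here, not a statement of the paper's method) it takes the form t_virt(j -> i) = tA[i] + tB[j] - t_AB. The sketch below spawns O(N^2) virtual travel times from two reciprocal shots and checks them on a synthetic two-layer model.

      import numpy as np

      # Synthetic two-layer head-wave picks; all numbers are illustrative.
      N, L = 101, 1000.0                 # geophones, line length (m)
      v2, tau = 2500.0, 0.040            # refractor velocity (m/s), delay time (s)
      x = np.linspace(0.0, L, N)

      tA = x / v2 + 2 * tau              # shot at x = 0 (equal delay times assumed)
      tB = (L - x) / v2 + 2 * tau        # shot at x = L
      t_AB = L / v2 + 2 * tau            # reciprocal time between the end shots

      i, j = np.tril_indices(N, k=-1)    # pairs with x[j] < x[i]
      t_virt = tA[i] + tB[j] - t_AB      # virtual head-wave times j -> i
      # check against the direct head-wave formula |x_i - x_j|/v2 + 2*tau:
      assert np.allclose(t_virt, (x[i] - x[j]) / v2 + 2 * tau)
      print(t_virt.size, "virtual travel times from 2 reciprocal shots")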

  5. A Practical pedestrian approach to parsimonious regression with inaccurate inputs

    Directory of Open Access Journals (Sweden)

    Seppo Karrila

    2014-04-01

    Full Text Available A measurement result often dictates an interval containing the correct value. Interval data are also created by roundoff, truncation, and binning. We focus on such common interval uncertainty in data. Inaccuracy in model inputs is typically ignored when fitting models. We provide a practical approach for regression with inaccurate data: the mathematics is easy, and the linear programming formulations are simple to use even in a spreadsheet. This self-contained elementary presentation introduces interval linear systems and requires only basic knowledge of algebra. Feature selection is automatic, but can be controlled to find only the few most relevant inputs, and joint feature selection is enabled for multiple modeled outputs. With more features than cases, a novel connection to compressed sensing emerges: robustness against interval errors-in-variables implies model parsimony, and the input inaccuracies determine the regularization term. A small numerical example highlights counterintuitive results and a dramatic difference from total least squares.
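
    One plausible linear-programming formulation of interval regression (an assumption for illustration, not necessarily the paper's exact one): require the fit to enter each response interval up to a slack, and add an L1 weight penalty that performs the automatic feature selection. scipy's linprog solves it directly.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(3)

      n, p, lam = 40, 5, 0.1                         # lam: assumed L1 weight
      X = rng.normal(size=(n, p))
      y_true = X @ np.array([2.0, 0.0, -1.0, 0.0, 0.0]) + 0.5
      y_lo, y_hi = y_true - 0.3, y_true + 0.3        # interval (binned) responses

      # variables: [w+ (p), w- (p), b, s (n)]; minimize lam*|w|_1 + sum(s)
      c = np.concatenate([lam * np.ones(2 * p), [0.0], np.ones(n)])
      A1 = np.hstack([X, -X, np.ones((n, 1)), -np.eye(n)])   #  w.x + b - s <= y_hi
      A2 = np.hstack([-X, X, -np.ones((n, 1)), -np.eye(n)])  # -w.x - b - s <= -y_lo
      res = linprog(c, A_ub=np.vstack([A1, A2]),
                    b_ub=np.concatenate([y_hi, -y_lo]),
                    bounds=[(0, None)] * (2 * p) + [(None, None)] + [(0, None)] * n)
      w = res.x[:p] - res.x[p:2 * p]
      print("recovered weights:", np.round(w, 2))    # irrelevant features near 0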

  6. The transboundary non-renewable Nubian Aquifer System of Chad, Egypt, Libya and Sudan: classical groundwater questions and parsimonious hydrogeologic analysis and modeling

    Science.gov (United States)

    Voss, Clifford I.; Soliman, Safaa M.

    2014-03-01

    Parsimonious groundwater modeling provides insight into hydrogeologic functioning of the Nubian Aquifer System (NAS), the world's largest non-renewable groundwater system (belonging to Chad, Egypt, Libya, and Sudan). Classical groundwater-resource issues exist (magnitude and lateral extent of drawdown near pumping centers) with joint international management questions regarding transboundary drawdown. Much of NAS is thick, containing a large volume of high-quality groundwater, but receives insignificant recharge, so water-resource availability is time-limited. Informative aquifer data are lacking regarding large-scale response, providing only local-scale information near pumps. Proxy data provide primary underpinning for understanding regional response: Holocene water-table decline from the previous pluvial period, after thousands of years, results in current oasis/sabkha locations where the water table still intersects the ground. Depletion is found to be controlled by two regional parameters, hydraulic diffusivity and vertical anisotropy of permeability. Secondary data that provide insight are drawdowns near pumps and isotope-groundwater ages (million-year-old groundwaters in Egypt). The resultant strong simply structured three-dimensional model representation captures the essence of NAS regional groundwater-flow behavior. Model forecasts inform resource management that transboundary drawdown will likely be minimal—a nonissue—whereas drawdown within pumping centers may become excessive, requiring alternative extraction schemes; correspondingly, significant water-table drawdown may occur in pumping centers co-located with oases, causing oasis loss and environmental impacts.

  7. Live phylogeny with polytomies: Finding the most compact parsimonious trees.

    Science.gov (United States)

    Papamichail, D; Huang, A; Kennedy, E; Ott, J-L; Miller, A; Papamichail, G

    2017-08-01

    Construction of phylogenetic trees has traditionally focused on binary trees where all species appear on leaves, a problem for which numerous efficient solutions have been developed. Certain application domains though, such as viral evolution and transmission, paleontology, linguistics, and phylogenetic stemmatics, often require phylogeny inference that involves placing input species on ancestral tree nodes (live phylogeny), and polytomies. These requirements, despite their prevalence, lead to computationally harder algorithmic solutions and have been sparsely examined in the literature to date. In this article we prove some unique properties of most parsimonious live phylogenetic trees with polytomies, and their mapping to traditional binary phylogenetic trees. We show that our problem reduces to finding the most compact parsimonious tree for n species, and describe a novel efficient algorithm to find such trees without resorting to exhaustive enumeration of all possible tree topologies.

  8. Are our dynamic water quality models too complex? A comparison of a new parsimonious phosphorus model, SimplyP, and INCA-P

    Science.gov (United States)

    Jackson-Blake, L. A.; Sample, J. E.; Wade, A. J.; Helliwell, R. C.; Skeffington, R. A.

    2017-07-01

    Catchment-scale water quality models are increasingly popular tools for exploring the potential effects of land management, land use change and climate change on water quality. However, the dynamic, catchment-scale nutrient models in common usage are complex, with many uncertain parameters requiring calibration, limiting their usability and robustness. A key question is whether this complexity is justified. To explore this, we developed a parsimonious phosphorus model, SimplyP, incorporating a rainfall-runoff model and a biogeochemical model able to simulate daily streamflow, suspended sediment, and particulate and dissolved phosphorus dynamics. The model's complexity was compared to one popular nutrient model, INCA-P, and the performance of the two models was compared in a small rural catchment in northeast Scotland. For three land use classes, less than six SimplyP parameters must be determined through calibration, the rest may be based on measurements, while INCA-P has around 40 unmeasurable parameters. Despite substantially simpler process-representation, SimplyP performed comparably to INCA-P in both calibration and validation and produced similar long-term projections in response to changes in land management. Results support the hypothesis that INCA-P is overly complex for the study catchment. We hope our findings will help prompt wider model comparison exercises, as well as debate among the water quality modeling community as to whether today's models are fit for purpose. Simpler models such as SimplyP have the potential to be useful management and research tools, building blocks for future model development (prototype code is freely available), or benchmarks against which more complex models could be evaluated.
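
    As a flavor of what "parsimonious" means here (a generic sketch, not SimplyP's actual equations), a rainfall-runoff component can be as small as a single linear reservoir with one time-constant parameter:

      import numpy as np

      # Single linear reservoir: S' = P - E - Q with outflow Q = S / T.
      # Parameter values are illustrative, not calibrated.
      T, dt = 5.0, 1.0                         # time constant and step (days)

      def linear_reservoir(precip, pet, T, S0=10.0):
          S, Q = S0, []
          for P, E in zip(precip, pet):
              S = max(S + (P - E) * dt, 0.0)   # add effective rainfall
              q = S / T                        # linear storage-outflow relation
              S -= q * dt
              Q.append(q)
          return np.array(Q)

      rng = np.random.default_rng(4)
      precip = rng.exponential(2.0, size=60)   # mm/day
      pet = np.full(60, 1.0)                   # mm/day
      print(np.round(linear_reservoir(precip, pet, T)[:10], 2))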

  9. A Basic Bivariate Structure of Personality Attributes Evident Across Nine Languages.

    Science.gov (United States)

    Saucier, Gerard; Thalmayer, Amber Gayle; Payne, Doris L; Carlson, Robert; Sanogo, Lamine; Ole-Kotikash, Leonard; Church, A Timothy; Katigbak, Marcia S; Somer, Oya; Szarota, Piotr; Szirmák, Zsofia; Zhou, Xinyue

    2014-02-01

    Here, two studies seek to characterize a parsimonious common-denominator personality structure with optimal cross-cultural replicability. Personality differences are observed in all human populations and cultures, but lexicons for personality attributes contain so many distinctions that parsimony is lacking. Models stipulating the most important attributes have been formulated by experts or by empirical studies drawing on experience in a very limited range of cultures. Factor analyses of personality lexicons of nine languages of diverse provenance (Chinese, Korean, Filipino, Turkish, Greek, Polish, Hungarian, Maasai, and Senoufo) were examined, and their common structure was compared to that of several prominent models in psychology. A parsimonious bivariate model showed evidence of substantial convergence and ubiquity across cultures. Analyses involving key markers of these dimensions in English indicate that they are broad dimensions involving the overlapping content of the interpersonal circumplex, models of communion and agency, and morality/warmth and competence. These "Big Two" dimensions, Social Self-Regulation and Dynamism, provide a common-denominator model involving the two most crucial axes of personality variation, ubiquitous across cultures. The Big Two might serve as an umbrella model linking diverse theoretical models and associated research literatures.

  10. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  11. Flood modelling with a distributed event-based parsimonious rainfall-runoff model: case of the karstic Lez river catchment

    Directory of Open Access Journals (Sweden)

    M. Coustau

    2012-04-01

    Full Text Available Rainfall-runoff models are crucial tools for the statistical prediction of flash floods and real-time forecasting. This paper focuses on a karstic basin in the South of France and proposes a distributed parsimonious event-based rainfall-runoff model, coherent with the poor knowledge of both evaporative and underground fluxes. The model combines an SCS runoff model and a Lag and Route routing model for each cell of a regular grid mesh. The model is evaluated not only on its ability to satisfactorily simulate floods but also on the strength of the relationships between its initial condition and various predictors of the initial wetness state of the basin, such as the base flow, the Hu2 index from the Meteo-France SIM model and the piezometric levels of the aquifer. The advantage of using meteorological radar rainfall in flood modelling is also assessed. Model calibration proved to be satisfactory by using an hourly time step with Nash criterion values ranging between 0.66 and 0.94 for eighteen of the twenty-one selected events. The radar rainfall inputs significantly improved the simulations or the assessment of the initial condition of the model for 5 events at the beginning of autumn, mostly in September–October (mean improvement of Nash is 0.09; correction in the initial condition ranges from −205 to 124 mm, but were less efficient for the events at the end of autumn. In this period, the weak vertical extension of the precipitation system and the low altitude of the 0 °C isotherm could affect the efficiency of radar measurements due to the distance between the basin and the radar (~60 km. The model initial condition S is correlated with the three tested predictors (R^2 > 0.6. The interpretation of the model suggests that groundwater does not affect the first peaks of the flood, but can strongly impact subsequent peaks in the case of a multi-storm event. Because this kind of model is based on a limited
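
    The SCS runoff component named above has a standard curve-number closed form, Q = (P - Ia)^2 / (P - Ia + S) with Ia = 0.2 S and S = 25400/CN - 254 (in mm). A sketch with an illustrative curve number:

      import numpy as np

      def scs_runoff(P_mm, CN):
          S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
          Ia = 0.2 * S                      # initial abstraction (mm)
          P = np.asarray(P_mm, dtype=float)
          return np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)

      print(np.round(scs_runoff([10, 40, 80, 160], CN=75), 1))  # mm of runoff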

  12. Bayesian, maximum parsimony and UPGMA models for inferring the phylogenies of antelopes using mitochondrial markers.

    Science.gov (United States)

    Khan, Haseeb A; Arif, Ibrahim A; Bahkali, Ali H; Al Farhan, Ahmad H; Al Homaidan, Ali A

    2008-10-06

    This investigation aimed to compare the inference of antelope phylogenies resulting from the 16S rRNA, cytochrome-b (cyt-b) and d-loop segments of mitochondrial DNA using three different computational models including Bayesian (BA), maximum parsimony (MP) and unweighted pair group method with arithmetic mean (UPGMA). The respective nucleotide sequences of three Oryx species (Oryx leucoryx, Oryx dammah and Oryx gazella) and an out-group (Addax nasomaculatus) were aligned and subjected to BA, MP and UPGMA models for comparing the topologies of respective phylogenetic trees. The 16S rRNA region possessed the highest frequency of conserved sequences (97.65%) followed by cyt-b (94.22%) and d-loop (87.29%). There were few transitions (2.35%) and no transversions in 16S rRNA as compared to cyt-b (5.61% transitions and 0.17% transversions) and d-loop (11.57% transitions and 1.14% transversions) while comparing the four taxa. All the three mitochondrial segments clearly differentiated the genus Addax from Oryx using the BA or UPGMA models. The topologies of all the gamma-corrected Bayesian trees were identical irrespective of the marker type. The UPGMA trees resulting from 16S rRNA and d-loop sequences were also identical (Oryx dammah grouped with Oryx leucoryx) to Bayesian trees except that the UPGMA tree based on cyt-b showed a slightly different phylogeny (Oryx dammah grouped with Oryx gazella) with a low bootstrap support. However, the MP model failed to differentiate the genus Addax from Oryx. These findings demonstrate the efficiency and robustness of BA and UPGMA methods for phylogenetic analysis of antelopes using mitochondrial markers.
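
    Of the three methods compared, UPGMA is simple enough to sketch in full: repeatedly merge the closest pair of clusters and average the remaining distances weighted by cluster size. The toy distance matrix below is made up, not derived from the study's sequences.

      import numpy as np

      def upgma(D, names):
          D = np.array(D, dtype=float)
          clusters = [(n, 1) for n in names]            # (label, size)
          while len(clusters) > 1:
              n = len(clusters)
              i, j = min(((a, b) for a in range(n) for b in range(a + 1, n)),
                         key=lambda ab: D[ab[0], ab[1]])
              (la, sa), (lb, sb) = clusters[i], clusters[j]
              merged = (f"({la},{lb})", sa + sb)
              row = [(sa * D[i, k] + sb * D[j, k]) / (sa + sb)
                     for k in range(n) if k not in (i, j)]
              keep = [k for k in range(n) if k not in (i, j)]
              D = np.pad(D[np.ix_(keep, keep)], ((0, 1), (0, 1)))
              D[-1, :-1] = D[:-1, -1] = row
              clusters = [clusters[k] for k in keep] + [merged]
          return clusters[0][0]

      D = [[0, 2, 6, 6], [2, 0, 6, 6], [6, 6, 0, 4], [6, 6, 4, 0]]
      print(upgma(D, ["Ol", "Od", "Og", "An"]))  # ((Ol,Od),(Og,An))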

  13. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    OpenAIRE

    Gregor, Ivan; Steinbr?ck, Lars; McHardy, Alice C.

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we ...

  14. Parsimonious Hydrologic and Nitrate Response Models For Silver Springs, Florida

    Science.gov (United States)

    Klammler, Harald; Yaquian-Luna, Jose Antonio; Jawitz, James W.; Annable, Michael D.; Hatfield, Kirk

    2014-05-01

    Silver Springs, with an approximate discharge of 25 m³/s, is one of Florida's first magnitude springs and among the largest springs worldwide. Its 2500-km² springshed overlies the mostly unconfined Upper Floridan Aquifer. The aquifer is approximately 100 m thick and predominantly consists of porous, fractured and cavernous limestone, which leads to excellent surface drainage properties (no major stream network other than Silver Springs run) and complex groundwater flow patterns through both rock matrix and fast conduits. Over the past few decades, discharge from Silver Springs has been observed to slowly but continuously decline, while nitrate concentrations in the spring water have enormously increased from a background level of 0.05 mg/l to over 1 mg/l. In combination with concurrent increases in algae growth and turbidity, for example, and despite an otherwise relatively stable water quality, this has given rise to concerns about the ecological equilibrium in and near the spring run as well as possible impacts on tourism. The purpose of the present work is to elaborate parsimonious lumped parameter models that may be used by resource managers for evaluating the springshed's hydrologic and nitrate transport responses. Instead of attempting to explicitly consider the complex hydrogeologic features of the aquifer in a typically numerical and / or stochastic approach, we use a transfer function approach wherein input signals (i.e., time series of groundwater recharge and nitrate loading) are transformed into output signals (i.e., time series of spring discharge and spring nitrate concentrations) by some linear and time-invariant law. The dynamic response types and parameters are inferred from comparing input and output time series in frequency domain (e.g., after Fourier transformation). Results are converted into impulse (or step) response functions, which describe at what time and to what magnitude a unitary change in input manifests at the output. For the
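
    The transfer-function approach described above can be sketched directly: for a linear, time-invariant system, the frequency-domain ratio of output to input gives the transfer function, and its inverse FFT the impulse response. The exponential response below is synthetic, and circular convolution is used so the recovery is exact for a clean demo.

      import numpy as np

      rng = np.random.default_rng(5)

      n, tau = 512, 20.0
      t = np.arange(n)
      h_true = np.exp(-t / tau) / tau                      # impulse response

      recharge = rng.exponential(1.0, size=n)              # input signal
      discharge = np.fft.irfft(np.fft.rfft(recharge) * np.fft.rfft(h_true), n)

      H = np.fft.rfft(discharge) / np.fft.rfft(recharge)   # transfer function
      h_est = np.fft.irfft(H, n)                           # back to time domain
      print(np.allclose(h_est, h_true))                    # True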

  15. Time-Lapse Monitoring of Subsurface Fluid Flow using Parsimonious Seismic Interferometry

    KAUST Repository

    Hanafy, Sherif; Li, Jing; Schuster, Gerard T.

    2017-01-01

    of parsimonious seismic interferometry with the time-lapse monitoring idea with field examples, where we were able to record 30 different data sets within a 2-hour period. The recorded data are then processed to generate 30 snapshots that show the spread of water

  16. DupTree: a program for large-scale phylogenetic analyses using gene tree parsimony.

    Science.gov (United States)

    Wehe, André; Bansal, Mukul S; Burleigh, J Gordon; Eulenstein, Oliver

    2008-07-01

    DupTree is a new software program for inferring rooted species trees from collections of gene trees using the gene tree parsimony approach. The program implements a novel algorithm that significantly improves upon the run time of standard search heuristics for gene tree parsimony, and enables the first truly genome-scale phylogenetic analyses. In addition, DupTree allows users to examine alternate rootings and to weight the reconciliation costs for gene trees. DupTree is an open source project written in C++. DupTree for Mac OS X, Windows, and Linux along with a sample dataset and an on-line manual are available at http://genome.cs.iastate.edu/CBL/DupTree

  17. Analyzing Oil Futures with a Dynamic Nelson-Siegel Model

    DEFF Research Database (Denmark)

    Hansen, Niels Strange; Lunde, Asger

    In this paper we are interested in the term structure of futures contracts on oil. The objective is to specify a relatively parsimonious model which explains data well and performs well in real-time out-of-sample forecasting. The dynamic Nelson-Siegel model is normally used to analyze and forecast interest rates of different maturities. The structure of oil futures resembles the structure of interest rates and this motivates the use of this model for our purposes. The data set is vast and the dynamic Nelson-Siegel model allows for a significant dimension reduction by introducing three
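
    For reference, the standard Nelson-Siegel curve underlying the dynamic model has three factor loadings (level, slope, curvature). A sketch with illustrative parameter values, not estimates from the paper:

      import numpy as np

      # y(tau) = b0 + b1*(1 - exp(-lam*tau))/(lam*tau)
      #        + b2*((1 - exp(-lam*tau))/(lam*tau) - exp(-lam*tau))
      def nelson_siegel(tau, b0, b1, b2, lam):
          x = lam * tau
          slope = (1.0 - np.exp(-x)) / x
          return b0 + b1 * slope + b2 * (slope - np.exp(-x))

      maturities = np.array([1, 3, 6, 12, 24, 36]) / 12.0   # years
      print(np.round(nelson_siegel(maturities, 0.05, -0.02, 0.01, 1.5), 4))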

  18. On the Accuracy of Ancestral Sequence Reconstruction for Ultrametric Trees with Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2018-04-01

    We examine a mathematical question concerning the reconstruction accuracy of the Fitch algorithm for reconstructing the ancestral sequence of the most recent common ancestor given a phylogenetic tree and sequence data for all taxa under consideration. In particular, for the symmetric four-state substitution model which is also known as Jukes-Cantor model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that for any ultrametric phylogenetic tree and a symmetric model, the Fitch parsimony method using all terminal taxa is more accurate, or at least as accurate, for ancestral state reconstruction than using any particular terminal taxon or any particular pair of taxa. This conjecture had so far only been answered for two-state data by Fischer and Thatte. Here, we focus on answering the biologically more relevant case with four states, which corresponds to ancestral sequence reconstruction from DNA or RNA data.
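
    The Fitch parsimony method analyzed here admits a compact sketch: a post-order pass takes set intersections of candidate states where possible, and unions (counting one substitution) otherwise. The toy tree and tip states below are made up.

      # Fitch's small-parsimony pass on a binary tree.
      def fitch(node, states):
          """node: tip label or (left, right); returns (state set, cost)."""
          if isinstance(node, str):
              return {states[node]}, 0
          (sl, cl), (sr, cr) = fitch(node[0], states), fitch(node[1], states)
          common = sl & sr
          if common:
              return common, cl + cr           # intersection: no extra change
          return sl | sr, cl + cr + 1          # union: one substitution

      tree = (("t1", "t2"), ("t3", "t4"))      # 4-taxon topology
      states = {"t1": "A", "t2": "A", "t3": "G", "t4": "A"}
      root_states, cost = fitch(tree, states)
      print(root_states, cost)                 # {'A'} 1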

  19. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2013-06-01

    Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  20. Mixed integer linear programming for maximum-parsimony phylogeny inference.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2008-01-01

    Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.

  1. PTree: pattern-based, stochastic search for maximum parsimony phylogenies.

    Science.gov (United States)

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  2. Parsimonious model for blood glucose level monitoring in type 2 diabetes patients.

    Science.gov (United States)

    Zhao, Fang; Ma, Yan Fen; Wen, Jing Xiao; DU, Yan Fang; Li, Chun Lin; Li, Guang Wei

    2014-07-01

    To establish a parsimonious model for blood glucose monitoring in patients with type 2 diabetes receiving oral hypoglycemic agent treatment. One hundred and fifty-nine adult Chinese type 2 diabetes patients were randomized to receive rapid-acting or sustained-release gliclazide therapy for 12 weeks. Their blood glucose levels were measured at 10 time points in a 24 h period before and after treatment, and the 24 h mean blood glucose (MBG) levels were calculated. Contribution of blood glucose levels to the MBG and HbA1c was assessed by multiple regression analysis. The correlation coefficients of the blood glucose levels measured at the 10 time points with the daily MBG were 0.58-0.74 and 0.59-0.79, respectively, before and after treatment. Blood glucose levels measured at 6 of the 10 time points could explain 95% and 97% of the changes in MBG before and after treatment. The three blood glucose levels measured at fasting, 2 h after breakfast and before dinner could explain 84% and 86% of the changes in MBG before and after treatment, but could only explain 36% and 26% of the changes in HbA1c before and after treatment, and they had a poorer correlation with HbA1c than with the 24 h MBG. The blood glucose levels measured at fasting, 2 h after breakfast and before dinner truly reflected changes in the 24 h blood glucose level, suggesting that they are appropriate for the self-monitoring of blood glucose in diabetes patients receiving oral anti-diabetes therapy.

  3. Balancing practicality and hydrologic realism: a parsimonious approach for simulating rapid groundwater recharge via unsaturated-zone preferential flow

    Science.gov (United States)

    Mirus, Benjamin B.; Nimmo, J.R.

    2013-01-01

    The impact of preferential flow on recharge and contaminant transport poses a considerable challenge to water-resources management. Typical hydrologic models require extensive site characterization, but can underestimate fluxes when preferential flow is significant. A recently developed source-responsive model incorporates film-flow theory with conservation of mass to estimate unsaturated-zone preferential fluxes with readily available data. The term source-responsive describes the sensitivity of preferential flow in response to water availability at the source of input. We present the first rigorous tests of a parsimonious formulation for simulating water table fluctuations using two case studies, both in arid regions with thick unsaturated zones of fractured volcanic rock. Diffuse flow theory cannot adequately capture the observed water table responses at both sites; the source-responsive model is a viable alternative. We treat the active area fraction of preferential flow paths as a scaled function of water inputs at the land surface then calibrate the macropore density to fit observed water table rises. Unlike previous applications, we allow the characteristic film-flow velocity to vary, reflecting the lag time between source and deep water table responses. Analysis of model performance and parameter sensitivity for the two case studies underscores the importance of identifying thresholds for initiation of film flow in unsaturated rocks, and suggests that this parsimonious approach is potentially of great practical value.

  4. Challenges in modelling the random structure correctly in growth mixture models and the impact this has on model mixtures.

    Science.gov (United States)

    Gilthorpe, M S; Dahly, D L; Tu, Y K; Kubzansky, L D; Goodman, E

    2014-06-01

    Lifecourse trajectories of clinical or anthropological attributes are useful for identifying how our early-life experiences influence later-life morbidity and mortality. Researchers often use growth mixture models (GMMs) to estimate such phenomena. It is common to place constraints on the random part of the GMM to improve parsimony or to aid convergence, but this can lead to an autoregressive structure that distorts the nature of the mixtures and subsequent model interpretation. This is especially true if changes in the outcome within individuals are gradual compared with the magnitude of differences between individuals. This is not widely appreciated, nor is its impact well understood. Using repeat measures of body mass index (BMI) for 1528 US adolescents, we estimated GMMs that required variance-covariance constraints to attain convergence. We contrasted constrained models with and without an autocorrelation structure to assess the impact this had on the ideal number of latent classes, their size and composition. We also contrasted model options using simulations. When the GMM variance-covariance structure was constrained, a within-class autocorrelation structure emerged. When not modelled explicitly, this led to poorer model fit and models that differed substantially in the ideal number of latent classes, as well as class size and composition. Failure to carefully consider the random structure of data within a GMM framework may lead to erroneous model inferences, especially for outcomes with greater within-person than between-person homogeneity, such as BMI. It is crucial to reflect on the underlying data generation processes when building such models.

  5. Where and why hyporheic exchange is important: Inferences from a parsimonious, physically-based river network model

    Science.gov (United States)

    Gomez-Velez, J. D.; Harvey, J. W.

    2014-12-01

    Hyporheic exchange has been hypothesized to have basin-scale consequences; however, predictions throughout river networks are limited by available geomorphic and hydrogeologic data as well as models that can analyze and aggregate hyporheic exchange flows across large spatial scales. We developed a parsimonious but physically-based model of hyporheic flow for application in large river basins: Networks with EXchange and Subsurface Storage (NEXSS). At the core of NEXSS is a characterization of the channel geometry, geomorphic features, and related hydraulic drivers based on scaling equations from the literature and readily accessible information such as river discharge, bankfull width, median grain size, sinuosity, channel slope, and regional groundwater gradients. Multi-scale hyporheic flow is computed based on combining simple but powerful analytical and numerical expressions that have been previously published. We applied NEXSS across a broad range of geomorphic diversity in river reaches and synthetic river networks. NEXSS demonstrates that vertical exchange beneath submerged bedforms dominates hyporheic fluxes and turnover rates along the river corridor. Moreover, the hyporheic zone's potential for biogeochemical transformations is comparable across stream orders, but the abundance of lower-order channels results in a considerably higher cumulative effect for low-order streams. Thus, vertical exchange beneath submerged bedforms has more potential for biogeochemical transformations than lateral exchange beneath banks, although lateral exchange through meanders may be important in large rivers. These results have implications for predicting outcomes of river and basin management practices.

  6. Consequence Valuing as Operation and Process: A Parsimonious Analysis of Motivation

    Science.gov (United States)

    Whelan, Robert; Barnes-Holmes, Dermot

    2010-01-01

    The concept of the motivating operation (MO) has been subject to 3 criticisms: (a) the terms and concepts employed do not always overlap with traditional behavior-analytic verbal practices; (b) the dual nature of the MO is unclear; and (c) there is a lack of adequate contact with empirical data. We offer a more parsimonious approach to motivation,…

  7. A Parsimonious Model of the Rabbit Action Potential Elucidates the Minimal Physiological Requirements for Alternans and Spiral Wave Breakup.

    Science.gov (United States)

    Gray, Richard A; Pathmanathan, Pras

    2016-10-01

    Elucidating the underlying mechanisms of fatal cardiac arrhythmias requires a tight integration of electrophysiological experiments, models, and theory. Existing models of transmembrane action potential (AP) are complex (resulting in overparameterization) and varied (leading to dissimilar predictions). Thus, simpler models are needed to elucidate the "minimal physiological requirements" to reproduce significant observable phenomena using as few parameters as possible. Moreover, models have been derived from experimental studies from a variety of species under a range of environmental conditions (for example, all existing rabbit AP models incorporate a formulation of the rapid sodium current, INa, based on 30-year-old data from chick embryo cell aggregates). Here we develop a simple "parsimonious" rabbit AP model that is mathematically identifiable (i.e., not overparameterized) by combining a novel Hodgkin-Huxley formulation of INa with a phenomenological model of repolarization similar to the voltage-dependent, time-independent rectifying outward potassium current (IK). The model was calibrated using the following experimental data sets measured from the same species (rabbit) under physiological conditions: dynamic current-voltage (I-V) relationships during the AP upstroke; rapid recovery of AP excitability during the relative refractory period; and steady-state INa inactivation via voltage clamp. Simulations reproduced several important "emergent" phenomena including cellular alternans at rates > 250 bpm as observed in rabbit myocytes, reentrant spiral waves as observed on the surface of the rabbit heart, and spiral wave breakup. Model variants were studied which elucidated the minimal requirements for alternans and spiral wave breakup, namely the kinetics of INa inactivation and the non-linear rectification of IK. The simplicity of the model, and the fact that its parameters have physiological meaning, make it ideal for engendering generalizable mechanistic
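
    The rabbit model itself is not reproduced in this record; as a stand-in, the classic two-variable FitzHugh-Nagumo system (a different, well-known minimal excitable-cell model, shown only to convey the spirit of parsimonious AP modeling) is sketched below with its textbook parameter values.

      # FitzHugh-Nagumo: fast voltage-like variable v, slow recovery w.
      a, b, eps, I_stim = 0.7, 0.8, 0.08, 0.5
      dt, n_steps = 0.1, 2000
      v, w = -1.2, -0.6                             # near the resting state
      trace = []
      for k in range(n_steps):
          I = I_stim if k * dt < 5.0 else 0.0       # brief stimulus, one AP
          dv = v - v ** 3 / 3.0 - w + I
          dw = eps * (v + a - b * w)
          v, w = v + dt * dv, w + dt * dw           # forward-Euler step
          trace.append(v)
      print(f"resting v = {trace[-1]:.2f}, peak v = {max(trace):.2f}")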

  8. Vector Autoregressions with Parsimoniously Time Varying Parameters and an Application to Monetary Policy

    DEFF Research Database (Denmark)

    Callot, Laurent; Kristensen, Johannes Tang

    …the monetary policy response to inflation and business cycle fluctuations in the US by estimating a parsimoniously time varying parameter Taylor rule. We document substantial changes in the policy response of the Fed in the 1970s and 1980s, and since 2007, but also document the stability of this response...

  9. Hypothesis of the Disappearance of the Limits of Improvidence and Parsimony in the Function of Consumption in an Islamic Economy

    Directory of Open Access Journals (Sweden)

    محمد أحمد حسن الأفندي

    2018-04-01

    Full Text Available There is a rich literature on the analysis of consumption behavior from the perspective of Islamic economics. The focus of such literature has been on incorporating the effect of moral values on individuals' consumption behavior. However, studies on consumption have not paid enough heed to analyzing the ultimate effect of faith values on the track of consumption behavior over time. This desired track of consumption involves certain hypotheses and probabilities. This study suggests a normative statement which includes the gradual disappearance of parsimony and improvidence over time. This disappearance would correct the deviation of society members' actual consumption from the desired moderate consumption level, so as to bring households' consumption behavior to the desired level consistent with Islamic Sharia. The study emphasizes the need to develop analysis and research in two integrated directions: (i) conducting more empirical studies to examine the consistency of the normative statement with evidence from real situations, and (ii) conducting more analysis to develop a specific measure for the desired consumption levels as well as the limits of parsimony and improvidence. Keywords: Disappearance of improvidence and parsimony limits, Desired moderate consumption level, Actual consumption, Improvidence and parsimony consumption levels, Track of households' consumption behavior.

  10. On the Quirks of Maximum Parsimony and Likelihood on Phylogenetic Networks

    OpenAIRE

    Bryant, Christopher; Fischer, Mareike; Linz, Simone; Semple, Charles

    2015-01-01

    Maximum parsimony is one of the most frequently-discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become more and more apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer that are commonplace in many groups of organisms and result in mosaic patterns of relationships cannot be represented by a single phylogenetic tree. This is why phylogene...

  11. A physically-based parsimonious hydrological model for flash floods in Mediterranean catchments

    Directory of Open Access Journals (Sweden)

    H. Roux

    2011-09-01

    Full Text Available A spatially distributed hydrological model, dedicated to flood simulation, is developed on the basis of physical process representation (infiltration, overland flow, channel routing). Estimation of model parameters requires data concerning topography, soil properties, vegetation and land use. Four parameters are calibrated for the entire catchment using one flood event. Model sensitivity to individual parameters is assessed using Monte Carlo simulations. Results of this sensitivity analysis, with a criterion based on the Nash efficiency coefficient and the error of peak time and runoff, are used to calibrate the model. This procedure is tested on the Gardon d'Anduze catchment, located in the Mediterranean zone of southern France. A first validation is conducted using three flood events with different hydrometeorological characteristics. This sensitivity analysis along with the validation tests illustrates the predictive capability of the model and points out possible improvements to the model's structure and parameterization for flash flood forecasting, especially in ungauged basins. Concerning the model structure, results show that water transfer through the subsurface zone also contributes to the hydrograph response to an extreme event, especially during the recession period. Maps of soil saturation emphasize the impact of rainfall and soil property variability on these dynamics. Adding a subsurface flow component in the simulation also greatly impacts the spatial distribution of soil saturation and shows the importance of the drainage network. Measurements of such distributed variables would help discriminate between different possible model structures.
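
    The calibration criterion above is built on the Nash efficiency coefficient; a small reference implementation follows, with invented hydrographs.

```python
# Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the simulation does
# no better than the mean of the observations. Hydrograph values are invented.
import numpy as np

def nse(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    residual = np.sum((observed - simulated) ** 2)
    variance = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual / variance

obs = np.array([5.0, 20.0, 120.0, 260.0, 180.0, 60.0, 15.0])   # m^3/s
sim = np.array([6.0, 25.0, 100.0, 240.0, 200.0, 70.0, 20.0])
print(f"NSE = {nse(obs, sim):.3f}")
```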

  12. A mixed integer linear programming model to reconstruct phylogenies from single nucleotide polymorphism haplotypes under the maximum parsimony criterion

    Science.gov (United States)

    2013-01-01

    Background Phylogeny estimation from aligned haplotype sequences has attracted more and more attention in recent years due to its importance in the analysis of many fine-scale genetic data. Its application fields range from medical research, to drug discovery, to epidemiology, to population dynamics. The literature on molecular phylogenetics proposes a number of criteria for selecting a phylogeny from among plausible alternatives. Usually, such criteria can be expressed by means of objective functions, and the phylogenies that optimize them are referred to as optimal. One of the most important estimation criteria is parsimony, which states that the optimal phylogeny T∗ for a set H of n haplotype sequences over a common set of variable loci is the one that satisfies the following requirements: (i) it has the shortest length and (ii) it is such that, for each pair of distinct haplotypes hi, hj ∈ H, the sum of the edge weights belonging to the path from hi to hj in T∗ is not smaller than the observed number of changes between hi and hj. Finding the most parsimonious phylogeny for H involves solving an optimization problem, called the Most Parsimonious Phylogeny Estimation Problem (MPPEP), which is NP-hard in many of its versions. Results In this article we investigate a recent version of the MPPEP that arises when input data consist of single nucleotide polymorphism haplotypes extracted from a population of individuals on a common genomic region. Specifically, we explore the prospects for improving on the implicit enumeration strategy used in previous work, using a novel problem formulation and a series of strengthening valid inequalities and preliminary symmetry breaking constraints to more precisely bound the solution space and accelerate implicit enumeration of possible optimal phylogenies. We present the basic formulation and then introduce a series of provable valid constraints to reduce the solution space. We then prove…

  13. Assessing Credit with Equity : A CEV Model with Jump to Default

    NARCIS (Netherlands)

    Campi, L.; Polbennikov, S.Y.; Sbuelz, A.

    2005-01-01

    Unlike in structural and reduced-form models, we use equity as a liquid and observable primitive to analytically value corporate bonds and credit default swaps. Restrictive assumptions on the firm's capital structure are avoided. Default is parsimoniously represented by equity value hitting the zero…

  14. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
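
    For readers unfamiliar with MP ancestral state inference, a small sketch of the bottom-up (Fitch) pass on a fully bifurcating tree; the tree and tip states are invented. When the root set is a singleton, MP unambiguously returns that state for the last common ancestor, which is the situation the Charleston-Steel conjecture quantifies.

```python
# Fitch's algorithm for one site: leaves carry observed states; at each
# internal node take the intersection of the children's state sets if it is
# non-empty (no extra change), else the union (one extra change).
def fitch(node, tip_states):
    if isinstance(node, str):                    # leaf: node is a taxon name
        return {tip_states[node]}, 0
    left, right = node
    s1, c1 = fitch(left, tip_states)
    s2, c2 = fitch(right, tip_states)
    common = s1 & s2
    if common:
        return common, c1 + c2
    return s1 | s2, c1 + c2 + 1

tips = {"t1": "a", "t2": "a", "t3": "c", "t4": "a", "t5": "a"}
tree = ((("t1", "t2"), "t3"), ("t4", "t5"))      # fully bifurcating, rooted
root_states, changes = fitch(tree, tips)
print(root_states, changes)                      # {'a'} with 1 change
```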

  15. Things fall apart: biological species form unconnected parsimony networks.

    Science.gov (United States)

    Hart, Michael W; Sunday, Jennifer

    2007-10-22

    The generality of operational species definitions is limited by problematic definitions of between-species divergence. A recent phylogenetic species concept based on a simple objective measure of statistically significant genetic differentiation uses between-species application of statistical parsimony networks that are typically used for population genetic analysis within species. Here we review recent phylogeographic studies and reanalyse several mtDNA barcoding studies using this method. We found that (i) alignments of DNA sequences typically fall apart into a separate subnetwork for each Linnean species (but with a higher rate of true positives for mtDNA data) and (ii) DNA sequences from single species typically stick together in a single haplotype network. Departures from these patterns are usually consistent with hybridization or cryptic species diversity.
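
    A sketch of the network construction behind this idea, assuming a fixed connection limit; a real statistical parsimony analysis derives the limit from a 95% probability criterion. Sequences, names, and the limit are invented.

```python
# Link haplotypes whose pairwise differences are within the connection limit,
# then list connected components: one subnetwork per species is the pattern
# reported above for most Linnean species.
import itertools
import networkx as nx

haplotypes = {  # toy fragments for two nominal species
    "spA_h1": "ACGTACGT", "spA_h2": "ACGTACGA",
    "spB_h1": "ACGTTCCT", "spB_h2": "ACGTTCCA",
}
LIMIT = 1       # maximum substitutions for a connection (assumed)

G = nx.Graph()
G.add_nodes_from(haplotypes)
for (n1, s1), (n2, s2) in itertools.combinations(haplotypes.items(), 2):
    if sum(a != b for a, b in zip(s1, s2)) <= LIMIT:
        G.add_edge(n1, n2)

for component in nx.connected_components(G):
    print(sorted(component))   # the alignment "falls apart" by species
```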

  16. Catchment legacies and time lags: a parsimonious watershed model to predict the effects of legacy storage on nitrogen export.

    Directory of Open Access Journals (Sweden)

    Kimberly J Van Meter

    Full Text Available Nutrient legacies in anthropogenic landscapes, accumulated over decades of fertilizer application, lead to time lags between implementation of conservation measures and improvements in water quality. Quantification of such time lags has remained difficult, however, due to an incomplete understanding of controls on nutrient depletion trajectories after changes in land-use or management practices. In this study, we have developed a parsimonious watershed model for quantifying catchment-scale time lags based on both soil nutrient accumulations (biogeochemical legacy and groundwater travel time distributions (hydrologic legacy. The model accurately predicted the time lags observed in an Iowa watershed that had undergone a 41% conversion of area from row crop to native prairie. We explored the time scales of change for stream nutrient concentrations as a function of both natural and anthropogenic controls, from topography to spatial patterns of land-use change. Our results demonstrate that the existence of biogeochemical nutrient legacies increases time lags beyond those due to hydrologic legacy alone. In addition, we show that the maximum concentration reduction benefits vary according to the spatial pattern of intervention, with preferential conversion of land parcels having the shortest catchment-scale travel times providing proportionally greater concentration reductions as well as faster response times. In contrast, a random pattern of conversion results in a 1:1 relationship between percent land conversion and percent concentration reduction, irrespective of denitrification rates within the landscape. Our modeling framework allows for the quantification of tradeoffs between costs associated with implementation of conservation measures and the time needed to see the desired concentration reductions, making it of great value to decision makers regarding optimal implementation of watershed conservation measures.
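
    A toy rendering of the legacy mechanism, assuming an invented nitrogen-surplus history, a gamma-like travel time distribution, and first-order losses en route; the published model couples an explicit soil nutrient mass balance with catchment-specific travel-time distributions.

```python
# Stream N export as the convolution of historical inputs with a groundwater
# travel-time distribution: today's load still reflects pre-2000 surpluses,
# which is the time-lag (legacy) effect described above.
import numpy as np

years = np.arange(1950, 2021)
surplus = np.where(years < 2000, 50.0, 20.0)   # kg/ha/yr; drops after 2000

tau = np.arange(0, 40)                          # travel time in years
ttd = (tau / 100.0) * np.exp(-tau / 10.0)       # gamma-like TTD (assumed)
ttd /= ttd.sum()
losses = np.exp(-0.02 * tau)                    # denitrification en route

export = np.convolve(surplus, ttd * losses)[: len(years)]
print(f"relative export in 2020: {export[-1]:.1f} (inputs halved 20 y earlier)")
```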

  17. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    Science.gov (United States)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP, and the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using daily streamflow records from a station on Senoz Stream, Turkey. Compared with the efficiency of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model yielded a parsimonious solution of noteworthy practical value. In addition, the approach allows the user to bring human insight into the problem, examining evolved models and picking the best-performing programs out for further analysis.
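
    A sketch of the third, simplification ingredient: selecting the complexity-efficiency Pareto front from candidate models. The (complexity, error) pairs below are invented stand-ins for evolved MGGP programs.

```python
# Keep only candidates not dominated in both complexity (tree nodes) and error
# (RMSE); a user then trades off interactively along the front.
candidates = [  # (n_nodes, RMSE) -- invented values
    (3, 0.90), (5, 0.55), (8, 0.52), (10, 0.60), (12, 0.40), (35, 0.38),
]

def pareto_front(models):
    front = []
    for c, e in models:
        dominated = any(c2 <= c and e2 <= e and (c2, e2) != (c, e)
                        for c2, e2 in models)
        if not dominated:
            front.append((c, e))
    return sorted(front)

print(pareto_front(candidates))  # (10, 0.60) drops out: dominated by (8, 0.52)
# A parsimonious choice here is (12, 0.40): tripling complexity to 35 nodes
# buys almost no additional accuracy.
```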

  18. A large version of the small parsimony problem

    DEFF Research Database (Denmark)

    Fredslund, Jakob; Hein, Jotun; Scharling, Tejs

    2003-01-01

    Given a multiple alignment over $k$ sequences, an evolutionary tree relating the sequences, and a subadditive gap penalty function (e.g. an affine function), we reconstruct the internal nodes of the tree optimally: we find the optimal explanation in terms of indels of the observed gaps and find the most parsimonious assignment of nucleotides. The gaps of the alignment are represented in a so-called gap graph, and through theoretically sound preprocessing the graph is reduced to pave the way for a running time which in all but the most pathological examples is far better than the exponential worst case time. E.g. for a tree with nine leaves and a random alignment of length 10,000 with 60% gaps, the running time is on average around 45 seconds. For a real alignment of length 9868 of nine HIV-1 sequences, the running time is less than one second.

  19. The application of a social cognition model in explaining fruit intake in Austrian, Norwegian and Spanish schoolchildren using structural equation modelling

    Directory of Open Access Journals (Sweden)

    Pérez-Rodrigo Carmen

    2007-11-01

    Full Text Available Abstract Background The aim of this paper was to test the goodness of fit of the Attitude – Social influence – self-Efficacy (ASE model in explaining schoolchildren's intentions to eat fruit and their actual fruit intake in Austria, Norway and Spain; to assess how well the model could explain the observed variance in intention to eat fruit and in reported fruit intake and to investigate whether the same model would fit data from all three countries. Methods Samples consisted of schoolchildren from three of the countries participating in the cross-sectional part of the Pro Children project. Sample size varied from 991 in Austria to 1297 in Spain. Mean age ranged from 11.3 to 11.4 years. The initial model was designed using items and constructs from the Pro Children study. Factor analysis was conducted to test the structure of the measures in the model. The Norwegian sample was used to test the latent variable structure, to make a preliminary assessment of model fit, and to modify the model to increase goodness of fit with the data. The original and modified models were then applied to the Austrian and Spanish samples. All model analyses were carried out using structural equation modelling techniques. Results The ASE-model fitted the Norwegian and Spanish data well. For Austria, a slightly more complex model was needed. For this reason multi-sample analysis to test equality in factor structure and loadings across countries could not be used. The models explained between 51% and 69% of the variance in intention to eat fruit, and 27% to 38% of the variance in reported fruit intake. Conclusion Structural equation modelling showed that a rather parsimonious model was useful in explaining the variation in fruit intake of 11-year-old schoolchildren in Norway and Spain. For Austria, more modifications were needed to fit the data.

  20. A Parsimonious Instrument for Predicting Students' Intent to Pursue a Sales Career: Scale Development and Validation

    Science.gov (United States)

    Peltier, James W.; Cummins, Shannon; Pomirleanu, Nadia; Cross, James; Simon, Rob

    2014-01-01

    Students' desire and intention to pursue a career in sales continue to lag behind industry demand for sales professionals. This article develops and validates a reliable and parsimonious scale for measuring and predicting student intention to pursue a selling career. The instrument advances previous scales in three ways. The instrument is…

  1. A structural model of the dimensions of teacher stress.

    Science.gov (United States)

    Boyle, G J; Borg, M G; Falzon, J M; Baglioni, A J

    1995-03-01

    A comprehensive survey of teacher stress, job satisfaction and career commitment among 710 full-time primary school teachers was undertaken by Borg, Riding & Falzon (1991) in the Mediterranean islands of Malta and Gozo. A principal components analysis of a 20-item sources of teacher stress inventory had suggested four distinct dimensions which were labelled: Pupil Misbehaviour, Time/Resource Difficulties, Professional Recognition Needs, and Poor Relationships, respectively. To check on the validity of the Borg et al. factor solution, the group of 710 teachers was randomly split into two separate samples. Exploratory factor analysis was carried out on the data from Sample 1 (N = 335), while Sample 2 (N = 375) provided the cross-validational data for a LISREL confirmatory factor analysis. Results supported the proposed dimensionality of the sources of teacher stress (measurement model), along with evidence of an additional teacher stress factor (Workload). Consequently, structural modelling of the 'causal relationships' between the various latent variables and self-reported stress was undertaken on the combined samples (N = 710). Although both non-recursive and recursive models incorporating Poor Colleague Relations as a mediating variable were tested for their goodness-of-fit, a simple regression model provided the most parsimonious fit to the empirical data, wherein Workload and Student Misbehaviour accounted for most of the variance in predicting teaching stress.

  2. Continuous-Time Semi-Markov Models in Health Economic Decision Making : An Illustrative Example in Heart Failure Disease Management

    NARCIS (Netherlands)

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease

  3. Unified theory for stochastic modelling of hydroclimatic processes: Preserving marginal distributions, correlation structures, and intermittency

    Science.gov (United States)

    Papalexiou, Simon Michael

    2018-05-01

    Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
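
    A minimal sketch of the parent-Gaussian strategy, under assumed parameters: an AR(1) Gaussian process is pushed through the normal CDF and a target gamma quantile function. Choosing the parent autocorrelation so that the transformed series matches a target correlation structure is exactly what the paper's parametric correlation transformation functions provide; here the parent lag-1 correlation is simply fixed.

```python
# Simulate a "parent" Gaussian AR(1), then back-transform its marginal to a
# skewed gamma; ranks (and hence the dependence pattern) are preserved, while
# the Pearson autocorrelation of the transformed series is attenuated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, rho = 10_000, 0.8                        # assumed parent lag-1 correlation
z = np.empty(n)
z[0] = rng.standard_normal()
for t in range(1, n):
    z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

u = stats.norm.cdf(z)                        # uniform scores
x = stats.gamma.ppf(u, a=0.5, scale=2.0)     # target skewed marginal

lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"lag-1 correlation after transformation: {lag1:.2f} (parent was 0.8)")
```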

  4. Assessment of Genetic Heterogeneity in Structured Plant Populations Using Multivariate Whole-Genome Regression Models.

    Science.gov (United States)

    Lehermeier, Christina; Schön, Chris-Carolin; de Los Campos, Gustavo

    2015-09-01

    Plant breeding populations exhibit varying levels of structure and admixture; these features are likely to induce heterogeneity of marker effects across subpopulations. Traditionally, structure has been dealt with as a potential confounder, and various methods exist to "correct" for population stratification. However, these methods induce a mean correction that does not account for heterogeneity of marker effects. The animal breeding literature offers a few recent studies that consider modeling genetic heterogeneity in multibreed data, using multivariate models. However, these methods have received little attention in plant breeding where population structure can have different forms. In this article we address the problem of analyzing data from heterogeneous plant breeding populations, using three approaches: (a) a model that ignores population structure [A-genome-based best linear unbiased prediction (A-GBLUP)], (b) a stratified (i.e., within-group) analysis (W-GBLUP), and (c) a multivariate approach that uses multigroup data and accounts for heterogeneity (MG-GBLUP). The performance of the three models was assessed on three different data sets: a diversity panel of rice (Oryza sativa), a maize (Zea mays L.) half-sib panel, and a wheat (Triticum aestivum L.) data set that originated from plant breeding programs. The estimated genomic correlations between subpopulations varied from null to moderate, depending on the genetic distance between subpopulations and traits. Our assessment of prediction accuracy features cases where ignoring population structure leads to a parsimonious, more powerful model as well as others where the multivariate and stratified approaches have higher predictive power. In general, the multivariate approach appeared slightly more robust than either the A- or the W-GBLUP. Copyright © 2015 by the Genetics Society of America.

  5. Sparse Linear Identifiable Multivariate Modeling

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2011-01-01

    In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully… and bench-marked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable…

  6. The scenario on the origin of translation in the RNA world: in principle of replication parsimony

    Directory of Open Access Journals (Sweden)

    Ma Wentao

    2010-11-01

    Full Text Available Abstract Background It is now believed that in the origin of life, proteins should have been "invented" in an RNA world. However, due to the complexity of a possible RNA-based proto-translation system, this evolving process seems quite complicated and the associated scenario remains very blurry. Considering that RNA can bind amino acids with specificity, it has been reasonably supposed that initial peptides might have been synthesized on "RNA templates" containing multiple amino acid binding sites. This "Direct RNA Template" (DRT) mechanism is attractive because it should be the simplest mechanism for RNA to synthesize peptides, and thus very likely to have been adopted initially in the RNA world. How this mechanism could then develop into a proto-translation system is an interesting problem. Presentation of the hypothesis Here an explanation of this problem is offered, considering the principle of "replication parsimony": genetic information tends to be utilized in a parsimonious way under selection pressure, due to its replication cost (e.g., in the RNA world, nucleotides and ribozymes for RNA replication). Because a DRT would be quite long even for a short peptide, its replication cost would be great. Thus the diversity and the length of functional peptides synthesized by the DRT mechanism would be seriously limited. Adaptors (proto-tRNAs) would arise to allow a DRT's complementary strand (called "C-DRT" here) to direct the synthesis of the same peptide synthesized by the DRT itself. Because the C-DRT is a necessary part of the DRT's replication, fewer rounds of the DRT's replication would be needed to synthesize definite copies of the functional peptide, thus saving replication cost. Acting through adaptors, C-DRTs could transform into much shorter templates (called "proto-mRNAs" here) and substitute for the role of DRTs, thus significantly saving replication cost. A proto-rRNA corresponding to the small subunit rRNA would then emerge…

  7. Measurement bias detection with Kronecker product restricted models for multivariate longitudinal data : An illustration with health-related quality of life data from thirteen measurement occasions

    NARCIS (Netherlands)

    Verdam, M.G.E.; Oort, F.J.

    2014-01-01

    Highlights: - Application of Kronecker product to construct parsimonious structural equation models for multivariate longitudinal data. - A method for the investigation of measurement bias with Kronecker product restricted models. - Application of these methods to health-related quality of life data

  8. A parsimonious model for the proportional control valve

    OpenAIRE

    Elmer, KF; Gentle, CR

    2001-01-01

    A generic non-linear dynamic model of a direct-acting electrohydraulic proportional solenoid valve is presented. The valve consists of two subsystems: a spool assembly and one or two unidirectional proportional solenoids. These two subsystems are modelled separately. The solenoid is modelled as a non-linear resistor-inductor combination, with inductance parameters that change with current. An innovative modelling method has been used to represent these components. The spool assembly is model...

  9. Measurement bias detection with Kronecker product restricted models for multivariate longitudinal data: an illustration with health-related quality of life data from thirteen measurement occasions

    NARCIS (Netherlands)

    Verdam, Mathilde G. E.; Oort, Frans J.

    2014-01-01

    Application of Kronecker product to construct parsimonious structural equation models for multivariate longitudinal data. A method for the investigation of measurement bias with Kronecker product restricted models. Application of these methods to health-related quality of life data from bone

  10. Linear factor copula models and their properties

    KAUST Repository

    Krupskii, Pavel; Genton, Marc G.

    2018-01-01

    We consider a special case of factor copula models with additive common factors and independent components. These models are flexible and parsimonious with O(d) parameters where d is the dimension. The linear structure allows one to obtain closed form expressions for some copulas and their extreme‐value limits. These copulas can be used to model data with strong tail dependencies, such as extreme data. We study the dependence properties of these linear factor copula models and derive the corresponding limiting extreme‐value copulas with a factor structure. We show how parameter estimates can be obtained for these copulas and apply one of these copulas to analyse a financial data set.
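
    A sketch of the additive-common-factor construction under assumed distributions: a heavy-tailed common factor plus independent Gaussian components, with the copula exposed through rank pseudo-observations. Weights and distributions are illustrative, not those fitted in the paper.

```python
# Linear one-factor structure X_i = Z + eps_i with O(d) parameters; a
# t-distributed common factor induces strong joint tail dependence, visible
# as a far-higher-than-independent rate of simultaneous extremes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, d = 50_000, 3
z = stats.t.rvs(df=3, size=n, random_state=rng)   # heavy-tailed common factor
x = z[:, None] + rng.standard_normal((n, d))      # unit factor loadings

u = stats.rankdata(x, axis=0) / (n + 1)           # pseudo-observations (copula)
joint = (u > 0.99).all(axis=1).mean()
print(f"P(all {d} in top 1%) = {joint:.4%} vs {0.01**d:.4%} under independence")
```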

  12. Prosodic structure as a parallel to musical structure

    Directory of Open Access Journals (Sweden)

    Christopher Cullen Heffner

    2015-12-01

    Full Text Available What structural properties do language and music share? Although early speculation identified a wide variety of possibilities, the literature has largely focused on the parallels between musical structure and syntactic structure. Here, we argue that parallels between musical structure and prosodic structure deserve more attention. We review the evidence for a link between musical and prosodic structure and find it to be strong. In fact, certain elements of prosodic structure may provide a parsimonious comparison with musical structure without sacrificing empirical findings related to the parallels between language and music. We then develop several predictions related to such a hypothesis.

  13. Hide and vanish: data sets where the most parsimonious tree is known but hard to find, and their implications for tree search methods.

    Science.gov (United States)

    Goloboff, Pablo A

    2014-10-01

    Three different types of data sets, for which the uniquely most parsimonious tree can be known exactly but is hard to find with heuristic tree search methods, are studied. Tree searches are complicated more by the shape of the tree landscape (i.e. the distribution of homoplasy on different trees) than by the sheer abundance of homoplasy or character conflict. Data sets of Type 1 are those constructed by Radel et al. (2013). Data sets of Type 2 present a very rugged landscape, with narrow peaks and valleys, but relatively low amounts of homoplasy. For such a tree landscape, subjecting the trees to TBR and saving suboptimal trees produces much better results when the sequence of clipping for the tree branches is randomized instead of fixed. An unexpected finding for data sets of Types 1 and 2 is that starting a search from a random tree instead of a random addition sequence Wagner tree may increase the probability that the search finds the most parsimonious tree; a small artificial example where these probabilities can be calculated exactly is presented. Data sets of Type 3, the most difficult data sets studied here, comprise only congruent characters, and a single island with only one most parsimonious tree. Even if there is a single island, missing entries create a very flat landscape which is difficult to traverse with tree search algorithms because the number of equally parsimonious trees that need to be saved and swapped to effectively move around the plateaus is too large. Minor modifications of the parameters of tree drifting, ratchet, and sectorial searches allow travelling around these plateaus much more efficiently than saving and swapping large numbers of equally parsimonious trees with TBR. For these data sets, two new related criteria for selecting taxon addition sequences in Wagner trees (the "selected" and "informative" addition sequences) produce much better results than the standard random or closest addition sequences. These new methods for Wagner

  14. Carbon-nitrogen-water interactions: is model parsimony fruitful?

    Science.gov (United States)

    Puertes, Cristina; González-Sanchis, María; Lidón, Antonio; Bautista, Inmaculada; Lull, Cristina; Francés, Félix

    2017-04-01

    It is well known that the carbon and nitrogen cycles are highly intertwined and that both should be explained through the water balance. In fact, in water-controlled ecosystems nutrient deficit is related to this water scarcity. For this reason, the present study compares the capability of three models in reproducing the interaction between the carbon and nitrogen cycles and the water cycle. The models are BIOME-BGCMuSo, LEACHM and a simple carbon-nitrogen model coupled to the hydrological model TETIS. Biome-BGCMuSo and LEACHM are two widely used models that reproduce the carbon and nitrogen cycles adequately. However, their main limitation is that these models are quite complex and can be too detailed for watershed studies. On the contrary, the TETIS nutrient sub-model is a conceptual model with a vertical tank distribution over the active soil depth, dividing it in two layers. Only the input of the added litter and the losses due to soil respiration, denitrification, leaching and plant uptake are considered as external fluxes. Other fluxes have been neglected. The three models have been implemented in an experimental plot of a semi-arid catchment (La Hunde, East of Spain), mostly covered by holm oak (Quercus ilex). Plant transpiration, soil moisture and runoff have been monitored daily during nearly two years (26/10/2012 to 30/09/2014). For the same period, soil samples were collected every two months and taken to the lab in order to obtain the concentrations of dissolved organic carbon, microbial biomass carbon, ammonium and nitrate. In addition, between field trips soil samples were placed in PVC tubes with resin traps and were left incubating (in situ buried cores). Thus, mineralization and nitrification fluxes accumulated over two months were obtained. The ammonium and nitrate leaching accumulated over two months were measured using ion-exchange resin cores. Soil respiration was also measured on every field trip. Finally, water samples deriving from runoff were collected…

  15. Systematics and morphological evolution within the moss family Bryaceae: a comparison between parsimony and Bayesian methods for reconstruction of ancestral character states.

    Science.gov (United States)

    Pedersen, Niklas; Holyoak, David T; Newton, Angela E

    2007-06-01

    The Bryaceae are a large cosmopolitan moss family including genera of significant morphological and taxonomic complexity. Phylogenetic relationships within the Bryaceae were reconstructed based on DNA sequence data from all three genomic compartments. In addition, maximum parsimony and Bayesian inference were employed to reconstruct ancestral character states of 38 morphological plus four habitat characters and eight insertion/deletion events. The recovered phylogenetic patterns are generally in accord with previous phylogenies based on chloroplast DNA sequence data and three major clades are identified. The first clade comprises Bryum bornholmense, B. rubens, B. caespiticium, and Plagiobryum. This corroborates the hypothesis suggested by previous studies that several Bryum species are more closely related to Plagiobryum than to the core Bryum species. The second clade includes Acidodontium, Anomobryum, and Haplodontium, while the third clade contains the core Bryum species plus Imbribryum. Within the latter clade, B. subapiculatum and B. tenuisetum form the sister clade to Imbribryum. Reconstructions of ancestral character states under maximum parsimony and Bayesian inference suggest fourteen morphological synapomorphies for the ingroup and synapomorphies are detected for most clades within the ingroup. Maximum parsimony and Bayesian reconstructions of ancestral character states are mostly congruent although Bayesian inference shows that the posterior probability of ancestral character states may decrease dramatically when node support is taken into account. Bayesian inference also indicates that reconstructions may be ambiguous at internal nodes for highly polymorphic characters.

  16. Prediction of traffic-related nitrogen oxides concentrations using Structural Time-Series models

    Science.gov (United States)

    Lawson, Anneka Ruth; Ghosh, Bidisha; Broderick, Brian

    2011-09-01

    Ambient air quality monitoring, modeling and compliance to the standards set by European Union (EU) directives and World Health Organization (WHO) guidelines are required to ensure the protection of human and environmental health. Congested urban areas are most susceptible to traffic-related air pollution which is the most problematic source of air pollution in Ireland. Long-term continuous real-time monitoring of ambient air quality at such urban centers is essential but often not realistic due to financial and operational constraints. Hence, the development of a resource-conservative ambient air quality monitoring technique is essential to ensure compliance with the threshold values set by the standards. As an intelligent and advanced statistical methodology, a Structural Time Series (STS) based approach has been introduced in this paper to develop a parsimonious and computationally simple air quality model. In STS methodology, the different components of a time-series dataset such as the trend, seasonal, cyclical and calendar variations can be modeled separately. To test the effectiveness of the proposed modeling strategy, average hourly concentrations of nitrogen dioxide and nitrogen oxides from a congested urban arterial in Dublin city center were modeled using STS methodology. The prediction error estimates from the developed air quality model indicate that the STS model can be a useful tool in predicting nitrogen dioxide and nitrogen oxides concentrations in urban areas and will be particularly useful in situations where the information on external variables such as meteorology or traffic volume is not available.
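
    A sketch of an STS fit using the UnobservedComponents class in statsmodels, on a synthetic hourly series standing in for the NOx data; the component specification (local linear trend plus a 24-hour seasonal) and the series itself are assumptions.

```python
# Decompose a synthetic "hourly NOx" series into trend and daily seasonal
# components, then forecast the next day; STS estimates each component
# separately, which is what keeps the model parsimonious and interpretable.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
hours = 24 * 60                                         # 60 days of hourly data
diurnal = 30 * np.sin(2 * np.pi * np.arange(hours) / 24)
nox = 80 + 0.01 * np.arange(hours) + diurnal + rng.normal(0, 5, hours)

model = sm.tsa.UnobservedComponents(nox, level="local linear trend", seasonal=24)
result = model.fit(disp=False)
print(result.forecast(steps=24))                        # next-day predictions
```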

  17. Time-Lapse Monitoring of Subsurface Fluid Flow using Parsimonious Seismic Interferometry

    KAUST Repository

    Hanafy, Sherif

    2017-04-21

    A typical small-scale seismic survey (such as 240 shot gathers) takes at least 16 working hours to complete, which is a major obstacle for time-lapse monitoring experiments. This is especially true if the subject that needs to be monitored is rapidly changing. In this work, we discuss how to decrease the recording time from 16 working hours to less than one hour of recording, where the virtual data has the same accuracy as the conventional data. We validate the efficacy of parsimonious seismic interferometry with the time-lapse monitoring idea using field examples, where we were able to record 30 different data sets within a 2-hour period. The recorded data are then processed to generate 30 snapshots that show the spread of water from the ground surface down to a few meters.

  18. Structural Model of psychological risk and protective factors affecting on quality of life in patients with coronary heart disease: A psychocardiology model

    Directory of Open Access Journals (Sweden)

    Zohreh Khayyam Nekouei

    2014-01-01

    Full Text Available Background: Research shows that psychological factors may have a very important role in the etiology, continuity and consequences of coronary heart disease. This study casts the psychological risk and protective factors and their effects in patients with coronary heart disease (CHD) in a structural model. It aims to determine the structural relations between psychological risk and protective factors and quality of life in patients with coronary heart disease. Materials and Methods: The present cross-sectional, correlational study was conducted using structural equation modeling. The study sample included 398 patients with coronary heart disease in the university referral hospital, as well as other city health care centers, in Isfahan city. They were selected based on a random sampling method and completed the following questionnaires: coping with stressful situations (CISS-21), life orientation (LOT-10), general self-efficacy (GSE-10), depression, anxiety and stress (DASS-21), perceived stress (PSS-14), multidimensional social support (MSPSS-12), alexithymia (TAS-20), spiritual intelligence (SQ-23) and quality of life (WHOQOL-26). Results: The results showed that protective and risk factors could affect the quality of life in patients with CHD, with factor loadings of 0.35 and −0.60, respectively. Moreover, based on the global values of the model, such as the relative chi-square (CMIN/DF = 3.25), the Comparative Fit Index (CFI = 0.93), the Parsimony Comparative Fit Index (PCFI = 0.68) and the Root Mean Square Error of Approximation (RMSEA = 0.07), and the details of the model (significance of the relationships), the psychocardiological structural model of the study was confirmed as a good-fitting model. Conclusion: This study was among the first to research the different psychological risk and protective factors of coronary heart disease in the form of a structural model. The results of this study have…

  19. ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.

    Science.gov (United States)

    Lee, Keunbaik; Baek, Changryong; Daniels, Michael J

    2017-11-01

    In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix should be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: modified Cholesky decomposition for autoregressive (AR) structure and moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix, which we denote ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
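
    A sketch of the modified Cholesky construction behind the AR part, using an AR(1)-style parameterization with made-up values; ARMACD itself combines such an autoregressive factor with a moving-average Cholesky factor.

```python
# Generalized autoregressive parameters fill a unit lower-triangular T and
# innovation variances a diagonal D; Sigma = T^{-1} D T^{-T} is positive
# definite by construction, here reproducing the AR(1) pattern phi^|i-j|.
import numpy as np

p, phi = 5, 0.6
T = np.eye(p)
for j in range(1, p):
    T[j, j - 1] = -phi                            # regress y_j on y_{j-1}
D = np.diag([1.0] + [1.0 - phi**2] * (p - 1))     # innovation variances

T_inv = np.linalg.inv(T)
Sigma = T_inv @ D @ T_inv.T
print(np.round(Sigma, 2))                         # Sigma[i, j] = 0.6 ** |i - j|
```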

  20. Composite scores in comparative effectiveness research: counterbalancing parsimony and dimensionality in patient-reported outcomes.

    Science.gov (United States)

    Schwartz, Carolyn E; Patrick, Donald L

    2014-07-01

    When planning a comparative effectiveness study comparing disease-modifying treatments, competing demands influence choice of outcomes. Current practice emphasizes parsimony, although understanding multidimensional treatment impact can help to personalize medical decision-making. We discuss both sides of this 'tug of war'. We discuss the assumptions, advantages and drawbacks of composite scores and multidimensional outcomes. We describe possible solutions to the multiple comparison problem, including conceptual hierarchy distinctions, statistical approaches, 'real-world' benchmarks of effectiveness and subgroup analysis. We conclude that comparative effectiveness research should consider multiple outcome dimensions and compare different approaches that fit the individual context of study objectives.

  1. Modeling survival: application of the Andersen-Gill model to Yellowstone grizzly bears

    Science.gov (United States)

    Johnson, Christopher J.; Boyce, Mark S.; Schwartz, Charles C.; Haroldson, Mark A.

    2004-01-01

     Wildlife ecologists often use the Kaplan-Meier procedure or Cox proportional hazards model to estimate survival rates, distributions, and magnitude of risk factors. The Andersen-Gill formulation (A-G) of the Cox proportional hazards model has seen limited application to mark-resight data but has a number of advantages, including the ability to accommodate left-censored data, time-varying covariates, multiple events, and discontinuous intervals of risks. We introduce the A-G model including structure of data, interpretation of results, and assessment of assumptions. We then apply the model to 22 years of radiotelemetry data for grizzly bears (Ursus arctos) of the Greater Yellowstone Grizzly Bear Recovery Zone in Montana, Idaho, and Wyoming, USA. We used Akaike's Information Criterion (AICc) and multi-model inference to assess a number of potentially useful predictive models relative to explanatory covariates for demography, human disturbance, and habitat. Using the most parsimonious models, we generated risk ratios, hypothetical survival curves, and a map of the spatial distribution of high-risk areas across the recovery zone. Our results were in agreement with past studies of mortality factors for Yellowstone grizzly bears. Holding other covariates constant, mortality was highest for bears that were subjected to repeated management actions and inhabited areas with high road densities outside Yellowstone National Park. Hazard models developed with covariates descriptive of foraging habitats were not the most parsimonious, but they suggested that high-elevation areas offered lower risks of mortality when compared to agricultural areas.
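
    A sketch of fitting counting-process (start, stop] data of the Andersen-Gill type with the lifelines package; the "bear-interval" records and covariate names are fabricated purely to show the data layout, and a small ridge penalty is added because the toy data set is tiny.

```python
# Each animal contributes multiple risk intervals with time-varying covariates
# and at most one event per interval -- the A-G counting-process layout.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

df = pd.DataFrame({
    "id":        [1, 1, 1, 2, 2, 3, 3],
    "start":     [0, 12, 24, 0, 12, 0, 12],
    "stop":      [12, 24, 30, 12, 20, 12, 24],
    "road_dens": [0.2, 0.4, 1.1, 0.9, 1.3, 0.1, 1.0],  # km/km^2 (fabricated)
    "mgmt_acts": [0, 0, 2, 1, 2, 0, 1],                # management actions
    "event":     [0, 0, 1, 0, 1, 0, 0],                # 1 = mortality
})

ctv = CoxTimeVaryingFitter(penalizer=0.1)
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()   # hazard ratios for road density and management actions
```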

  2. Nonparametric Transfer Function Models

    Science.gov (United States)

    Liu, Jun M.; Chen, Rong; Yao, Qiwei

    2009-01-01

    In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584

  3. A parsimonious approach to modeling animal movement data.

    Directory of Open Access Journals (Sweden)

    Yann Tremblay

    Full Text Available Animal tracking is a growing field in ecology, and previous work has shown that simple speed filtering of tracking data is not sufficient and that improvements in tracking location estimates are possible. To date, this has required methods that are complicated and often time-consuming (state-space models), resulting in limited application of this technique and the potential for analysis errors due to poor understanding of the fundamental framework behind the approach. We describe and test an alternative and intuitive approach consisting of bootstrapping random walks biased by forward particles. The model uses recorded data accuracy estimates, and can assimilate other sources of data such as sea-surface temperature, bathymetry and/or physical boundaries. We tested our model using ARGOS and geolocation tracks of elephant seals that also carried GPS tags in addition to PTTs, enabling true validation. Among pinnipeds, elephant seals are extreme divers that spend little time at the surface, which considerably impacts the quality of both ARGOS and light-based geolocation tracks. Despite such low overall track quality, our model provided location estimates within 4.0, 5.5 and 12.0 km of the true location 50% of the time, and within 9, 10.5 and 20.0 km 90% of the time, for above-average, average and below-average elephant seal ARGOS track qualities, respectively. With geolocation data, 50% of errors were less than 104.8 km (<0.94 degrees), and 90% were less than 199.8 km (<1.80 degrees). Larger errors were due to lack of sea-surface temperature gradients. In addition we show that our model is flexible enough to solve the obstacle avoidance problem by assimilating high-resolution coastline data. This reduced the number of invalid on-land locations by almost an order of magnitude. The method is intuitive, flexible and efficient, promising extensive utilization in future research.
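
    A stripped-down sketch of the bootstrapping idea under invented numbers: each noisy fix is perturbed by its reported accuracy, draws violating a maximum travel speed are discarded (a crude stand-in for the forward-particle bias), and the retained walks are averaged. Environmental-data assimilation and coastline handling are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
fixes = np.array([[0.0, 0.0], [0.8, 1.2], [2.1, 1.9], [3.0, 3.2]])  # raw track
sigma = np.array([0.30, 0.15, 0.40, 0.10])    # per-fix accuracy (assumed)
dt, v_max = 6.0, 0.4                          # hours between fixes; max speed

accepted = []
for _ in range(2000):
    walk = fixes + rng.standard_normal(fixes.shape) * sigma[:, None]
    speeds = np.linalg.norm(np.diff(walk, axis=0), axis=1) / dt
    if (speeds <= v_max).all():               # keep only plausible walks
        accepted.append(walk)

estimate = np.mean(accepted, axis=0)          # improved location estimates
print(len(accepted), "walks kept")
print(np.round(estimate, 2))
```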

  4. Innovative Bayesian and Parsimony Phylogeny of Dung Beetles (Coleoptera, Scarabaeidae, Scarabaeinae) Enhanced by Ontology-Based Partitioning of Morphological Characters

    Science.gov (United States)

    Tarasov, Sergei; Génier, François

    2015-01-01

    Scarabaeine dung beetles are the dominant dung feeding group of insects and are widely used as model organisms in conservation, ecology and developmental biology. Due to the conflicts among 13 recently published phylogenies dealing with the higher-level relationships of dung beetles, the phylogeny of this lineage remains largely unresolved. In this study, we conduct rigorous phylogenetic analyses of dung beetles, based on an unprecedented taxon sample (110 taxa) and detailed investigation of morphology (205 characters). We provide the description of morphology and thoroughly illustrate the used characters. Along with parsimony, traditionally used in the analysis of morphological data, we also apply the Bayesian method with a novel approach that uses anatomy ontology for matrix partitioning. This approach allows for heterogeneity in evolutionary rates among characters from different anatomical regions. Anatomy ontology generates a number of parameter-partition schemes which we compare using Bayes factor. We also test the effect of inclusion of autapomorphies in the morphological analysis, which hitherto has not been examined. Generally, schemes with more parameters were favored in the Bayesian comparison suggesting that characters located on different body regions evolve at different rates and that partitioning of the data matrix using anatomy ontology is reasonable; however, trees from the parsimony and all the Bayesian analyses were quite consistent. The hypothesized phylogeny reveals many novel clades and provides additional support for some clades recovered in previous analyses. Our results provide a solid basis for a new classification of dung beetles, in which the taxonomic limits of the tribes Dichotomiini, Deltochilini and Coprini are restricted and many new tribes must be described. Based on the consistency of the phylogeny with biogeography, we speculate that dung beetles may have originated in the Mesozoic contrary to the traditional view pointing to a

  5. Dependent defaults and losses with factor copula models

    Directory of Open Access Journals (Sweden)

    Ackerer Damien

    2017-12-01

    Full Text Available We present a class of flexible and tractable static factor models for the term structure of joint default probabilities, the factor copula models. These high-dimensional models remain parsimonious with pair-copula constructions, and nest many standard models as special cases. The loss distribution of a portfolio of contingent claims can be exactly and efficiently computed when individual losses are discretely supported on a finite grid. Numerical examples study the key features affecting the loss distribution and multi-name credit derivatives prices. An empirical exercise illustrates the flexibility of our approach by fitting credit index tranche prices.

  6. Job durations and the job search model : a two-country, multi-sample analysis

    OpenAIRE

    Bagger, Jesper; Henningsen, Morten

    2008-01-01

    Abstract: This paper assesses whether a parsimonious partial equilibrium job search model with on-the-job search can reproduce observed job durations and transitions to other jobs and to nonemployment. We allow for unobserved heterogeneity across individuals in key structural parameters. Observed heterogeneity and life cycle effects are accounted for by estimating separate models for flow samples of labor market entrants and stock samples of “mature” workers with 10-11 years of...

  7. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    Science.gov (United States)

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  8. Testing Models of Psychopathology in Preschool-aged Children Using a Structured Interview-based Assessment

    Science.gov (United States)

    Dougherty, Lea R.; Bufferd, Sara J.; Carlson, Gabrielle A.; Klein, Daniel N.

    2014-01-01

    A number of studies have found that broadband internalizing and externalizing factors provide a parsimonious framework for understanding the structure of psychopathology across childhood, adolescence, and adulthood. However, few of these studies have examined psychopathology in young children, and several recent studies have found support for alternative models, including a bi-factor model with common and specific factors. The present study used parents' (typically mothers') reports on a diagnostic interview in a community sample of 3-year-old children (n = 541; 53.9% male) to compare the internalizing-externalizing latent factor model with a bi-factor model. The bi-factor model provided a better fit to the data. To test the concurrent validity of this solution, we examined associations between this model and paternal reports and laboratory observations of child temperament. The internalizing factor was associated with low levels of surgency and high levels of fear; the externalizing factor was associated with high levels of surgency and disinhibition and low levels of effortful control; and the common factor was associated with high levels of surgency and negative affect and low levels of effortful control. These results suggest that psychopathology in preschool-aged children may be explained by a single, common factor influencing nearly all disorders and unique internalizing and externalizing factors. These findings indicate that shared variance across internalizing and externalizing domains is substantial and are consistent with recent suggestions that emotion regulation difficulties may be a common vulnerability for a wide array of psychopathology. PMID:24652485

  9. Singular Spectrum Analysis for Astronomical Time Series: Constructing a Parsimonious Hypothesis Test

    Science.gov (United States)

    Greco, G.; Kondrashov, D.; Kobayashi, S.; Ghil, M.; Branchesi, M.; Guidorzi, C.; Stratta, G.; Ciszak, M.; Marino, F.; Ortolan, A.

    We present a data-adaptive spectral method - Monte Carlo Singular Spectrum Analysis (MC-SSA) - and its modification to tackle astrophysical problems. Through numerical simulations we show the ability of MC-SSA to deal with 1/f^β power-law noise affected by photon counting statistics. Such a noise process is simulated by a first-order autoregressive AR(1) process corrupted by intrinsic Poisson noise. In doing so, we statistically estimate a basic stochastic variation of the source and the corresponding fluctuations due to the quantum nature of light. In addition, the MC-SSA test retains its effectiveness even when a significant percentage of the signal falls below a certain level of detection, e.g., due to the instrument sensitivity. The parsimonious approach presented here may be broadly applied, from the search for extrasolar planets to the extraction of low-intensity coherent phenomena possibly hidden in high-energy transients.
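    A minimal sketch of the null process described above - an AR(1) "red noise" source rate observed through Poisson photon counting - under assumed parameter values; an actual MC-SSA test would compare data eigenvalues against an ensemble of such surrogates.

        import numpy as np

        rng = np.random.default_rng(1)
        n, phi, mu = 4096, 0.8, 50.0       # assumed series length, AR(1) coefficient, mean count rate
        x = np.empty(n)
        x[0] = mu
        for t in range(1, n):              # latent "red noise" variation of the source
            x[t] = mu + phi * (x[t - 1] - mu) + rng.normal(0.0, 2.0)
        counts = rng.poisson(np.clip(x, 0.0, None))   # photon-counting (Poisson) corruption
        print(counts[:10])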

  10. Inferring phylogenetic networks by the maximum parsimony criterion: a case study.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2007-01-01

    Horizontal gene transfer (HGT) may result in genes whose evolutionary histories disagree with each other, as well as with the species tree. In this case, reconciling the species and gene trees results in a network of relationships, known as the "phylogenetic network" of the set of species. A phylogenetic network that incorporates HGT consists of an underlying species tree that captures vertical inheritance and a set of edges which model the "horizontal" transfer of genetic material. In a series of papers, Nakhleh and colleagues have recently formulated a maximum parsimony (MP) criterion for phylogenetic networks, provided an array of computationally efficient algorithms and heuristics for computing it, and demonstrated its plausibility on simulated data. In this article, we study the performance and robustness of this criterion on biological data. Our findings indicate that MP is very promising when its application is extended to the domain of phylogenetic network reconstruction and HGT detection. In all cases we investigated, the MP criterion detected the correct number of HGT events required to map the evolutionary history of a gene data set onto the species phylogeny. Furthermore, our results indicate that the criterion is robust with respect to both incomplete taxon sampling and the use of different site substitution matrices. Finally, our results show that the MP criterion is very promising in detecting HGT in chimeric genes, whose evolutionary histories are a mix of vertical and horizontal evolution. Besides the performance analysis of MP, our findings offer new insights into the evolution of 4 biological data sets and new possible explanations of HGT scenarios in their evolutionary history.

  11. Patterns and effects of GC3 heterogeneity and parsimony informative sites on the phylogenetic tree of genes.

    Science.gov (United States)

    Ma, Shuai; Wu, Qi; Hu, Yibo; Wei, Fuwen

    2018-05-20

    The explosive growth in genomic data has provided novel insights into the conflicting signals hidden in phylogenetic trees. Although some studies have explored the effects of the GC content and parsimony informative sites (PIS) on the phylogenetic tree, the effect of the heterogeneity of the GC content at the first/second/third codon position on parsimony informative sites (GC1/2/3-PIS) among different species and the effect of PIS on phylogenetic tree construction remain largely unexplored. Here, we used two different mammal genomic datasets to explore the patterns of GC1/2/3-PIS heterogeneity and the effect of PIS on the phylogenetic tree of genes: (i) all GC1/2/3-PIS have obvious heterogeneity between different mammals, and the levels of heterogeneity are GC3-PIS > GC2-PIS > GC1-PIS; (ii) the number of PIS is positively correlated with the metrics of "good" gene tree topologies, and excluding the third codon position (C3) decreases the quality of gene trees by removing too many PIS. These results provide novel insights into the heterogeneity pattern of GC1/2/3-PIS in mammals and the relationship between GC3/PIS and gene trees. Additionally, it is necessary to carefully consider whether to exclude C3 to improve the quality of gene trees, especially in the super-tree method. Copyright © 2018 Elsevier B.V. All rights reserved.
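    Both quantities can be computed directly from an in-frame alignment. Here is a small sketch with toy data, using the standard definitions of GC3 and parsimony-informative sites (not the authors' pipeline).

        from collections import Counter

        def gc3(codon_seqs):
            """Fraction of G/C at third codon positions, pooled over sequences."""
            third = [s[i] for s in codon_seqs for i in range(2, len(s), 3)]
            return sum(b in "GCgc" for b in third) / len(third)

        def parsimony_informative_sites(alignment):
            """Indices of columns with at least two states, each in at least two sequences."""
            informative = []
            for j in range(len(alignment[0])):
                counts = Counter(seq[j] for seq in alignment)
                if sum(c >= 2 for c in counts.values()) >= 2:
                    informative.append(j)
            return informative

        aln = ["ATGGCC", "ATGGCG", "ATAGCC", "ATAGCG"]     # toy in-frame alignment
        print(gc3(aln), parsimony_informative_sites(aln))  # 0.75 [2, 5]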

  12. Reconstruction of ancestral RNA sequences under multiple structural constraints.

    Science.gov (United States)

    Tremblay-Savard, Olivier; Reinharz, Vladimir; Waldispühl, Jérôme

    2016-11-11

    Secondary structures form the scaffold of multiple sequence alignment of non-coding RNA (ncRNA) families. An accurate reconstruction of ancestral ncRNAs must use this structural signal. However, the inference of ancestors of a single ncRNA family with a single consensus structure may bias the results towards sequences with high affinity to this structure, which are far from the true ancestors. In this paper, we introduce achARNement, a maximum parsimony approach that, given two alignments of homologous ncRNA families with consensus secondary structures and a phylogenetic tree, simultaneously calculates ancestral RNA sequences for these two families. We test our methodology on simulated data sets, and show that achARNement not only outperforms classical maximum parsimony approaches in terms of accuracy, but also reduces by several orders of magnitude the number of candidate sequences. To conclude this study, we apply our algorithms to the Glm clan and the FinP-traJ clan from the Rfam database. Our results show that our methods reconstruct small sets of high-quality candidate ancestors with better agreement to the two target structures than with classical approaches. Our program is freely available at: http://csb.cs.mcgill.ca/acharnement .
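    For contrast, the classical sequence-only baseline that such structure-aware methods improve upon is small parsimony via the Fitch algorithm; a minimal single-site sketch on a hand-built tree:

        def fitch(tree, states):
            """Fitch small parsimony for one site on a rooted binary tree.
            tree: nested 2-tuples with leaf names at the tips; states: leaf -> state."""
            def post(node):
                if isinstance(node, str):
                    return {states[node]}, 0
                (sl, cl), (sr, cr) = post(node[0]), post(node[1])
                inter = sl & sr
                return (inter, cl + cr) if inter else (sl | sr, cl + cr + 1)
            return post(tree)[1]   # minimum number of state changes

        tree = (("A", "B"), ("C", "D"))
        print(fitch(tree, {"A": "G", "B": "G", "C": "A", "D": "G"}))   # 1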

  13. More quality measures versus measuring what matters: a call for balance and parsimony.

    Science.gov (United States)

    Meyer, Gregg S; Nelson, Eugene C; Pryor, David B; James, Brent; Swensen, Stephen J; Kaplan, Gary S; Weissberg, Jed I; Bisognano, Maureen; Yates, Gary R; Hunt, Gordon C

    2012-11-01

    External groups requiring measures now include public and private payers, regulators, accreditors and others that certify performance levels for consumers, patients and payers. Although benefits have accrued from the growth in quality measurement, the recent explosion in the number of measures threatens to shift resources from improving quality to covering a plethora of quality-performance metrics that may have a limited impact on the things that patients and payers want and need (ie, better outcomes, better care, and lower per capita costs). Here we propose a policy that quality measurement should be: balanced to meet the need of end users to judge quality and cost performance and the need of providers to continuously improve the quality, outcomes and costs of their services; and parsimonious to measure quality, outcomes and costs with appropriate metrics that are selected based on end-user needs.

  14. Parsimonious data: How a single Facebook like predicts voting behavior in multiparty systems.

    Directory of Open Access Journals (Sweden)

    Jakob Bæk Kristensen

    Full Text Available This study shows how liking politicians' public Facebook posts can be used as an accurate measure for predicting present-day voter intention in a multiparty system. We highlight that a few but selective digital traces produce prediction accuracies that are on par with or even greater than most current approaches based upon bigger and broader datasets. Combining the online and offline, we connect a subsample of surveyed respondents to their public Facebook activity and apply machine learning classifiers to explore the link between their political liking behaviour and actual voting intention. Through this work, we show that even a single selective Facebook like can reveal as much about political voter intention as hundreds of heterogeneous likes. Further, by including the entire political like history of the respondents, our model reaches prediction accuracies above previous multiparty studies (60-70%). The main contribution of this paper is to show how public like-activity on Facebook allows political profiling of individual users in a multiparty system with accuracies above previous studies. Besides increased accuracies, the paper shows how such parsimonious measures allow us to generalize our findings to the entire population of a country and even across national borders, to other political multiparty systems. The approach in this study relies on data that are publicly available, and the simple setup we propose can, with some limitations, be generalized to millions of users in other multiparty systems.
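    A bare-bones version of this kind of classifier - multinomial logistic regression on binary like indicators - with entirely synthetic data standing in for the survey-linked Facebook traces:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        # rows = respondents, columns = 0/1 indicators of having liked a politician's posts
        X = rng.integers(0, 2, size=(500, 40))
        # synthetic "party choice" driven by the first five like indicators plus noise
        y = (X[:, :5] @ np.arange(5) + rng.integers(0, 2, 500)) % 5
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        print(clf.score(X, y))   # in-sample accuracy; a real study would report held-out accuracy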

  15. A Biologically Inspired Computational Model of Basal Ganglia in Action Selection.

    Science.gov (United States)

    Baston, Chiara; Ursino, Mauro

    2015-01-01

    The basal ganglia (BG) are a subcortical structure implicated in action selection. The aim of this work is to present a new cognitive neuroscience model of the BG, which aspires to represent a parsimonious balance between simplicity and completeness. The model includes the 3 main pathways operating in the BG circuitry, that is, the direct (Go), indirect (NoGo), and hyperdirect pathways. The main original aspects, compared with previous models, are the use of a two-term Hebb rule to train synapses in the striatum, based exclusively on neuronal activity changes caused by dopamine peaks or dips, and the role of the cholinergic interneurons (affected by dopamine themselves) during learning. Some examples are displayed, concerning a few paradigmatic cases: action selection in basal conditions, action selection in the presence of a strong conflict (where the role of the hyperdirect pathway emerges), synapse changes induced by phasic dopamine, and learning new actions based on a previous history of rewards and punishments. Finally, some simulations show the model working in conditions of altered dopamine levels, to illustrate pathological cases (dopamine depletion in parkinsonian subjects or dopamine hypermedication). Due to its parsimonious approach, the model may represent a straightforward tool to analyze BG functionality in behavioral experiments.
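    The flavor of a dopamine-gated Hebb rule can be sketched in a few lines. This is an assumed functional form with invented constants, not the published model equations: weight change is proportional to pre/post coactivity, signed by whether phasic dopamine sits above or below its tonic baseline.

        import numpy as np

        def hebb_update(w, pre, post, da, lr=0.05, baseline=1.0):
            # weight change proportional to pre/post coactivity, signed by the
            # deviation of phasic dopamine from its tonic baseline
            return w + lr * (da - baseline) * np.outer(post, pre)

        w = np.zeros((2, 3))
        pre, post = np.array([1.0, 0.0, 1.0]), np.array([1.0, 0.0])
        w = hebb_update(w, pre, post, da=1.8)   # reward: dopamine peak strengthens
        w = hebb_update(w, pre, post, da=0.2)   # punishment: dopamine dip weakens
        print(w)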

  16. A Biologically Inspired Computational Model of Basal Ganglia in Action Selection

    Directory of Open Access Journals (Sweden)

    Chiara Baston

    2015-01-01

    Full Text Available The basal ganglia (BG) are a subcortical structure implicated in action selection. The aim of this work is to present a new cognitive neuroscience model of the BG, which aspires to represent a parsimonious balance between simplicity and completeness. The model includes the 3 main pathways operating in the BG circuitry, that is, the direct (Go), indirect (NoGo), and hyperdirect pathways. The main original aspects, compared with previous models, are the use of a two-term Hebb rule to train synapses in the striatum, based exclusively on neuronal activity changes caused by dopamine peaks or dips, and the role of the cholinergic interneurons (affected by dopamine themselves) during learning. Some examples are displayed, concerning a few paradigmatic cases: action selection in basal conditions, action selection in the presence of a strong conflict (where the role of the hyperdirect pathway emerges), synapse changes induced by phasic dopamine, and learning new actions based on a previous history of rewards and punishments. Finally, some simulations show the model working in conditions of altered dopamine levels, to illustrate pathological cases (dopamine depletion in parkinsonian subjects or dopamine hypermedication). Due to its parsimonious approach, the model may represent a straightforward tool to analyze BG functionality in behavioral experiments.

  17. On the treatment of airline travelers in mathematical models.

    Directory of Open Access Journals (Sweden)

    Michael A Johansson

    Full Text Available The global spread of infectious diseases is facilitated by the ability of infected humans to travel thousands of miles in short time spans, rapidly transporting pathogens to distant locations. Mathematical models of the actual and potential spread of specific pathogens can assist public health planning in the case of such an event. Models should generally be parsimonious, but must consider all potentially important components of the system to the greatest extent possible. We demonstrate and discuss important assumptions relative to the parameterization and structural treatment of airline travel in mathematical models. Among other findings, we show that the most common structural treatment of travelers leads to underestimation of the speed of spread and that connecting travel is critical to a realistic spread pattern. Models involving travelers can be improved significantly by relatively simple structural changes but also may require further attention to details of parameterization.
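    To make the structural point concrete, here is a deliberately simple two-city SEIR sketch in which airline travel enters as a symmetric mixing term; all rates are invented, and the paper's richer traveler treatments (e.g., connecting itineraries) are not represented.

        import numpy as np

        def two_city_seir(days=200, dt=0.1, travel=2e-4):
            beta, sigma, gamma = 0.4, 1 / 4, 1 / 6      # invented transmission parameters
            N = np.array([8e6, 5e6])                    # two city populations
            S, E = N - np.array([10.0, 0.0]), np.array([10.0, 0.0])
            I, R = np.zeros(2), np.zeros(2)
            for _ in range(int(days / dt)):
                new_inf = beta * S * I / N
                S, E, I, R = (S - dt * new_inf,
                              E + dt * (new_inf - sigma * E),
                              I + dt * (sigma * E - gamma * I),
                              R + dt * gamma * I)
                for X in (S, E, I, R):                  # symmetric exchange of travelers
                    X += dt * travel * (X[::-1] - X)
            return I

        print(two_city_seir())   # infection reaches city 1 despite seeding only city 0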

  18. Reconstruction of ancestral RNA sequences under multiple structural constraints

    Directory of Open Access Journals (Sweden)

    Olivier Tremblay-Savard

    2016-11-01

    Full Text Available Abstract Background Secondary structures form the scaffold of multiple sequence alignment of non-coding RNA (ncRNA) families. An accurate reconstruction of ancestral ncRNAs must use this structural signal. However, the inference of ancestors of a single ncRNA family with a single consensus structure may bias the results towards sequences with high affinity to this structure, which are far from the true ancestors. Methods In this paper, we introduce achARNement, a maximum parsimony approach that, given two alignments of homologous ncRNA families with consensus secondary structures and a phylogenetic tree, simultaneously calculates ancestral RNA sequences for these two families. Results We test our methodology on simulated data sets, and show that achARNement not only outperforms classical maximum parsimony approaches in terms of accuracy, but also reduces by several orders of magnitude the number of candidate sequences. To conclude this study, we apply our algorithms to the Glm clan and the FinP-traJ clan from the Rfam database. Conclusions Our results show that our methods reconstruct small sets of high-quality candidate ancestors with better agreement to the two target structures than with classical approaches. Our program is freely available at: http://csb.cs.mcgill.ca/acharnement .

  19. Reconstruction of ancestral RNA sequences under multiple structural constraints

    OpenAIRE

    Tremblay-Savard, Olivier; Reinharz, Vladimir; Waldispühl, Jérôme

    2016-01-01

    Background Secondary structures form the scaffold of multiple sequence alignment of non-coding RNA (ncRNA) families. An accurate reconstruction of ancestral ncRNAs must use this structural signal. However, the inference of ancestors of a single ncRNA family with a single consensus structure may bias the results towards sequences with high affinity to this structure, which are far from the true ancestors. Methods In this paper, we introduce achARNement, a maximum parsimony approach that, given...

  20. Modeling age-specific mortality for countries with generalized HIV epidemics.

    Directory of Open Access Journals (Sweden)

    David J Sharrow

    Full Text Available In a given population the age pattern of mortality is an important determinant of the total number of deaths, the age structure, and, through effects on age structure, the number of births and thereby growth. Good mortality models exist for most populations except those experiencing generalized HIV epidemics and some developing country populations. The large number of deaths concentrated at very young and adult ages in HIV-affected populations produces a unique 'humped' age pattern of mortality that is not reproduced by any existing mortality models. Both burden of disease reporting and population projection methods require age-specific mortality rates to estimate numbers of deaths and produce plausible age structures. For countries with generalized HIV epidemics these estimates should take into account the future trajectory of HIV prevalence and its effects on age-specific mortality. In this paper we present a parsimonious model of age-specific mortality for countries with generalized HIV/AIDS epidemics.The model represents a vector of age-specific mortality rates as the weighted sum of three independent age-varying components. We derive the age-varying components from a Singular Value Decomposition of the matrix of age-specific mortality rate schedules. The weights are modeled as a function of HIV prevalence and one of three possible sets of inputs: life expectancy at birth, a measure of child mortality, or child mortality with a measure of adult mortality. We calibrate the model with 320 five-year life tables for each sex from the World Population Prospects 2010 revision that come from the 40 countries of the world that have and are experiencing a generalized HIV epidemic. Cross validation shows that the model is able to outperform several existing model life table systems.We present a flexible, parsimonious model of age-specific mortality for countries with generalized HIV epidemics. Combined with the outputs of existing epidemiological and
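    The core construction - each log-mortality schedule as a weighted sum of a few age-varying components from a singular value decomposition - can be sketched as follows, with a synthetic stand-in for the 320 life tables:

        import numpy as np

        rng = np.random.default_rng(3)
        ages = np.arange(0, 100, 5)
        # synthetic stand-in for 320 log age-specific mortality schedules
        base = -6 + 0.07 * ages + 2.5 * np.exp(-((ages - 1) ** 2) / 8)
        M = base + rng.normal(0, 0.15, size=(320, ages.size))
        U, s, Vt = np.linalg.svd(M - M.mean(0), full_matrices=False)
        components = Vt[:3]                        # three age-varying components
        weights = (M - M.mean(0)) @ components.T   # one weight triple per schedule
        approx = M.mean(0) + weights @ components  # rank-3 reconstruction
        print(np.abs(approx - M).max())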

  1. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

  2. Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion

    KAUST Repository

    Komatitsch, Dimitri; Xie, Zhinan; Bozdağ, Ebru; de Andrade, Elliott Sales; Peter, Daniel; Liu, Qinya; Tromp, Jeroen

    2016-01-01

    We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.

  3. Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion

    KAUST Repository

    Komatitsch, Dimitri

    2016-06-13

    We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.

  4. Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion

    Science.gov (United States)

    Komatitsch, Dimitri; Xie, Zhinan; Bozdaǧ, Ebru; Sales de Andrade, Elliott; Peter, Daniel; Liu, Qinya; Tromp, Jeroen

    2016-09-01

    We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.

  5. The Feeding Practices and Structure Questionnaire (FPSQ-28): A parsimonious version validated for longitudinal use from 2 to 5 years.

    Science.gov (United States)

    Jansen, Elena; Williams, Kate E; Mallan, Kimberley M; Nicholson, Jan M; Daniels, Lynne A

    2016-05-01

    Prospective studies and intervention evaluations that examine change over time assume that measurement tools measure the same construct at each occasion. In the area of parent-child feeding practices, longitudinal measurement properties of the questionnaires used are rarely verified. To ascertain that measured change in feeding practices reflects true change rather than change in the assessment, structure, or conceptualisation of the constructs over time, this study examined longitudinal measurement invariance of the Feeding Practices and Structure Questionnaire (FPSQ) subscales (9 constructs; 40 items) across 3 time points. Mothers participating in the NOURISH trial reported their feeding practices when children were aged 2, 3.7, and 5 years (N = 404). Confirmatory Factor Analysis (CFA) within a structural equation modelling framework was used. Comparisons of initial cross-sectional models followed by longitudinal modelling of subscales, resulted in the removal of 12 items, including two redundant or poorly performing subscales. The resulting 28-item FPSQ-28 comprised 7 multi-item subscales: Reward for Behaviour, Reward for Eating, Persuasive Feeding, Overt Restriction, Covert Restriction, Structured Meal Setting and Structured Meal Timing. All subscales showed good fit over 3 time points and each displayed at least partial scalar (thresholds equal) longitudinal measurement invariance. We recommend the use of a separate single item indicator to assess the family meal setting. This is the first study to examine longitudinal measurement invariance in a feeding practices questionnaire. Invariance was established, indicating that the subscales of the shortened FPSQ-28 can be used with mothers to validly assess change in 7 feeding constructs in samples of children aged 2-5 years of age. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. The efficiency of different search strategies in estimating parsimony jackknife, bootstrap, and Bremer support

    Directory of Open Access Journals (Sweden)

    Müller Kai F

    2005-10-01

    Full Text Available Abstract Background For parsimony analyses, the most common way to estimate confidence is by resampling plans (nonparametric bootstrap, jackknife) and Bremer support (decay indices). The recent literature reveals that parameter settings that are quite commonly employed are not those that are recommended by theoretical considerations and by previous empirical studies. The optimal search strategy to be applied during resampling was previously addressed solely via standard search strategies available in PAUP*. The question of a compromise between search extensiveness and improved support accuracy for Bremer support received even less attention. A set of experiments was conducted on different datasets to find an empirical cut-off point at which increased search extensiveness does not significantly change Bremer support and jackknife or bootstrap proportions any more. Results For the number of replicates needed for accurate estimates of support in resampling plans, a diagram is provided that helps to address the question whether apparently different support values really differ significantly. It is shown that the use of random addition cycles and parsimony ratchet iterations during bootstrapping does not translate into higher support, nor does any extension of the search extensiveness beyond the rather moderate effort of TBR (tree bisection and reconnection) branch swapping plus saving one tree per replicate. Instead, in case of very large matrices, saving more than one shortest tree per iteration and using a strict consensus tree of these yields decreased support compared to saving only one tree. This can be interpreted as a small risk of overestimating support but should be more than compensated by other factors that counteract an enhanced type I error. With regard to Bremer support, a rule of thumb can be derived stating that not much is gained relative to the surplus computational effort when searches are extended beyond 20 ratchet iterations per
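    One part of the replicate-number question is purely binomial: the standard error of a bootstrap or jackknife support proportion shrinks with the square root of the number of replicates, so whether two reported support values "really differ" depends directly on the replicate count. A small worked example:

        import math

        def bootstrap_se(p, replicates):
            """Binomial standard error of a bootstrap support proportion."""
            return math.sqrt(p * (1 - p) / replicates)

        for B in (100, 1000, 10000):
            half_width = 1.96 * bootstrap_se(0.85, B)   # 95% half-width for 85% support
            print(B, round(100 * half_width, 1))        # ~7.0, 2.2, 0.7 percentage points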

  7. Advanced language modeling approaches, case study: Expert search

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2008-01-01

    This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the

  8. Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis

    Science.gov (United States)

    Bradley, James R.

    2012-01-01

    This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required for development of that model is then determined; we conclude that six hours of training are required to teach the skills to MBA students. The simulation presented here contains all fundamental functionality of a simulation model, and so our result holds for any discrete-event simulation model. We argue therefore that industry workers with the same technical skill set as students having completed one year in an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.
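    The "fundamental functionality" in question is a future-event list processed in time order. For readers who prefer code to spreadsheets, here is a minimal discrete-event loop for an M/M/1 queue (Python rather than Excel, illustrative only):

        import heapq
        import random

        def mm1_mean_wait(lam=0.8, mu=1.0, horizon=10000):
            random.seed(0)
            t, queue, busy, waits = 0.0, [], False, []
            events = [(random.expovariate(lam), "arrive")]
            while events:
                t, kind = heapq.heappop(events)          # next event in time order
                if t > horizon:
                    break
                if kind == "arrive":
                    heapq.heappush(events, (t + random.expovariate(lam), "arrive"))
                    if busy:
                        queue.append(t)                  # wait for the single server
                    else:
                        busy = True
                        waits.append(0.0)
                        heapq.heappush(events, (t + random.expovariate(mu), "depart"))
                else:                                    # departure frees the server
                    if queue:
                        waits.append(t - queue.pop(0))
                        heapq.heappush(events, (t + random.expovariate(mu), "depart"))
                    else:
                        busy = False
            return sum(waits) / len(waits)

        print(mm1_mean_wait())   # theory: lam / (mu * (mu - lam)) = 4.0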

  9. Modeling Unobserved Consideration Sets for Household Panel Data

    NARCIS (Netherlands)

    J.E.M. van Nierop; R. Paap (Richard); B. Bronnenberg; Ph.H.B.F. Franses (Philip Hans)

    2000-01-01

    textabstractWe propose a new method to model consumers' consideration and choice processes. We develop a parsimonious probit type model for consideration and a multinomial probit model for choice, given consideration. Unlike earlier models of consideration ours is not prone to the curse of

  10. An integer programming formulation of the parsimonious loss of heterozygosity problem.

    Science.gov (United States)

    Catanzaro, Daniele; Labbé, Martine; Halldórsson, Bjarni V

    2013-01-01

    A loss of heterozygosity (LOH) event occurs when, by the laws of Mendelian inheritance, an individual should be heterozygote at a given site but, due to a deletion polymorphism, is not. Deletions play an important role in human disease and their detection could provide fundamental insights for the development of new diagnostics and treatments. In this paper, we investigate the parsimonious loss of heterozygosity problem (PLOHP), i.e., the problem of partitioning suspected polymorphisms from a set of individuals into a minimum number of deletion areas. Specifically, we generalize Halldórsson et al.'s work by providing a more general formulation of the PLOHP and by showing how one can incorporate different recombination rates and prior knowledge about the locations of deletions. Moreover, we show that the PLOHP can be formulated as a specific version of the clique partition problem in a particular class of graphs called undirected catch-point interval graphs and we prove its general NP-hardness. Finally, we provide a state-of-the-art integer programming (IP) formulation and strengthening valid inequalities to exactly solve real instances of the PLOHP containing up to 9,000 individuals and 3,000 SNPs. Our results give perspectives on the mathematics of the PLOHP and suggest new directions on the development of future efficient exact solution approaches.

  11. Can the dynamics of the term structure of petroleum futures be forecasted? Evidence from major markets

    International Nuclear Information System (INIS)

    Skiadopoulos, George; Chantziara, Thalia

    2008-01-01

    We investigate whether the daily evolution of the term structure of petroleum futures can be forecasted. To this end, principal components analysis is employed. The retained principal components describe the dynamics of the term structure of futures prices parsimoniously and are used to forecast the subsequent daily changes of futures prices. Data on the New York Mercantile Exchange (NYMEX) crude oil, heating oil, gasoline, and the International Petroleum Exchange (IPE) crude oil futures are used. We find that the retained principal components have small forecasting power both in-sample and out-of-sample. Similar results are obtained from standard univariate and vector autoregression models. Spillover effects between the four petroleum futures markets are also detected. (author)
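    The mechanics of the approach - extract principal components from daily changes of the futures curve and keep the leading few as parsimonious descriptors - look roughly like this sketch on synthetic curves:

        import numpy as np

        rng = np.random.default_rng(4)
        days, maturities = 1000, 12
        grid = np.linspace(0.0, 1.0, maturities)
        level = rng.normal(0, 1.0, days).cumsum()      # random-walk level factor
        slope = rng.normal(0, 0.3, days).cumsum()      # random-walk slope factor
        F = 60 + level[:, None] + slope[:, None] * grid + rng.normal(0, 0.2, (days, maturities))
        dF = np.diff(F, axis=0)                        # daily changes of the term structure
        dFc = dF - dF.mean(0)
        U, s, Vt = np.linalg.svd(dFc, full_matrices=False)
        explained = s ** 2 / (s ** 2).sum()
        print(explained[:3].round(3))                  # a few PCs dominate the dynamics
        scores = dFc @ Vt[:3].T                        # retained PCs, candidate forecasters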

  12. A critical review of integrated urban water modelling – Urban drainage and beyond

    DEFF Research Database (Denmark)

    Bach, Peter M.; Rauch, Wolfgang; Mikkelsen, Peter Steen

    2014-01-01

    Modelling interactions in urban drainage, water supply and broader integrated urban water systems has been conceptually and logistically challenging, as evidenced in a diverse body of literature found to be confusing and intimidating to new researchers. This review consolidates thirty years of research (initially driven by interest in urban drainage modelling) and critically reflects upon integrated modelling in the scope of urban water systems. We propose a typology to classify integrated urban water system models at one of four 'degrees of integration' (followed by its exemplification). Key considerations (e.g. data issues, model structure, computational and integration-related aspects), common methodology for model development (through a systems approach), calibration/optimisation and uncertainty are discussed, placing importance on pragmatism and parsimony. Integrated urban water models should...

  13. Disorders without borders: current and future directions in the meta-structure of mental disorders.

    Science.gov (United States)

    Carragher, Natacha; Krueger, Robert F; Eaton, Nicholas R; Slade, Tim

    2015-03-01

    Classification is the cornerstone of clinical diagnostic practice and research. However, the extant psychiatric classification systems are not well supported by research evidence. In particular, extensive comorbidity among putatively distinct disorders flags an urgent need for fundamental changes in how we conceptualize psychopathology. Over the past decade, research has coalesced on an empirically based model that suggests many common mental disorders are structured according to two correlated latent dimensions: internalizing and externalizing. We review and discuss the development of a dimensional-spectrum model which organizes mental disorders in an empirically based manner. We also touch upon changes in the DSM-5 and put forward recommendations for future research endeavors. Our review highlights substantial empirical support for the empirically based internalizing-externalizing model of psychopathology, which provides a parsimonious means of addressing comorbidity. As future research goals, we suggest that the field would benefit from: expanding the meta-structure of psychopathology to include additional disorders, development of empirically based thresholds, inclusion of a developmental perspective, and intertwining genomic and neuroscience dimensions with the empirical structure of psychopathology.

  14. Modelling asymmetric persistence over the business cycle

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1998-01-01

    textabstractWe address the issue of time varying persistence of shocks to macroeconomic time series variables by proposing a new and parsimonious time series model. Our model assumes that this time varying persistence depends on a linear combination of lagged explanatory variables, where this

  15. Between Complexity and Parsimony: Can Agent-Based Modelling Resolve the Trade-off

    DEFF Research Database (Denmark)

    Nielsen, Helle Ørsted; Malawska, Anna Katarzyna

    2013-01-01

    While Herbert Simon espoused development of general models of behavior, he also strongly advocated that these models be based on realistic assumptions about humans and therefore reflect the complexity of human cognition and social systems (Simon 1997). Hence, the model of bounded rationality... to BR-based policy studies would be to couple research on bounded rationality with agent-based modeling. Agent-based models (ABMs) are computational models for simulating the behavior and interactions of any number of decision makers in a dynamic system. Agent-based models are better suited than are general equilibrium models for capturing behavior patterns of complex systems. ABMs may have the potential to represent complex systems without oversimplifying them. At the same time, research in bounded rationality and behavioral economics has already yielded many insights that could inform the modeling...

  16. Structural system identification: Structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  17. Activity systems modeling as a theoretical lens for social exchange studies

    Directory of Open Access Journals (Sweden)

    Ernest Jones

    2016-01-01

    Full Text Available The social exchange perspective seeks to acknowledge, understand and predict the dynamics of social interactions. Empirical research involving social exchange constructs has grown to be highly technical, including confirmatory factor analysis to assess construct distinctiveness and structural equation modeling to assess construct causality. Each study seemingly strives to assess how underlying social exchange theoretic constructs interrelate. Yet despite this methodological depth and resultant explanatory and predictive power, a significant number of studies report findings that, once synthesized, suggest an underlying persistent threat to conceptual or construct validity, brought about by a search for epistemological parsimony. Further, it is argued that a methodological approach that embraces inherent complexity, such as activity systems modeling, facilitates the search for simplified models while not ignoring contextual factors.

  18. Exploring the Factor Structure of the Job Demands-Resources Measure With Patient Violence on Direct Care Workers in the Home Setting.

    Science.gov (United States)

    Byon, Ha Do; Harrington, Donna; Storr, Carla L; Lipscomb, Jane

    2017-08-01

    Workplace violence research in health care settings using the Job Demands-Resources (JD-R) framework is hindered by the lack of comprehensive examination of the factor structure of the JD-R measure when it includes patient violence. Is patient violence a component of job demands or its own factor as an occupational outcome? Exploratory factor analysis and confirmatory factor analysis were conducted using a sample of direct care workers in the home setting (n = 961). The overall 2-construct JD-R structure persisted. Patient violence was not identified as a separate factor from job demands; rather, two demand factors emerged: violence/emotional and workload/physical demands. Although the three-factor model fits the data, the two-factor model with patient violence being a component of job demands is a parsimonious and effective measurement framework.

  19. Measurement bias detection with Kronecker product restricted models for multivariate longitudinal data: an illustration with health-related quality of life data from thirteen measurement occasions.

    Science.gov (United States)

    Verdam, Mathilde G E; Oort, Frans J

    2014-01-01

    Highlights: Application of Kronecker product to construct parsimonious structural equation models for multivariate longitudinal data. A method for the investigation of measurement bias with Kronecker product restricted models. Application of these methods to health-related quality of life data from bone metastasis patients, collected at 13 consecutive measurement occasions. The use of curves to facilitate substantive interpretation of apparent measurement bias. Assessment of change in common factor means, after accounting for apparent measurement bias. Longitudinal measurement invariance is usually investigated with a longitudinal factor model (LFM). However, with multiple measurement occasions, the number of parameters to be estimated increases with a multiple of the number of measurement occasions. To guard against too low ratios of numbers of subjects and numbers of parameters, we can use Kronecker product restrictions to model the multivariate longitudinal structure of the data. These restrictions can be imposed on all parameter matrices, including measurement invariance restrictions on factor loadings and intercepts. The resulting models are parsimonious and have attractive interpretation, but require different methods for the investigation of measurement bias. Specifically, additional parameter matrices are introduced to accommodate possible violations of measurement invariance. These additional matrices consist of measurement bias parameters that are either fixed at zero or free to be estimated. In cases of measurement bias, it is also possible to model the bias over time, e.g., with linear or non-linear curves. Measurement bias detection with Kronecker product restricted models will be illustrated with multivariate longitudinal data from 682 bone metastasis patients whose health-related quality of life (HRQL) was measured at 13 consecutive weeks.
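    The parameter savings from the Kronecker product restriction are easy to see in code. With 13 occasions and, say, 5 indicators (the indicator count here is an assumption), the structured covariance is built from two small matrices instead of one 65 x 65 matrix:

        import numpy as np

        occasions, indicators = 13, 5
        rng = np.random.default_rng(5)
        A = rng.normal(size=(occasions, occasions)); T = A @ A.T    # temporal covariance
        B = rng.normal(size=(indicators, indicators)); F = B @ B.T  # measurement covariance
        Sigma = np.kron(T, F)                                       # 65 x 65 structured covariance
        # minus 1: kron(c*T, F/c) yields the same Sigma, so one scale is not identified
        kron_params = occasions * (occasions + 1) // 2 + indicators * (indicators + 1) // 2 - 1
        full_params = 65 * 66 // 2
        print(Sigma.shape, kron_params, full_params)                # (65, 65) 105 2145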

  20. A Range-Based Multivariate Model for Exchange Rate Volatility

    NARCIS (Netherlands)

    B. Tims (Ben); R.J. Mahieu (Ronald)

    2003-01-01

    textabstractIn this paper we present a parsimonious multivariate model for exchange rate volatilities based on logarithmic high-low ranges of daily exchange rates. The multivariate stochastic volatility model divides the log range of each exchange rate into two independent latent factors, which are

  1. Assessing uncertainties in global cropland futures using a conditional probabilistic modelling framework

    NARCIS (Netherlands)

    Engström, Kerstin; Olin, Stefan; Rounsevell, Mark D A; Brogaard, Sara; Van Vuuren, Detlef P.; Alexander, Peter; Murray-Rust, Dave; Arneth, Almut

    2016-01-01

    We present a modelling framework to simulate probabilistic futures of global cropland areas that are conditional on the SSP (shared socio-economic pathway) scenarios. Simulations are based on the Parsimonious Land Use Model (PLUM) linked with the global dynamic vegetation model LPJ-GUESS

  2. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using one AI model.
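    A compact sketch of the averaging step (made-up predictions and BIC values): models are weighted by exp(-ΔBIC/2), and the total variance adds a within-model and a between-model term.

        import numpy as np

        preds = np.array([[2.1, 1.9, 2.4],     # model 1 estimates at three sites
                          [2.6, 2.2, 2.8],     # model 2
                          [1.8, 1.7, 2.0]])    # model 3
        var_within = np.array([[.04, .05, .06], [.09, .08, .10], [.05, .04, .05]])
        bic = np.array([112.3, 118.9, 113.1])          # assumed BIC per trained model
        w = np.exp(-0.5 * (bic - bic.min()))
        w /= w.sum()                                   # BIC-based model weights
        mean = w @ preds                               # averaged estimate
        between = w @ (preds - mean) ** 2              # uncertainty from model non-uniqueness
        within = w @ var_within                        # propagated input/parameter uncertainty
        print(mean.round(3), (within + between).round(3))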

  3. Identifying the important factors in simulation models with many factors

    NARCIS (Netherlands)

    Bettonvil, B.; Kleijnen, J.P.C.

    1994-01-01

    Simulation models may have many parameters and input variables (together called factors), while only a few factors are really important (parsimony principle). For such models this paper presents an effective and efficient screening technique to identify and estimate those important factors. The

  4. Comparison among cognitive diagnostic models for the TIMSS 2007 fourth grade mathematics assessment.

    Science.gov (United States)

    Yamaguchi, Kazuhiro; Okada, Kensuke

    2018-01-01

    A variety of cognitive diagnostic models (CDMs) have been developed in recent years to help with the diagnostic assessment and evaluation of students. Each model makes different assumptions about the relationship between students' achievement and skills, which makes it important to empirically investigate which CDMs better fit the actual data. In this study, we examined this question by comparatively fitting representative CDMs to the Trends in International Mathematics and Science Study (TIMSS) 2007 assessment data across seven countries. The following two major findings emerged. First, in accordance with former studies, CDMs had a better fit than did the item response theory models. Second, main effects models generally had a better fit than other parsimonious or the saturated models. Related to the second finding, the fit of the traditional parsimonious models such as the DINA and DINO models were not optimal. The empirical educational implications of these findings are discussed.
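    As a reference point for what these models assume, the DINA item response function can be written in a few lines (standard textbook form; the slip and guess values are illustrative):

        import numpy as np

        def dina_prob(alpha, q, slip, guess):
            """P(correct) = (1 - slip)^eta * guess^(1 - eta), where eta = 1 iff the
            student masters every skill the Q-matrix row requires."""
            eta = np.all(alpha >= q, axis=-1).astype(float)
            return (1 - slip) ** eta * guess ** (1 - eta)

        q = np.array([1, 1, 0])                       # item requires skills 1 and 2
        students = np.array([[1, 1, 0], [1, 0, 1]])   # two mastery profiles
        print(dina_prob(students, q, slip=0.1, guess=0.2))   # [0.9 0.2]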

  5. Modeling mental spatial reasoning about cardinal directions.

    Science.gov (United States)

    Schultheis, Holger; Bertel, Sven; Barkowsky, Thomas

    2014-01-01

    This article presents research into human mental spatial reasoning with orientation knowledge. In particular, we look at reasoning problems about cardinal directions that possess multiple valid solutions (i.e., are spatially underdetermined), at human preferences for some of these solutions, and at representational and procedural factors that lead to such preferences. The article presents, first, a discussion of existing, related conceptual and computational approaches; second, results of empirical research into the solution preferences that human reasoners actually have; and, third, a novel computational model that relies on a parsimonious and flexible spatio-analogical knowledge representation structure to robustly reproduce the behavior observed with human reasoners. Copyright © 2014 Cognitive Science Society, Inc.

  6. An objective and parsimonious approach for classifying natural flow regimes at a continental scale

    Science.gov (United States)

    Archfield, S. A.; Kennen, J.; Carlisle, D.; Wolock, D.

    2013-12-01

    Hydroecological stream classification--the process of grouping streams by similar hydrologic responses and, thereby, similar aquatic habitat--has been widely accepted and is often one of the first steps towards developing ecological flow targets. Despite its importance, the last national classification of streamgauges was completed about 20 years ago. A new classification of 1,534 streamgauges in the contiguous United States is presented using a novel and parsimonious approach to understand similarity in ecological streamflow response. This new classification approach uses seven fundamental daily streamflow statistics (FDSS) rather than winnowing down an uncorrelated subset from 200 or more ecologically relevant streamflow statistics (ERSS) commonly used in hydroecological classification studies. The results of this investigation demonstrate that the distributions of 33 tested ERSS are consistently different among the classes derived from the seven FDSS. It is further shown that classification based solely on the 33 ERSS generally does a poorer job in grouping similar streamgauges than the classification based on the seven FDSS. This new classification approach has the additional advantages of overcoming some of the subjectivity associated with the selection of the classification variables and provides a set of robust continental-scale classes of US streamgauges.
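    The workflow is straightforward to emulate: compute a handful of daily-flow statistics per gauge, standardize, and cluster. The seven statistics below are our own plausible stand-ins, not the paper's FDSS, and the flows are synthetic:

        import numpy as np
        from sklearn.cluster import KMeans

        def fdss(q):
            """Seven simple daily-flow statistics (illustrative stand-ins)."""
            lq = np.log(q + 0.01)
            return [lq.mean(), lq.std(),
                    ((lq - lq.mean()) ** 3).mean() / lq.std() ** 3,   # skewness
                    np.corrcoef(lq[:-1], lq[1:])[0, 1],               # lag-1 autocorrelation
                    np.percentile(q, 10), np.percentile(q, 90),
                    (np.diff(q) > 0).mean()]                          # fraction of rising days

        rng = np.random.default_rng(6)
        seasonal = np.sin(np.linspace(0, 2 * np.pi, 365))
        gauges = np.exp(rng.normal(2, 1, (50, 365)) + seasonal)       # synthetic daily flows
        X = np.array([fdss(g) for g in gauges])
        X = (X - X.mean(0)) / X.std(0)                                # standardize statistics
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
        print(np.bincount(labels))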

  7. Dynamics of pesticide uptake into plants: From system functioning to parsimonious modeling

    DEFF Research Database (Denmark)

    Fantke, Peter; Wieland, Peter; Wannaz, Cedric

    2013-01-01

    Dynamic plant uptake models are suitable for assessing environmental fate and behavior of toxic chemicals in food crops. However, existing tools mostly lack in-depth analysis of system dynamics. Furthermore, no existing model is available as parameterized version that is easily applicable for use...

  8. Using genes as characters and a parsimony analysis to explore the phylogenetic position of turtles.

    Directory of Open Access Journals (Sweden)

    Bin Lu

    Full Text Available The phylogenetic position of turtles within the vertebrate tree of life remains controversial. Conflicting conclusions from different studies are likely a consequence of systematic error in the tree construction process, rather than random error from small amounts of data. Using genomic data, we evaluate the phylogenetic position of turtles with both conventional concatenated data analysis and a "genes as characters" approach. Two datasets were constructed, one with seven species (human, opossum, zebra finch, chicken, green anole, Chinese pond turtle, and western clawed frog) and 4584 orthologous genes, and the second with four additional species (soft-shelled turtle, Nile crocodile, royal python, and tuatara) but only 1638 genes. Our concatenated data analysis strongly supported turtle as the sister-group to archosaurs (the archosaur hypothesis), similar to several recent genomic data based studies using similar methods. When using genes as characters and gene trees as character-state trees with equal weighting for each gene, however, our parsimony analysis suggested that turtles are possibly sister-group to diapsids, archosaurs, or lepidosaurs. None of these resolutions were strongly supported by bootstraps. Furthermore, our incongruence analysis clearly demonstrated that there is a large amount of inconsistency among genes and most of the conflict relates to the placement of turtles. We conclude that the uncertain placement of turtles is a reflection of the true state of nature. Concatenated data analysis of large and heterogeneous datasets likely suffers from systematic error and over-estimates of confidence as a consequence of a large number of characters. Using genes as characters offers an alternative for phylogenomic analysis. It has potential to reduce systematic error, such as data heterogeneity and long-branch attraction, and it can also avoid problems associated with computation time and model selection. Finally, treating genes as

  9. Using Genes as Characters and a Parsimony Analysis to Explore the Phylogenetic Position of Turtles

    Science.gov (United States)

    Lu, Bin; Yang, Weizhao; Dai, Qiang; Fu, Jinzhong

    2013-01-01

    The phylogenetic position of turtles within the vertebrate tree of life remains controversial. Conflicting conclusions from different studies are likely a consequence of systematic error in the tree construction process, rather than random error from small amounts of data. Using genomic data, we evaluate the phylogenetic position of turtles with both conventional concatenated data analysis and a “genes as characters” approach. Two datasets were constructed, one with seven species (human, opossum, zebra finch, chicken, green anole, Chinese pond turtle, and western clawed frog) and 4584 orthologous genes, and the second with four additional species (soft-shelled turtle, Nile crocodile, royal python, and tuatara) but only 1638 genes. Our concatenated data analysis strongly supported turtle as the sister-group to archosaurs (the archosaur hypothesis), similar to several recent genomic data based studies using similar methods. When using genes as characters and gene trees as character-state trees with equal weighting for each gene, however, our parsimony analysis suggested that turtles are possibly sister-group to diapsids, archosaurs, or lepidosaurs. None of these resolutions were strongly supported by bootstraps. Furthermore, our incongruence analysis clearly demonstrated that there is a large amount of inconsistency among genes and most of the conflict relates to the placement of turtles. We conclude that the uncertain placement of turtles is a reflection of the true state of nature. Concatenated data analysis of large and heterogeneous datasets likely suffers from systematic error and over-estimates of confidence as a consequence of a large number of characters. Using genes as characters offers an alternative for phylogenomic analysis. It has potential to reduce systematic error, such as data heterogeneity and long-branch attraction, and it can also avoid problems associated with computation time and model selection. Finally, treating genes as characters

  10. Comparison among cognitive diagnostic models for the TIMSS 2007 fourth grade mathematics assessment.

    Directory of Open Access Journals (Sweden)

    Kazuhiro Yamaguchi

    Full Text Available A variety of cognitive diagnostic models (CDMs) have been developed in recent years to help with the diagnostic assessment and evaluation of students. Each model makes different assumptions about the relationship between students' achievement and skills, which makes it important to empirically investigate which CDMs better fit the actual data. In this study, we examined this question by comparatively fitting representative CDMs to the Trends in International Mathematics and Science Study (TIMSS) 2007 assessment data across seven countries. The following two major findings emerged. First, in accordance with former studies, CDMs had a better fit than did the item response theory models. Second, main effects models generally had a better fit than other parsimonious or the saturated models. Related to the second finding, the fit of the traditional parsimonious models such as the DINA and DINO models were not optimal. The empirical educational implications of these findings are discussed.

  11. Integrative structure modeling with the Integrative Modeling Platform.

    Science.gov (United States)

    Webb, Benjamin; Viswanath, Shruthi; Bonomi, Massimiliano; Pellarin, Riccardo; Greenberg, Charles H; Saltzberg, Daniel; Sali, Andrej

    2018-01-01

    Building models of a biological system that are consistent with the myriad data available is one of the key challenges in biology. Modeling the structure and dynamics of macromolecular assemblies, for example, can give insights into how biological systems work, evolved, might be controlled, and even designed. Integrative structure modeling casts the building of structural models as a computational optimization problem, for which information about the assembly is encoded into a scoring function that evaluates candidate models. Here, we describe our open source software suite for integrative structure modeling, Integrative Modeling Platform (https://integrativemodeling.org), and demonstrate its use. © 2017 The Protein Society.

  12. Event-based model diagnosis of rainfall-runoff model structures

    International Nuclear Information System (INIS)

    Stanzel, P.

    2012-01-01

    The objective of this research is a comparative evaluation of different rainfall-runoff model structures. Comparative model diagnostics facilitate the assessment of strengths and weaknesses of each model. The application of multiple models allows an analysis of simulation uncertainties arising from the selection of model structure, as compared with effects of uncertain parameters and precipitation input. Four different model structures, including conceptual and physically based approaches, are compared. In addition to runoff simulations, results for soil moisture and the runoff components of overland flow, interflow and base flow are analysed. Catchment runoff is simulated satisfactorily by all four model structures and shows only minor differences. Systematic deviations from runoff observations provide insight into model structural deficiencies. While physically based model structures capture some single runoff events better, they do not generally outperform conceptual model structures. Contributions to uncertainty in runoff simulations stemming from the choice of model structure show similar dimensions to those arising from parameter selection and the representation of precipitation input. Variations in precipitation mainly affect the general level and peaks of runoff, while different model structures lead to different simulated runoff dynamics. Large differences between the four analysed models are detected for simulations of soil moisture and, even more pronounced, runoff components. Soil moisture changes are more dynamical in the physically based model structures, which is in better agreement with observations. Streamflow contributions of overland flow are considerably lower in these models than in the more conceptual approaches. Observations of runoff components are rarely made and are not available in this study, but are shown to have high potential for an effective selection of appropriate model structures (author) [de]
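    The conceptual end of the model-structure spectrum discussed here can be as simple as a single linear reservoir; a minimal sketch (with an assumed storage coefficient) of the kind of structure such diagnostics compare:

        import numpy as np

        def linear_reservoir(precip, k=0.2, s0=0.0):
            """One-bucket conceptual model: runoff is a fixed fraction of storage."""
            s, q = s0, []
            for p in precip:
                s += p               # rainfall fills the store
                out = k * s          # Q_t = k * S_t
                s -= out
                q.append(out)
            return np.array(q)

        rain = np.array([0, 10, 5, 0, 0, 0, 20, 0, 0, 0], dtype=float)
        print(linear_reservoir(rain).round(2))   # recession after each rain pulse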

  13. Integrated materials–structural models

    DEFF Research Database (Denmark)

    Stang, Henrik; Geiker, Mette Rica

    2008-01-01

… repair works and strengthening methods for structures. A very significant part of the infrastructure consists of reinforced concrete structures. Even though reinforced concrete structures typically are very competitive, certain concrete structures suffer from various types of degradation. A framework … should define a framework in which materials research results eventually should fit in, and on the other side the materials research should define needs and capabilities in structural modelling. Integrated materials-structural models of a general nature are almost non-existent in the field of cement-based …

  14. Measurement bias detection with Kronecker product restricted models for multivariate longitudinal data: an illustration with health-related quality of life data from thirteen measurement occasions

    Science.gov (United States)

    Verdam, Mathilde G. E.; Oort, Frans J.

    2014-01-01

Highlights: application of the Kronecker product to construct parsimonious structural equation models for multivariate longitudinal data; a method for the investigation of measurement bias with Kronecker product restricted models; application of these methods to health-related quality of life data from bone metastasis patients, collected at 13 consecutive measurement occasions; the use of curves to facilitate substantive interpretation of apparent measurement bias; and assessment of change in common factor means, after accounting for apparent measurement bias. Longitudinal measurement invariance is usually investigated with a longitudinal factor model (LFM). However, with multiple measurement occasions, the number of parameters to be estimated increases with a multiple of the number of measurement occasions. To guard against too low ratios of numbers of subjects to numbers of parameters, we can use Kronecker product restrictions to model the multivariate longitudinal structure of the data. These restrictions can be imposed on all parameter matrices, including measurement invariance restrictions on factor loadings and intercepts. The resulting models are parsimonious and have an attractive interpretation, but require different methods for the investigation of measurement bias. Specifically, additional parameter matrices are introduced to accommodate possible violations of measurement invariance. These additional matrices consist of measurement bias parameters that are either fixed at zero or free to be estimated. In cases of measurement bias, it is also possible to model the bias over time, e.g., with linear or non-linear curves. Measurement bias detection with Kronecker product restricted models will be illustrated with multivariate longitudinal data from 682 bone metastasis patients whose health-related quality of life (HRQL) was measured at 13 consecutive weeks. PMID:25295016
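A small numeric illustration of why the Kronecker restriction is parsimonious (13 occasions as in the abstract; the four-scale block is hypothetical):

```python
import numpy as np

T, P = 13, 4                        # 13 occasions, 4 hypothetical HRQL scales
rng = np.random.default_rng(1)

A = rng.normal(size=(T, T)); sigma_time = A @ A.T    # occasion covariance
B = rng.normal(size=(P, P)); sigma_scale = B @ B.T   # scale covariance

# Kronecker product restriction: the full (T*P) x (T*P) covariance of all
# occasion-by-scale combinations is built from the two small blocks.
sigma_full = np.kron(sigma_time, sigma_scale)

n_restricted = T * (T + 1) // 2 + P * (P + 1) // 2   # 101 free parameters
n_saturated = (T * P) * (T * P + 1) // 2             # 1378 free parameters
print(n_restricted, n_saturated)
```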

  15. Sparse estimation of polynomial dynamical models

    NARCIS (Netherlands)

    Toth, R.; Hjalmarsson, H.; Rojas, C.R.; Kinnaert, M.

    2012-01-01

    In many practical situations, it is highly desirable to estimate an accurate mathematical model of a real system using as few parameters as possible. This can be motivated either from appealing to a parsimony principle (Occam's razor) or from the view point of the utilization complexity in terms of
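The record is truncated, but the parsimony principle it appeals to is commonly operationalized with l1 (sparse) regularization; a generic sketch, not the authors' estimator:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, size=(200, 1))                  # input signal
y = 2.0 * u[:, 0] - 0.5 * u[:, 0] ** 3 + 0.05 * rng.normal(size=200)

# Overparameterized polynomial model; the l1 penalty prunes terms.
X = PolynomialFeatures(degree=7, include_bias=False).fit_transform(u)
model = Lasso(alpha=0.01).fit(X, y)
print(np.round(model.coef_, 2))    # most coefficients shrink to exactly zero
```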

  16. The spatial representation of market information

    NARCIS (Netherlands)

    DeSarbo, WS; Degeratu, AM; Wedel, M; Saxton, MK

    2001-01-01

    To be used effectively, market knowledge and information must be structured and represented in ways that are parsimonious and conducive to efficient managerial decision making. This manuscript proposes a new latent structure spatial model for the representation of market information that meets this

  17. Hydrologic consistency as a basis for assessing complexity of monthly water balance models for the continental United States

    Science.gov (United States)

    Martinez, Guillermo F.; Gupta, Hoshin V.

    2011-12-01

    Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.
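A schematic of the selection logic described above: rank candidates by an information criterion, but only among structures that pass the consistency tests. The model names and numbers are hypothetical:

```python
import numpy as np

def aic(n, rss, k):
    """Gaussian-error AIC from residual sum of squares and k parameters."""
    return n * np.log(rss / n) + 2 * k

# (name, residual sum of squares, n parameters, passes hydrologic
# consistency tests such as reproducing the runoff ratio?)
candidates = [("2-bucket", 110.0, 4, True),
              ("3-bucket", 95.0, 7, True),
              ("4-bucket", 93.0, 11, False)]    # best raw fit, but inconsistent

n = 365
feasible = [(name, aic(n, rss, k)) for name, rss, k, ok in candidates if ok]
print(min(feasible, key=lambda t: t[1]))        # consistency-constrained choice
```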

  18. Dynamic term structure models

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller; Meldrum, Andrew

    This paper studies whether dynamic term structure models for US nominal bond yields should enforce the zero lower bound by a quadratic policy rate or a shadow rate specification. We address the question by estimating quadratic term structure models (QTSMs) and shadow rate models with at most four...
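The two ways of enforcing the zero lower bound that the abstract contrasts can be written compactly in standard term structure notation (generic notation, not necessarily the authors' exact parameterization):

```latex
% Shadow rate: the observed short rate is the shadow rate s_t censored at zero
r_t = \max(s_t, 0), \qquad s_t = \delta_0 + \delta_1^{\prime} X_t
% Quadratic policy rate: non-negativity holds by construction
r_t = X_t^{\prime} \Psi X_t, \qquad \Psi \succeq 0
```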

  19. Assessing a five factor model of PTSD: is dysphoric arousal a unique PTSD construct showing differential relationships with anxiety and depression?

    Science.gov (United States)

    Armour, Cherie; Elhai, Jon D; Richardson, Don; Ractliffe, Kendra; Wang, Li; Elklit, Ask

    2012-03-01

    Posttraumatic stress disorder's (PTSD) latent structure has been widely debated. To date, two four-factor models (Numbing and Dysphoria) have received the majority of factor analytic support. Recently, Elhai et al. (2011) proposed and supported a revised (five-factor) Dysphoric Arousal model. Data were gathered from two separate samples; War veterans and Primary Care medical patients. The three models were compared and the resultant factors of the Dysphoric Arousal model were validated against external constructs of depression and anxiety. The Dysphoric Arousal model provided significantly better fit than the Numbing and Dysphoria models across both samples. When differentiating between factors, the current results support the idea that Dysphoric Arousal can be differentiated from Anxious Arousal but not from Emotional Numbing when correlated with depression. In conclusion, the Dysphoria model may be a more parsimonious representation of PTSD's latent structure in these trauma populations despite superior fit of the Dysphoric Arousal model. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Hybrid modelling of soil-structure interaction for embedded structures

    International Nuclear Information System (INIS)

    Gupta, S.; Penzien, J.

    1981-01-01

    The basic methods currently being used for the analysis of soil-structure interaction fail to properly model three-dimensional embedded structures with flexible foundations. A hybrid model for the analysis of soil-structure interaction is developed in this investigation which takes advantage of the desirable features of both the finite element and substructure methods and which minimizes their undesirable features. The hybrid model is obtained by partitioning the total soil-structure system into a nearfield and a far-field with a smooth hemispherical interface. The near-field consists of the structure and a finite region of soil immediately surrounding its base. The entire near-field may be modelled in three-dimensional form using the finite element method; thus, taking advantage of its ability to model irregular geometries, and the non-linear soil behavior in the immediate vicinity of the structure. (orig./WL)

  1. Automated protein structure modeling with SWISS-MODEL Workspace and the Protein Model Portal.

    Science.gov (United States)

    Bordoli, Lorenza; Schwede, Torsten

    2012-01-01

    Comparative protein structure modeling is a computational approach to build three-dimensional structural models for proteins using experimental structures of related protein family members as templates. Regular blind assessments of modeling accuracy have demonstrated that comparative protein structure modeling is currently the most reliable technique to model protein structures. Homology models are often sufficiently accurate to substitute for experimental structures in a wide variety of applications. Since the usefulness of a model for specific application is determined by its accuracy, model quality estimation is an essential component of protein structure prediction. Comparative protein modeling has become a routine approach in many areas of life science research since fully automated modeling systems allow also nonexperts to build reliable models. In this chapter, we describe practical approaches for automated protein structure modeling with SWISS-MODEL Workspace and the Protein Model Portal.

  2. Modeling Fetal Weight for Gestational Age: A Comparison of a Flexible Multi-level Spline-based Model with Other Approaches

    Science.gov (United States)

    Villandré, Luc; Hutcheon, Jennifer A; Perez Trejo, Maria Esther; Abenhaim, Haim; Jacobsen, Geir; Platt, Robert W

    2011-01-01

    We present a model for longitudinal measures of fetal weight as a function of gestational age. We use a linear mixed model, with a Box-Cox transformation of fetal weight values, and restricted cubic splines, in order to flexibly but parsimoniously model median fetal weight. We systematically compare our model to other proposed approaches. All proposed methods are shown to yield similar median estimates, as evidenced by overlapping pointwise confidence bands, except after 40 completed weeks, where our method seems to produce estimates more consistent with observed data. Sex-based stratification affects the estimates of the random effects variance-covariance structure, without significantly changing sex-specific fitted median values. We illustrate the benefits of including sex-gestational age interaction terms in the model over stratification. The comparison leads to the conclusion that the selection of a model for fetal weight for gestational age can be based on the specific goals and configuration of a given study without affecting the precision or value of median estimates for most gestational ages of interest. PMID:21931571
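A hedged sketch of the model class described (Box-Cox transform plus restricted cubic splines in a linear mixed model); the file and column names are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Assumed columns: weight (g), ga (gestational age, weeks), id (fetus)
df = pd.read_csv("fetal_weights.csv")            # hypothetical dataset

# Box-Cox transform of weight, then a natural (restricted) cubic spline of
# gestational age in a linear mixed model with a random intercept per fetus.
df["bc_weight"], lam = stats.boxcox(df["weight"])
model = smf.mixedlm("bc_weight ~ cr(ga, df=4)", df, groups=df["id"])
fit = model.fit()
print(fit.summary())
```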

  3. Automated Protein Structure Modeling with SWISS-MODEL Workspace and the Protein Model Portal

    OpenAIRE

    Bordoli, Lorenza; Schwede, Torsten

    2012-01-01

    Comparative protein structure modeling is a computational approach to build three-dimensional structural models for proteins using experimental structures of related protein family members as templates. Regular blind assessments of modeling accuracy have demonstrated that comparative protein structure modeling is currently the most reliable technique to model protein structures. Homology models are often sufficiently accurate to substitute for experimental structures in a wide variety of appl...

  4. Right-sizing statistical models for longitudinal data.

    Science.gov (United States)

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved.

  5. Oscillating water column structural model

    Energy Technology Data Exchange (ETDEWEB)

    Copeland, Guild [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bull, Diana L [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jepsen, Richard Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gordon, Margaret Ellen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    An oscillating water column (OWC) wave energy converter is a structure with an opening to the ocean below the free surface, i.e. a structure with a moonpool. Two structural models for a non-axisymmetric terminator design OWC, the Backward Bent Duct Buoy (BBDB) are discussed in this report. The results of this structural model design study are intended to inform experiments and modeling underway in support of the U.S. Department of Energy (DOE) initiated Reference Model Project (RMP). A detailed design developed by Re Vision Consulting used stiffeners and girders to stabilize the structure against the hydrostatic loads experienced by a BBDB device. Additional support plates were added to this structure to account for loads arising from the mooring line attachment points. A simplified structure was designed in a modular fashion. This simplified design allows easy alterations to the buoyancy chambers and uncomplicated analysis of resulting changes in buoyancy.

  6. Capital Structure: Target Adjustment Model and a Mediation Moderation Model with Capital Structure as Mediator

    OpenAIRE

    Abedmajid, Mohammed

    2015-01-01

    This study consists of two models. Model one is conducted to check if there is a target adjustment toward optimal capital structure, in the context of Turkish firm listed on the stock market, over the period 2003-2014. Model 2 captures the interaction between firm size, profitability, market value and capital structure using the moderation mediation model. The results of model 1 have shown that there is a partial adjustment of the capital structure to reach target levels. The results of...
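The target adjustment mechanism of model one is conventionally written as a partial adjustment equation; this is the textbook form, not necessarily the thesis's exact specification:

```latex
% Leverage moves a fraction \lambda of the way toward its target each period
L_{i,t} - L_{i,t-1} = \lambda \,\bigl(L^{*}_{i,t} - L_{i,t-1}\bigr) + \varepsilon_{i,t},
\qquad 0 < \lambda \le 1
% \lambda = 1 would be full adjustment; partial adjustment means \lambda < 1
```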

  7. Per Aspera ad Astra: Through Complex Population Modeling to Predictive Theory.

    Science.gov (United States)

    Topping, Christopher J; Alrøe, Hugo Fjelsted; Farrell, Katharine N; Grimm, Volker

    2015-11-01

    Population models in ecology are often not good at predictions, even if they are complex and seem to be realistic enough. The reason for this might be that Occam's razor, which is key for minimal models exploring ideas and concepts, has been too uncritically adopted for more realistic models of systems. This can tie models too closely to certain situations, thereby preventing them from predicting the response to new conditions. We therefore advocate a new kind of parsimony to improve the application of Occam's razor. This new parsimony balances two contrasting strategies for avoiding errors in modeling: avoiding inclusion of nonessential factors (false inclusions) and avoiding exclusion of sometimes-important factors (false exclusions). It involves a synthesis of traditional modeling and analysis, used to describe the essentials of mechanistic relationships, with elements that are included in a model because they have been reported to be or can arguably be assumed to be important under certain conditions. The resulting models should be able to reflect how the internal organization of populations change and thereby generate representations of the novel behavior necessary for complex predictions, including regime shifts.

  8. Continuous-Time Semi-Markov Models in Health Economic Decision Making: An Illustrative Example in Heart Failure Disease Management.

    Science.gov (United States)

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    2016-01-01

Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that the future state transitions do not depend only on the present state (Markov assumption) but also on the past through time since entry in the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-d decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimoniousness and computational complexity. © The Author(s) 2015.
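A minimal simulation showing the defining semi-Markov feature: non-exponential sojourn times make transitions depend on time since entry into the current state. States and parameters below are hypothetical, not those of the heart failure application:

```python
import numpy as np

rng = np.random.default_rng(42)
states = ["stable", "worsened", "dead"]
next_p = {"stable": [0.0, 0.8, 0.2],            # hypothetical jump probabilities
          "worsened": [0.3, 0.0, 0.7]}
sojourn = {"stable": (1.5, 24.0),               # Weibull (shape, scale), months
           "worsened": (0.8, 10.0)}

def simulate(max_time=120.0):
    t, s, path = 0.0, "stable", []
    while s != "dead" and t < max_time:
        shape, scale = sojourn[s]
        t += scale * rng.weibull(shape)         # time-in-state dependent hazard
        s = rng.choice(states, p=next_p[s])
        path.append((round(t, 1), s))
    return path

print(simulate())                               # one simulated patient history
```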

  9. Modeling spatial processes with unknown extremal dependence class

    KAUST Repository

    Huser, Raphaël G.

    2017-03-17

    Many environmental processes exhibit weakening spatial dependence as events become more extreme. Well-known limiting models, such as max-stable or generalized Pareto processes, cannot capture this, which can lead to a preference for models that exhibit a property known as asymptotic independence. However, weakening dependence does not automatically imply asymptotic independence, and whether the process is truly asymptotically (in)dependent is usually far from clear. The distinction is key as it can have a large impact upon extrapolation, i.e., the estimated probabilities of events more extreme than those observed. In this work, we present a single spatial model that is able to capture both dependence classes in a parsimonious manner, and with a smooth transition between the two cases. The model covers a wide range of possibilities from asymptotic independence through to complete dependence, and permits weakening dependence of extremes even under asymptotic dependence. Censored likelihood-based inference for the implied copula is feasible in moderate dimensions due to closed-form margins. The model is applied to oceanographic datasets with ambiguous true limiting dependence structure.

  10. A Range-Based Multivariate Model for Exchange Rate Volatility

    OpenAIRE

    Tims, Ben; Mahieu, Ronald

    2003-01-01

In this paper we present a parsimonious multivariate model for exchange rate volatilities based on logarithmic high-low ranges of daily exchange rates. The multivariate stochastic volatility model divides the log range of each exchange rate into two independent latent factors, which are interpreted as the underlying currency specific components. Due to the normality of logarithmic volatilities the model can be estimated conveniently with standard Kalman filter techniques. Our resu...
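The log-range idea can be illustrated with the classical Parkinson estimator, which maps the squared log high-low range to a daily variance; the quotes below are made up:

```python
import numpy as np
import pandas as pd

# Hypothetical daily high/low quotes for one exchange rate
fx = pd.DataFrame({"high": [1.105, 1.112, 1.098],
                   "low":  [1.091, 1.096, 1.085]})

log_range = np.log(fx["high"] / fx["low"])
parkinson_var = log_range ** 2 / (4 * np.log(2))   # daily variance estimate
print(np.sqrt(parkinson_var * 252))                # annualized volatility
```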

  11. Adding thin-ideal internalization and impulsiveness to the cognitive-behavioral model of bulimic symptoms.

    Science.gov (United States)

    Schnitzler, Caroline E; von Ranson, Kristin M; Wallace, Laurel M

    2012-08-01

    This study evaluated the cognitive-behavioral (CB) model of bulimia nervosa and an extension that included two additional maintaining factors - thin-ideal internalization and impulsiveness - in 327 undergraduate women. Participants completed measures of demographics, self-esteem, concern about shape and weight, dieting, bulimic symptoms, thin-ideal internalization, and impulsiveness. Both the original CB model and the extended model provided good fits to the data. Although structural equation modeling analyses suggested that the original CB model was most parsimonious, hierarchical regression analyses indicated that the additional variables accounted for significantly more variance. Additional analyses showed that the model fit could be improved by adding a path from concern about shape and weight, and deleting the path from dieting, to bulimic symptoms. Expanding upon the factors considered in the model may better capture the scope of variables maintaining bulimic symptoms in young women with a range of severity of bulimic symptoms. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Parametric structural modeling of insect wings

    International Nuclear Information System (INIS)

    Mengesha, T E; Vallance, R R; Barraja, M; Mittal, R

    2009-01-01

    Insects produce thrust and lift forces via coupled fluid-structure interactions that bend and twist their compliant wings during flapping cycles. Insight into this fluid-structure interaction is achieved with numerical modeling techniques such as coupled finite element analysis and computational fluid dynamics, but these methods require accurate and validated structural models of insect wings. Structural models of insect wings depend principally on the shape, dimensions and material properties of the veins and membrane cells. This paper describes a method for parametric modeling of wing geometry using digital images and demonstrates the use of the geometric models in constructing three-dimensional finite element (FE) models and simple reduced-order models. The FE models are more complete and accurate than previously reported models since they accurately represent the topology of the vein network, as well as the shape and dimensions of the veins and membrane cells. The methods are demonstrated by developing a parametric structural model of a cicada forewing.

  13. A Bayesian spatial model for neuroimaging data based on biologically informed basis functions.

    Science.gov (United States)

    Huertas, Ismael; Oldehinkel, Marianne; van Oort, Erik S B; Garcia-Solis, David; Mir, Pablo; Beckmann, Christian F; Marquand, Andre F

    2017-11-01

The dominant approach to neuroimaging data analysis employs the voxel as the unit of computation. While convenient, voxels lack biological meaning and their size is arbitrarily determined by the resolution of the image. Here, we propose a multivariate spatial model in which neuroimaging data are characterised as a linearly weighted combination of multiscale basis functions which map onto underlying brain nuclei or networks. In this model, the elementary building blocks are derived to reflect the functional anatomy of the brain during the resting state. This model is estimated using a Bayesian framework which accurately quantifies uncertainty and automatically finds the most accurate and parsimonious combination of basis functions describing the data. We demonstrate the utility of this framework by predicting quantitative SPECT images of striatal dopamine function and we compare a variety of basis sets including generic isotropic functions, anatomical representations of the striatum derived from structural MRI, and two different soft functional parcellations of the striatum derived from resting-state fMRI (rfMRI). We found that a combination of ∼50 multiscale functional basis functions accurately represented the striatal dopamine activity, and that functional basis functions derived from an advanced parcellation technique known as Instantaneous Connectivity Parcellation (ICP) provided the most parsimonious models of dopamine function. Importantly, functional basis functions derived from resting fMRI were more accurate than both structural and generic basis sets in representing dopamine function in the striatum for a fixed model order. We demonstrate the translational validity of our framework by constructing classification models for discriminating parkinsonian disorders and their subtypes. Here, we show that the ICP approach is the only basis set that performs well across all comparisons and performs better overall than the classical voxel-based approach.
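The "accurate and parsimonious combination of basis functions" can be sketched with generic Bayesian automatic relevance determination, which prunes unsupported weights; this is illustrative scikit-learn code, not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
n_voxels, n_basis = 500, 60
Phi = rng.normal(size=(n_voxels, n_basis))       # basis functions at each voxel

w_true = np.zeros(n_basis)
w_true[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]         # only 5 bases truly active
y = Phi @ w_true + 0.1 * rng.normal(size=n_voxels)   # synthetic "image"

ard = ARDRegression().fit(Phi, y)                # ARD shrinks irrelevant bases
print((np.abs(ard.coef_) > 0.1).sum(), "of", n_basis, "bases retained")
```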

  14. Structure functions from chiral soliton models

    International Nuclear Information System (INIS)

    Weigel, H.; Reinhardt, H.; Gamberg, L.

    1997-01-01

We study nucleon structure functions within the bosonized Nambu-Jona-Lasinio (NJL) model where the nucleon emerges as a chiral soliton. We discuss the model predictions on the Gottfried sum rule for electron-nucleon scattering. A comparison with a low-scale parametrization shows that the model reproduces the gross features of the empirical structure functions. We also compute the leading twist contributions of the polarized structure functions g1 and g2 in this model. We compare the model predictions on these structure functions with data from the E143 experiment by GLAP evolving them from the scale characteristic of the NJL model to the scale of the data.

  15. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty: STRUCTURAL UNCERTAINTY DIAGNOSTICS

    Energy Technology Data Exchange (ETDEWEB)

    Moges, Edom [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Demissie, Yonas [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Li, Hong-Yi [Hydrology Group, Pacific Northwest National Laboratory, Richland Washington USA

    2016-04-01

In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are evidenced by the diagnostic measures. In contrast, the diagnostics and aggregated performance measures show that French Broad has a homogeneous catchment response, making the single model adequate to capture the response.
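The adaptive integration step can be sketched as a gated combination of expert outputs; the experts, gate, and indicator values below are hypothetical:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

q_fast = np.array([5.1, 8.3, 2.0, 0.9])      # expert for quick runoff response
q_slow = np.array([1.2, 1.5, 1.4, 1.3])      # expert for baseflow
wetness = np.array([0.9, 1.4, -0.3, -1.1])   # indicator variable, e.g. soil moisture

# Gate weights depend on the indicator; wet periods favor the fast expert.
gate = softmax(np.column_stack([2.0 * wetness, -2.0 * wetness]))
q_hme = gate[:, 0] * q_fast + gate[:, 1] * q_slow
print(q_hme.round(2))
```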

  16. Toward a formalized account of attitudes: The Causal Attitude Network (CAN) model.

    Science.gov (United States)

    Dalege, Jonas; Borsboom, Denny; van Harreveld, Frenk; van den Berg, Helma; Conner, Mark; van der Maas, Han L J

    2016-01-01

This article introduces the Causal Attitude Network (CAN) model, which conceptualizes attitudes as networks consisting of evaluative reactions and interactions between these reactions. Relevant evaluative reactions include beliefs, feelings, and behaviors toward the attitude object. Interactions between these reactions arise through direct causal influences (e.g., the belief that snakes are dangerous causes fear of snakes) and mechanisms that support evaluative consistency between related contents of evaluative reactions (e.g., people tend to align their belief that snakes are useful with their belief that snakes help maintain ecological balance). In the CAN model, the structure of attitude networks conforms to a small-world structure: evaluative reactions that are similar to each other form tight clusters, which are connected by a sparser set of "shortcuts" between them. We argue that the CAN model provides a realistic formalized measurement model of attitudes and therefore fills a crucial gap in the attitude literature. Furthermore, the CAN model provides testable predictions for the structure of attitudes and how they develop, remain stable, and change over time. Attitude strength is conceptualized in terms of the connectivity of attitude networks and we show that this provides a parsimonious account of the differences between strong and weak attitudes. We discuss the CAN model in relation to possible extensions, implication for the assessment of attitudes, and possibilities for further study. (c) 2015 APA, all rights reserved.
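The small-world structure the model posits can be generated directly with a Watts-Strogatz graph; a toy attitude network with hypothetical sizes:

```python
import networkx as nx

# 20 evaluative reactions, each linked to its 4 nearest neighbours (tight
# clusters), with 10% of edges rewired to act as "shortcuts".
G = nx.connected_watts_strogatz_graph(n=20, k=4, p=0.1, seed=7)

print(nx.average_clustering(G))             # high: clustered reactions
print(nx.average_shortest_path_length(G))   # low: shortcuts keep paths short
```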

  17. Time series modelling of global mean temperature for managerial decision-making.

    Science.gov (United States)

    Romilly, Peter

    2005-07-01

    Climate change has important implications for business and economic activity. Effective management of climate change impacts will depend on the availability of accurate and cost-effective forecasts. This paper uses univariate time series techniques to model the properties of a global mean temperature dataset in order to develop a parsimonious forecasting model for managerial decision-making over the short-term horizon. Although the model is estimated on global temperature data, the methodology could also be applied to temperature data at more localised levels. The statistical techniques include seasonal and non-seasonal unit root testing with and without structural breaks, as well as ARIMA and GARCH modelling. A forecasting evaluation shows that the chosen model performs well against rival models. The estimation results confirm the findings of a number of previous studies, namely that global mean temperatures increased significantly throughout the 20th century. The use of GARCH modelling also shows the presence of volatility clustering in the temperature data, and a positive association between volatility and global mean temperature.
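A skeletal version of the modeling pipeline described (ARIMA for the conditional mean, GARCH for volatility clustering), assuming a hypothetical anomaly file; the unit-root and structural break testing steps are omitted:

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

# Hypothetical annual global mean temperature anomalies (deg C)
temps = pd.read_csv("gmt_anomalies.csv", index_col=0).squeeze()

arima_fit = ARIMA(temps, order=(1, 1, 1)).fit()      # parsimonious mean model
print(arima_fit.summary())

# GARCH(1,1) on the residuals captures volatility clustering
garch_fit = arch_model(arima_fit.resid.dropna(), vol="GARCH", p=1, q=1).fit(disp="off")
print(garch_fit.params)
```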

  18. Intelligent-based Structural Damage Detection Model

    International Nuclear Information System (INIS)

    Lee, Eric Wai Ming; Yu, K.F.

    2010-01-01

This paper presents the application of a novel Artificial Neural Network (ANN) model for the diagnosis of structural damage. The ANN model, denoted as the GRNNFA, is a hybrid model combining the General Regression Neural Network Model (GRNN) and the Fuzzy ART (FA) model. It not only retains the important features of the GRNN and FA models (i.e. fast and stable network training and incremental growth of network structure) but also facilitates the removal of the noise embedded in the training samples. Structural damage alters the stiffness distribution of the structure, thereby changing the natural frequencies and mode shapes of the system. The measured modal parameter changes due to a particular damage are treated as patterns for that damage. The proposed GRNNFA model was trained to learn those patterns in order to detect the possible damage location of the structure. Simulated data is employed to verify and illustrate the procedures of the proposed ANN-based damage diagnosis methodology. The results of this study have demonstrated the feasibility of applying the GRNNFA model to structural damage diagnosis even when the training samples were noise contaminated.

  19. Intelligent-based Structural Damage Detection Model

    Science.gov (United States)

    Lee, Eric Wai Ming; Yu, Kin Fung

    2010-05-01

This paper presents the application of a novel Artificial Neural Network (ANN) model for the diagnosis of structural damage. The ANN model, denoted as the GRNNFA, is a hybrid model combining the General Regression Neural Network Model (GRNN) and the Fuzzy ART (FA) model. It not only retains the important features of the GRNN and FA models (i.e. fast and stable network training and incremental growth of network structure) but also facilitates the removal of the noise embedded in the training samples. Structural damage alters the stiffness distribution of the structure, thereby changing the natural frequencies and mode shapes of the system. The measured modal parameter changes due to a particular damage are treated as patterns for that damage. The proposed GRNNFA model was trained to learn those patterns in order to detect the possible damage location of the structure. Simulated data is employed to verify and illustrate the procedures of the proposed ANN-based damage diagnosis methodology. The results of this study have demonstrated the feasibility of applying the GRNNFA model to structural damage diagnosis even when the training samples were noise contaminated.
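The GRNN half of the hybrid is a kernel-weighted regression; a minimal sketch of the standard GRNN prediction rule mapping (hypothetical) modal parameter changes to a damage location:

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.1):
    """Standard GRNN output: Gaussian-kernel weighted mean of training targets."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w @ y_train / w.sum()

# Hypothetical patterns: shifts of 3 natural frequencies -> damaged element
X = np.array([[0.02, 0.00, 0.01],
              [0.00, 0.03, 0.01],
              [0.01, 0.01, 0.04]])
y = np.array([1.0, 2.0, 3.0])
print(grnn_predict(X, y, np.array([0.00, 0.028, 0.012])))   # close to 2.0
```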

  20. A Comparison of Competing Models for Understanding Industrial Organization’s Acceptance of Cloud Services

    Directory of Open Access Journals (Sweden)

    Shui-Lien Chen

    2018-03-01

Full Text Available Cloud computing is the next generation in computing, and the next natural step in the evolution of on-demand information technology services and products. However, only a few studies have addressed the adoption of cloud computing from an organizational perspective, and these have not established whether their research models are the best-fitting models. The purpose of this paper is to construct research competing models (RCMs) and determine the best-fitting model for understanding industrial organizations' acceptance of cloud services. This research integrated the technology acceptance model and the principle of model parsimony to develop four cloud service adoption RCMs, with enterprise usage intention being used as a proxy for actual behavior, and then compared the RCMs using structural equation modeling (SEM). Data derived from a questionnaire-based survey of 227 firms in Taiwan were tested against the relationships through SEM. Based on the empirical study, the results indicated that, although all four RCMs had a high goodness of fit, in both nested and non-nested structure comparisons, research competing model A (Model A) demonstrated superior performance and was the best-fitting model. This study introduced a model development strategy that can most accurately explain and predict the behavioral intention of organizations to adopt cloud services.

  1. Structure-Function Network Mapping and Its Assessment via Persistent Homology

    Science.gov (United States)

    2017-01-01

    Understanding the relationship between brain structure and function is a fundamental problem in network neuroscience. This work deals with the general method of structure-function mapping at the whole-brain level. We formulate the problem as a topological mapping of structure-function connectivity via matrix function, and find a stable solution by exploiting a regularization procedure to cope with large matrices. We introduce a novel measure of network similarity based on persistent homology for assessing the quality of the network mapping, which enables a detailed comparison of network topological changes across all possible thresholds, rather than just at a single, arbitrary threshold that may not be optimal. We demonstrate that our approach can uncover the direct and indirect structural paths for predicting functional connectivity, and our network similarity measure outperforms other currently available methods. We systematically validate our approach with (1) a comparison of regularized vs. non-regularized procedures, (2) a null model of the degree-preserving random rewired structural matrix, (3) different network types (binary vs. weighted matrices), and (4) different brain parcellation schemes (low vs. high resolutions). Finally, we evaluate the scalability of our method with relatively large matrices (2514x2514) of structural and functional connectivity obtained from 12 healthy human subjects measured non-invasively while at rest. Our results reveal a nonlinear structure-function relationship, suggesting that the resting-state functional connectivity depends on direct structural connections, as well as relatively parsimonious indirect connections via polysynaptic pathways. PMID:28046127
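One widely used matrix-function mapping (illustrative here, not necessarily the authors' exact choice) predicts functional connectivity via the matrix exponential of the structural matrix, which weights walks of length k by 1/k! and therefore includes the damped indirect, polysynaptic paths the abstract highlights:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 50
S = (rng.random((n, n)) < 0.1).astype(float)    # toy structural connectivity
S = np.triu(S, 1)
S = S + S.T                                     # symmetric, no self-loops

F_pred = expm(S)    # direct edges plus damped indirect walk contributions
print(F_pred[0, :5].round(3))
```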

  2. Probabilistic modeling of timber structures

    DEFF Research Database (Denmark)

    Köhler, Jochen; Sørensen, John Dalsgaard; Faber, Michael Havbro

    2007-01-01

The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) [Joint Committee of Structural Safety. Probabilistic Model Code, Internet Publication: www.jcss.ethz.ch; 2001] and of the COST action E24 'Reliability of Timber Structures' [COST Action E 24, Reliability of timber structures. Several meetings and Publications, Internet Publication: http://www.km.fgg.uni-lj.si/coste24/coste24.htm; 2005]. The present proposal is based on discussions and comments from participants of the COST E24 action and the members of the JCSS. The paper contains a description of the basic reference properties for timber strength parameters and ultimate limit state equations for timber components. The recommended probabilistic model for these basic properties …

  3. Structure of Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition Criteria for Obsessive–Compulsive Personality Disorder in Patients With Binge Eating Disorder

    Science.gov (United States)

    Ansell, Emily B; Pinto, Anthony; Edelen, Maria Orlando; Grilo, Carlos M

    2013-01-01

Objective: To examine 1-, 2-, and 3-factor model structures through confirmatory analytic procedures for Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) obsessive–compulsive personality disorder (OCPD) criteria in patients with binge eating disorder (BED). Method: Participants were consecutive outpatients (n = 263) with binge eating disorder and were assessed with semi-structured interviews. The 8 OCPD criteria were submitted to confirmatory factor analyses in Mplus Version 4.2 (Los Angeles, CA) in which previously identified factor models of OCPD were compared for fit, theoretical relevance, and parsimony. Nested models were compared for significant improvements in model fit. Results: Evaluation of indices of fit in combination with theoretical considerations suggest a multifactorial model is a significant improvement in fit over the current DSM-IV single-factor model of OCPD. Though the data support both 2- and 3-factor models, the 3-factor model is hindered by an underspecified third factor. Conclusion: A multifactorial model of OCPD incorporating the factors perfectionism and rigidity represents the best compromise of fit and theory in modelling the structure of OCPD in patients with BED. A third factor representing miserliness may be relevant in BED populations but needs further development. The perfectionism and rigidity factors may represent distinct intrapersonal and interpersonal attempts at control and may have implications for the assessment of OCPD. PMID:19087485

  4. Structure of diagnostic and statistical manual of mental disorders, fourth edition criteria for obsessive-compulsive personality disorder in patients with binge eating disorder.

    Science.gov (United States)

    Ansell, Emily B; Pinto, Anthony; Edelen, Maria Orlando; Grilo, Carlos M

    2008-12-01

To examine 1-, 2-, and 3-factor model structures through confirmatory analytic procedures for Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) obsessive-compulsive personality disorder (OCPD) criteria in patients with binge eating disorder (BED). Participants were consecutive outpatients (n = 263) with binge eating disorder and were assessed with semi-structured interviews. The 8 OCPD criteria were submitted to confirmatory factor analyses in Mplus Version 4.2 (Los Angeles, CA) in which previously identified factor models of OCPD were compared for fit, theoretical relevance, and parsimony. Nested models were compared for significant improvements in model fit. Evaluation of indices of fit in combination with theoretical considerations suggest a multifactorial model is a significant improvement in fit over the current DSM-IV single-factor model of OCPD. Though the data support both 2- and 3-factor models, the 3-factor model is hindered by an underspecified third factor. A multifactorial model of OCPD incorporating the factors perfectionism and rigidity represents the best compromise of fit and theory in modelling the structure of OCPD in patients with BED. A third factor representing miserliness may be relevant in BED populations but needs further development. The perfectionism and rigidity factors may represent distinct intrapersonal and interpersonal attempts at control and may have implications for the assessment of OCPD.

  5. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
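A hedged sketch of the comparison logic (the file and column names are hypothetical stand-ins; actual NHATS variables are not named like this):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("nhats_rounds.csv")            # hypothetical numeric extract
simple = ["age", "sex", "balance_problem", "prior_fall"]
full = simple + ["sppb_score"]                  # adds a performance test score

train, test = train_test_split(df, random_state=0)
for cols in (simple, full):
    m = LogisticRegression(max_iter=1000).fit(train[cols], train["any_fall"])
    auc = roc_auc_score(test["any_fall"], m.predict_proba(test[cols])[:, 1])
    print(len(cols), "predictors, AUC =", round(auc, 3))
```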

  6. Quality assessment of protein model-structures based on structural and functional similarities.

    Science.gov (United States)

    Konopka, Bogumil M; Nebel, Jean-Christophe; Kotulska, Malgorzata

    2012-09-21

Experimental determination of protein 3D structures is expensive, time consuming and sometimes impossible. The gap between the number of protein structures deposited in the World Wide Protein Data Bank and the number of sequenced proteins broadens constantly. Computational modeling is deemed to be one of the ways to deal with the problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model a structure from a sequence or from partial structural information, e.g. contact maps. Consequently, biologists have the ability to generate automatically a putative 3D structure model of any protein. However, the main issue becomes evaluation of the model quality, which is one of the most important challenges of structural biology. GOBA (Gene Ontology-Based Assessment) is a novel Protein Model Quality Assessment Program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high quality model is expected to be structurally similar to proteins functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using the standardized and hierarchical description of proteins provided by Gene Ontology, combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures. One is a single model quality assessment method; the other is its modification, which provides a relative measure of model quality. Exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. The validation shows that the method is able to discriminate between good and bad model-structures. The best of the tested GOBA scores achieved mean Pearson correlations of 0.74 and 0.8 with the observed quality of models in our CASP8- and CASP9-based validation sets. GOBA also obtained the best result for two targets of CASP8, and

  7. Spatial Statistical Network Models for Stream and River Temperatures in the Chesapeake Bay Watershed

    Science.gov (United States)

    Numerous metrics have been proposed to describe stream/river thermal regimes, and researchers are still struggling with the need to describe thermal regimes in a parsimonious fashion. Regional temperature models are needed for characterizing and mapping current stream thermal re...

  8. Numerical Modelling of Structures with Uncertainties

    Directory of Open Access Journals (Sweden)

    Kahsin Maciej

    2017-04-01

Full Text Available The nature of environmental interactions, as well as the large dimensions and complex structure of marine offshore objects, make designing, building and operating these objects a great challenge. This is the reason why a vast majority of investment cases of this type include structural analysis, performed using scaled laboratory models and complemented by extended computer simulations. The present paper focuses on FEM modelling of the offshore wind turbine supporting structure. The problem is then studied using modal analysis, sensitivity analysis, and the design of experiment (DOE) and response surface model (RSM) methods. The results of the modal-analysis-based simulations were used for assessing the quality of the FEM model against the data measured during the experimental modal analysis of the scaled laboratory model for different support conditions. The sensitivity analysis, in turn, provided opportunities for assessing the effect of individual FEM model parameters on the dynamic response of the examined supporting structure. The DOE and RSM methods allowed the effect of model parameter changes on the supporting structure response to be determined.
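The DOE-to-RSM step can be sketched as fitting a quadratic surface to designed runs; the factors and responses below are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical coded DOE runs over two model parameters (e.g. soil stiffness,
# water depth) and the resulting first natural frequency from the FEM model.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0],
              [-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)
f1 = np.array([2.10, 2.25, 2.62, 2.80, 2.45, 2.28, 2.70, 2.38, 2.52])

quad = PolynomialFeatures(degree=2, include_bias=False)
rsm = LinearRegression().fit(quad.fit_transform(X), f1)   # response surface
print(rsm.predict(quad.transform([[0.5, -0.5]])))         # cheap surrogate call
```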

  9. Soil Retaining Structures : Development of models for structural analysis

    NARCIS (Netherlands)

    Bakker, K.J.

    2000-01-01

The topic of this thesis is the development of models for the structural analysis of soil retaining structures. The soil retaining structures being looked at are: block revetments, flexible retaining walls and bored tunnels in soft soil. Within this context typical structural behavior of these

  10. A state-dependent model for inflation forecasting

    OpenAIRE

    Andrea Stella; James H. Stock

    2012-01-01

    We develop a parsimonious bivariate model of inflation and unemployment that allows for persistent variation in trend inflation and the NAIRU. The model, which consists of five unobserved components (including the trends) with stochastic volatility, implies a time-varying VAR for changes in the rates of inflation and unemployment. The implied backwards-looking Phillips curve has a time-varying slope that is steeper in the 1970s than in the 1990s. Pseudo out-of-sample forecasting experiments i...
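The trend-plus-noise core of such unobserved components models can be written compactly (standard notation; the paper's full bivariate specification adds unemployment and the NAIRU):

```latex
% Inflation = slowly moving trend + transitory noise, both with stochastic volatility
\pi_t = \tau_t + \eta_t, \qquad \eta_t \sim N\!\left(0, \sigma^2_{\eta,t}\right)
\tau_t = \tau_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim N\!\left(0, \sigma^2_{\varepsilon,t}\right)
% the log variances \log\sigma^2_{\eta,t} and \log\sigma^2_{\varepsilon,t} follow random walks
```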

  11. Accurate protein structure modeling using sparse NMR data and homologous structure information.

    Science.gov (United States)

    Thompson, James M; Sgourakis, Nikolaos G; Liu, Gaohua; Rossi, Paolo; Tang, Yuefeng; Mills, Jeffrey L; Szyperski, Thomas; Montelione, Gaetano T; Baker, David

    2012-06-19

While information from homologous structures plays a central role in X-ray structure determination by molecular replacement, such information is rarely used in NMR structure determination because it can be incorrect, both locally and globally, when evolutionary relationships are inferred incorrectly or there has been considerable evolutionary structural divergence. Here we describe a method that allows robust modeling of protein structures of up to 225 residues by combining 1HN, 13C, and 15N backbone and 13Cβ chemical shift data, distance restraints derived from homologous structures, and a physically realistic all-atom energy function. Accurate models are distinguished from inaccurate models generated using incorrect sequence alignments by requiring that (i) the all-atom energies of models generated using the restraints are lower than those of models generated in unrestrained calculations and (ii) the low-energy structures converge to within 2.0 Å backbone rmsd over 75% of the protein. Benchmark calculations on known structures and blind targets show that the method can accurately model protein structures, even with very remote homology information, to a backbone rmsd of 1.2-1.9 Å relative to the conventionally determined NMR ensembles and of 0.9-1.6 Å relative to X-ray structures for well-defined regions of the protein structures. This approach facilitates the accurate modeling of protein structures using backbone chemical shift data without need for side-chain resonance assignments and extensive analysis of NOESY cross-peak assignments.

  12. Mortgage Risk and the Yield Curve

    DEFF Research Database (Denmark)

    Malkhozov, Aytek; Mueller, Philippe; Vedolin, Andrea

    2016-01-01

    We study feedback from the risk of outstanding mortgage-backed securities (MBS) on the level and volatility of interest rates. We incorporate supply shocks resulting from changes in MBS duration into a parsimonious equilibrium dynamic term structure model and derive three predictions...

  13. Parsimonious classification of binary lacunarity data computed from food surface images using kernel principal component analysis and artificial neural networks.

    Science.gov (United States)

    Iqbal, Abdullah; Valous, Nektarios A; Sun, Da-Wen; Allen, Paul

    2011-02-01

Lacunarity is about quantifying the degree of spatial heterogeneity in the visual texture of imagery through the identification of the relationships between patterns and their spatial configurations in a two-dimensional setting. The computed lacunarity data can designate a mathematical index of spatial heterogeneity, therefore the corresponding feature vectors should possess the necessary inter-class statistical properties that would enable them to be used for pattern recognition purposes. The objective of this study is to construct a supervised parsimonious classification model of binary lacunarity data, computed by Valous et al. (2009), from pork ham slice surface images, with the aid of kernel principal component analysis (KPCA) and artificial neural networks (ANNs), using a portion of informative salient features. At first, the dimension of the initial space (510 features) was reduced by 90% in order to avoid any noise effects in the subsequent classification. Then, using KPCA, the first nineteen kernel principal components (99.04% of total variance) were extracted from the reduced feature space and were used as input to the ANN. An adaptive feedforward multilayer perceptron (MLP) classifier was employed to obtain a suitable mapping from the input dataset. The correct classification percentages for the training, test and validation sets were 86.7%, 86.7%, and 85.0%, respectively. The results confirm that the classification performance was satisfactory. The binary lacunarity spatial metric captured relevant information that provided a good level of differentiation among pork ham slice images. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
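The pipeline described (KPCA to nineteen components feeding an MLP) maps onto standard tools; a self-contained sketch on synthetic stand-in data, not the authors' code:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(180, 51))      # stand-in for reduced lacunarity features
y = rng.integers(0, 3, size=180)    # three hypothetical quality classes

clf = make_pipeline(
    KernelPCA(n_components=19, kernel="rbf"),    # nonlinear feature extraction
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
clf.fit(X, y)
print(round(clf.score(X, y), 3))    # training accuracy on the toy data
```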

  14. Generalized structured component analysis a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan. Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  15. From spatially variable streamflow to distributed hydrological models: Analysis of key modeling decisions

    Science.gov (United States)

    Fenicia, Fabrizio; Kavetski, Dmitri; Savenije, Hubert H. G.; Pfister, Laurent

    2016-02-01

    This paper explores the development and application of distributed hydrological models, focusing on the key decisions of how to discretize the landscape, which model structures to use in each landscape element, and how to link model parameters across multiple landscape elements. The case study considers the Attert catchment in Luxembourg—a 300 km2 mesoscale catchment with 10 nested subcatchments that exhibit clearly different streamflow dynamics. The research questions are investigated using conceptual models applied at hydrologic response unit (HRU) scales (1-4 HRUs) on 6 hourly time steps. Multiple model structures are hypothesized and implemented using the SUPERFLEX framework. Following calibration, space/time model transferability is tested using a split-sample approach, with evaluation criteria including streamflow prediction error metrics and hydrological signatures. Our results suggest that: (1) models using geology-based HRUs are more robust and capture the spatial variability of streamflow time series and signatures better than models using topography-based HRUs; this finding supports the hypothesis that, in the Attert, geology exerts a stronger control than topography on streamflow generation, (2) streamflow dynamics of different HRUs can be represented using distinct and remarkably simple model structures, which can be interpreted in terms of the perceived dominant hydrologic processes in each geology type, and (3) the same maximum root zone storage can be used across the three dominant geological units with no loss in model transferability; this finding suggests that the partitioning of water between streamflow and evaporation in the study area is largely independent of geology and can be used to improve model parsimony. The modeling methodology introduced in this study is general and can be used to advance our broader understanding and prediction of hydrological behavior, including the landscape characteristics that control hydrologic response, the

  16. Development of the tube bundle structure for fluid-structure interaction analysis model

    International Nuclear Information System (INIS)

    Yoon, Kyung Ho; Kim, Jae Yong

    2010-02-01

    Tube bundle structures within a boiler or heat exchanger are subject to fluid-structure, thermal-structure and fluid-thermal-structure coupled boundary conditions. Under these complicated boundary conditions, fluid-structure interaction (FSI) occurs when fluid flow causes deformation of the structure. This deformation, in turn, changes the boundary conditions for the fluid flow. Structural analysis has traditionally divided the problem into the fluid and structural disciplines, which were then analyzed independently of each other. However, the fluid dynamic forces affect the behavior of the structure, and the vibration amplitude of the structure affects the fluid. In the FSI analysis model, the fluid and structure models were created separately, the FSI boundary condition was defined, and both fields were then analyzed simultaneously in one domain. The analysis results were compared with those of the experimental method to validate the analysis model. A flow-induced vibration test was executed with a single rod configuration. The vibration amplitudes of a fuel rod were measured by the laser vibro-meter system in the x- and y-directions. The analysis results did not agree closely with the test data, but the trend was very similar to the test result. In the FSI coupled analysis case, the turbulence model was very important to the reliability and accuracy of the analysis model. Therefore, the analysis model will require further study.

  17. Replication and validation of higher order models demonstrated that a summary score for the EORTC QLQ-C30 is robust

    DEFF Research Database (Denmark)

    Giesinger, Johannes M.; Kieffer, Jacobien M.; Fayers, Peter M.

    2016-01-01

    OBJECTIVE: To further evaluate the higher order measurement structure of the European Organisation for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire Core 30 (QLQ-C30), with the aim of generating a summary score. STUDY DESIGN AND SETTING: Using pretreatment QLQ-C30 data (N = 3,282), we conducted confirmatory factor analyses to test seven previously evaluated higher order models. We compared the summary score(s) derived from the best performing higher order model with the original QLQ-C30 scale scores, using tumor stage, performance status, and change over time (N = 244) as grouping variables. RESULTS: Although all models showed acceptable fit, we continued in the interest of parsimony with known-groups validity and responsiveness analyses using a summary score derived from the single higher order factor model. The validity and responsiveness of this QLQ-C30 summary score...

  18. Comparison of the new intermediate complex atmospheric research (ICAR) model with the WRF model in a mesoscale catchment in Central Europe

    Science.gov (United States)

    Härer, Stefan; Bernhardt, Matthias; Gutmann, Ethan; Bauer, Hans-Stefan; Schulz, Karsten

    2017-04-01

    Until recently, a large gap existed in atmospheric downscaling strategies. On the one hand, computationally efficient statistical approaches are widely used; on the other hand, dynamical but CPU-intensive numerical atmospheric models like the Weather Research and Forecasting (WRF) model exist. The Intermediate Complexity Atmospheric Research (ICAR) model developed at NCAR (Boulder, Colorado, USA) addresses this gap by combining the strengths of both approaches: the process-based structure of a dynamical model and its applicability in a changing climate, as well as the speed of a parsimonious modelling approach, which facilitates the modelling of ensembles and offers a straightforward way to test new parametrization schemes and various input data sources. However, the ICAR model had not yet been tested in Europe or on slightly undulated terrain. This study evaluates the ICAR model against WRF model runs in Central Europe for the first time, comparing a complete year of model results in the mesoscale Attert catchment (Luxembourg). In addition to these modelling results, we also describe the first implementation of ICAR on an Intel Phi architecture and perform speed tests between the Vienna cluster, a standard workstation and an Intel Phi coprocessor. Finally, the study gives an outlook on sensitivity studies using slightly different input data sources.

  19. Modeling and identification in structural dynamics

    OpenAIRE

    Jayakumar, Paramsothy

    1987-01-01

    Analytical modeling of structures subjected to ground motions is an important aspect of fully dynamic earthquake-resistant design. In general, linear models are only sufficient to represent structural responses resulting from earthquake motions of small amplitudes. However, the response of structures during strong ground motions is highly nonlinear and hysteretic. System identification is an effective tool for developing analytical models from experimental data. Testing of full-scale prot...

  20. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    Science.gov (United States)

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: multilevel modeling (the hierarchical linear model) and structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how the two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and structural equation modeling approaches produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions about measurement errors and factor loadings, yielding better model fit indices.

  1. Introduction to the special issue: parsimony and redundancy in models of language.

    Science.gov (United States)

    Wiechmann, Daniel; Kerz, Elma; Snider, Neal; Jaeger, T Florian

    2013-09-01

    One of the most fundamental goals in linguistic theory is to understand the nature of linguistic knowledge, that is, the representations and mechanisms that figure in a cognitively plausible model of human language processing. The past 50 years have witnessed the development and refinement of various theories about what kind of 'stuff' human knowledge of language consists of, and technological advances now permit the development of increasingly sophisticated computational models implementing key assumptions of different theories from both rationalist and empiricist perspectives. The present special issue does not aim to present or discuss the arguments for and against the two epistemological stances or discuss evidence that supports either of them (cf. Bod, Hay, & Jannedy, 2003; Christiansen & Chater, 2008; Hauser, Chomsky, & Fitch, 2002; Oaksford & Chater, 2007; O'Donnell, Hauser, & Fitch, 2005). Rather, the research presented in this issue, which we label usage-based here, conceives of linguistic knowledge as being induced from experience. According to the strongest of such accounts, the acquisition and processing of language can be explained with reference to general cognitive mechanisms alone (rather than with reference to innate language-specific mechanisms). Defined in these terms, usage-based approaches encompass approaches referred to as experience-based, performance-based and/or emergentist approaches (Arnon & Snider, 2010; Bannard, Lieven, & Tomasello, 2009; Bannard & Matthews, 2008; Chater & Manning, 2006; Clark & Lappin, 2010; Gerken, Wilson, & Lewis, 2005; Gomez, 2002;

  2. Urban-Related Environmental Variables and Their Relation with Patterns in Biological Community Structure in the Fountain Creek Basin, Colorado, 2003-2005

    Science.gov (United States)

    Zuellig, Robert E.; Bruce, James F.; Evans, Erin E.; Stogner, Sr., Robert W.

    2007-01-01

    In 2003, the U.S. Geological Survey, in cooperation with Colorado Springs City Engineering, began a study to evaluate the influence of urbanization on stream ecosystems. To accomplish this task, invertebrate, fish, stream discharge, habitat, water-chemistry, and land-use data were collected from 13 sites in the Fountain Creek basin from 2003 to 2005. The Hydrologic Index Tool was used to calculate hydrologic indices known to be related to urbanization. Response of stream hydrology to urbanization was evident among hydrologic variables that described stormflow. These indices included one measurement of high-flow magnitude, two measurements of high-flow frequency, and one measurement of stream flashiness. Habitat and selected nonstormflow water chemistry were characterized at each site. Land-use data were converted to estimates of impervious surface cover and used as the annual measure of urbanization. Correlation analysis (Spearman's rho) was used to identify a suite of nonredundant streamflow, habitat, and water-chemistry variables that were strongly associated (rho > 0.6) with impervious surface cover but not strongly related to elevation. BIO-ENV analysis (PRIMER ver 6.1, Plymouth, UK) was used to create subsets of eight urban-related environmental variables that described patterns in biological community structure. The strongest and most parsimonious subset of variables describing patterns in invertebrate community structure included high flood pulse count, lower bank capacity, and nutrients. Several other combinations of environmental variables resulted in competing subsets, but these subsets always included the three variables found in the most parsimonious list. This study found that patterns in invertebrate community structure from 2003 to 2005 in the Fountain Creek basin were associated with a variety of environmental characteristics influenced by urbanization. These patterns were explained by a combination of hydrologic, habitat, and water

  3. Test of the Fishbein and Ajzen models as predictors of health care workers' glove use.

    Science.gov (United States)

    Levin, P F

    1999-08-01

    The aim of this study was to identify predictors of health care workers' glove use when there is a potential for blood exposure. The study hypothesis was that an extension of the theory of planned behavior would explain more of the variance in glove use behavior than the theory of reasoned action or the theory of planned behavior. A random sample of nurses and laboratory workers (N = 527) completed a 26-item questionnaire with acceptable content validity and reliability estimates. Using structural equation modeling techniques, intention, attitude, and perceived risk were significant predictors of behavior. Perceived control and attitude were the significant determinants of intention. The theory of reasoned action was the most parsimonious model, explaining 70% of the variance in glove use behavior. The theory of planned behavior extension was a viable model for studying glove use behavior and for reducing workers' risks of exposure to bloodborne diseases.

  4. Structural Modeling Using "Scanning and Mapping" Technique

    Science.gov (United States)

    Amos, Courtney L.; Dash, Gerald S.; Shen, J. Y.; Ferguson, Frederick; Noga, Donald F. (Technical Monitor)

    2000-01-01

    Supported by NASA Glenn Center, we are in the process of developing a structural damage diagnostic and monitoring system for rocket engines, which consists of five modules: Structural Modeling, Measurement Data Pre-Processor, Structural System Identification, Damage Detection Criterion, and Computer Visualization. The function of the system is to detect damage as it is incurred by the engine structures. The scientific principle used to identify damage is to utilize the changes in the vibrational properties between the pre-damaged and post-damaged structures. The vibrational properties of the pre-damaged structure can be obtained from an analytic computer model of the structure. Thus, as the first stage of the whole research plan, we currently focus on the first module - Structural Modeling. Three computer software packages have been selected and will be integrated for this purpose: PhotoModeler-Pro, AutoCAD-R14, and MSC/NASTRAN. AutoCAD is the most popular PC-CAD system currently available on the market. For our purpose, it acts as an interface to generate structural models of any particular engine parts or assembly, which are then passed to MSC/NASTRAN for extracting structural dynamic properties. Although AutoCAD is a powerful structural modeling tool, the complexity of engine components requires a further improvement in structural modeling techniques. We are working on a relatively new, so-called "scanning and mapping" technique. The basic idea is to produce a full and accurate 3D structural model by tracing multiple overlapping photographs taken from different angles. There is no need to input point positions, angles, distances or axes. Photographs can be taken by any type of camera with different lenses. With the integration of such a modeling technique, the capability of structural modeling will be enhanced. The prototypes of any complex structural components will be produced by PhotoModeler first based on existing similar

  5. Hydrologic behaviour of the Lake of Monate (Italy): a parsimonious modelling strategy

    Science.gov (United States)

    Tomesani, Giulia; Soligno, Irene; Castellarin, Attilio; Baratti, Emanuele; Cervi, Federico; Montanari, Alberto

    2016-04-01

    The Lake of Monate (province of Varese, Northern Italy) is a unique example of an ecosystem in equilibrium. The lake water quality is deemed excellent notwithstanding the intensive agricultural cultivation, industrial assets and mining activities characterising the surrounding areas. The lake is a genuine tourist attraction and is the only swimmable water body in the province of Varese, which counts several natural lakes. The Lake of Monate has no tributary, and its overall watershed area is equal to ca. 6.6 km2 including the lake surface (i.e. 2.6 km2); of the remaining ca. 4.0 km2, 3.3 km2 belong to the topographical watershed, while 0.7 km2 belong to the underground watershed. The latter is larger than the topographical watershed due to the presence of moraine formations on top of the limestone bedrock. The local administration recently promoted an intensive environmental monitoring campaign that aims to reach a better understanding of the hydrology of the lake and the subsurface water fluxes. The monitoring campaign started in October 2013 and, as a result, several meteoclimatic and hydrologic data have been collected so far at daily and hourly timescales. Our study focuses on a preliminary representation of the hydrological behaviour of the lake through a modified version of HyMOD, a conceptual 5-parameter lumped rainfall-runoff model based on the probability-distributed soil storage capacity. The modified model is a semi-distributed application of HyMOD that uses the same five parameters of the original version and simulates the rainfall-runoff transformation for the whole lake watershed at daily time scale in terms of: direct precipitation on, and evaporation from, the lake surface; overall lake inflow, separating the runoff component (topographic watershed) from the groundwater component (overall watershed); lake water-level oscillation; and streamflow at the lake outlet. We used the first year of hydrometeorological observations as calibration data and
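    A minimal sketch of the probability-distributed storage idea behind HyMOD, with one quick and one slow linear reservoir standing in for HyMOD's usual routing cascade; the function and parameter names (c_max, b_exp, alpha, k_quick, k_slow) and all values are illustrative assumptions, not the authors' modified semi-distributed model:

```python
def soil_moisture_step(s, p, pet, c_max, b_exp):
    """Probability-distributed storage: capacities c across the catchment
    follow F(c) = 1 - (1 - c/c_max)**b_exp; s is the mean soil storage."""
    s_max = c_max / (b_exp + 1.0)
    c_prev = c_max * (1.0 - max(1.0 - (b_exp + 1.0) * s / c_max, 0.0) ** (1.0 / (b_exp + 1.0)))
    er1 = max(p - c_max + c_prev, 0.0)            # rain on the saturated fraction
    p_rem = p - er1
    frac = min((c_prev + p_rem) / c_max, 1.0)
    s_new = s_max * (1.0 - (1.0 - frac) ** (b_exp + 1.0))
    er2 = max(p_rem - (s_new - s), 0.0)           # excess from newly saturated area
    evap = pet * s_new / s_max                    # evaporation scaled by wetness
    return max(s_new - evap, 0.0), er1 + er2

def hymod_like_step(state, p, pet, c_max, b_exp, alpha, k_quick, k_slow):
    s, s_quick, s_slow = state
    s, eff = soil_moisture_step(s, p, pet, c_max, b_exp)
    s_quick += alpha * eff                        # quick-flow linear reservoir
    s_slow += (1.0 - alpha) * eff                 # slow groundwater reservoir
    q = k_quick * s_quick + k_slow * s_slow
    return (s, (1.0 - k_quick) * s_quick, (1.0 - k_slow) * s_slow), q

state, flows = (0.0, 0.0, 0.0), []
for p_t, pet_t in [(5.0, 2.0), (30.0, 1.0), (0.0, 3.0), (10.0, 2.0)]:
    state, q_t = hymod_like_step(state, p_t, pet_t, c_max=300.0, b_exp=0.5,
                                 alpha=0.6, k_quick=0.5, k_slow=0.05)
    flows.append(round(q_t, 3))
print(flows)  # daily outflow from the two routing reservoirs
```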

  6. Multi-Trait analysis of growth traits: fitting reduced rank models using principal components for Simmental beef cattle

    Directory of Open Access Journals (Sweden)

    Rodrigo Reis Mota

    2016-09-01

    The aim of this research was to evaluate the dimensional reduction of additive direct genetic covariance matrices in genetic evaluations of growth traits (range 100-730 days) in Simmental cattle using principal components, as well as to estimate (co)variance components and genetic parameters. Principal component analyses were conducted for five different models: one full and four reduced-rank models. Models were compared using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Variance components and genetic parameters were estimated by restricted maximum likelihood (REML). The AIC and BIC values were similar among models. This indicated that parsimonious models could be used in genetic evaluations in Simmental cattle. The first principal component explained more than 96% of total variance in both models. Heritability estimates were higher for advanced ages and varied from 0.05 (100 days) to 0.30 (730 days). Genetic correlation estimates were similar in both models regardless of magnitude and number of principal components. The first principal component was sufficient to explain almost all genetic variance. Furthermore, genetic parameter similarities and lower computational requirements allowed for parsimonious models in genetic evaluations of growth traits in Simmental cattle.
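    A toy illustration of the rank-reduction step: eigendecompose a genetic covariance matrix, report the variance explained by each principal component, and rebuild a reduced-rank approximation. The matrix G is invented for illustration, not the Simmental estimates:

```python
import numpy as np

G = np.array([[1.0, 0.8, 0.6],
              [0.8, 1.2, 0.9],
              [0.6, 0.9, 1.5]])      # toy 3x3 additive genetic covariance matrix

eigval, eigvec = np.linalg.eigh(G)   # symmetric eigendecomposition (ascending)
order = np.argsort(eigval)[::-1]     # reorder to descending eigenvalues
eigval, eigvec = eigval[order], eigvec[:, order]

print("variance explained per PC:", np.round(eigval / eigval.sum(), 3))

k = 1                                # retain the leading principal component
G_reduced = (eigvec[:, :k] * eigval[:k]) @ eigvec[:, :k].T
print("rank-1 approximation:")
print(np.round(G_reduced, 3))
```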

  7. Tree-Structured Digital Organisms Model

    Science.gov (United States)

    Suzuki, Teruhiko; Nobesawa, Shiho; Tahara, Ikuo

    Tierra and Avida are well-known models of digital organisms. They describe a life process as a sequence of computation codes. A linear sequence model may not be the only way to describe a digital organism, though it is very simple for a computer-based model. Thus we propose a new digital organism model based on a tree structure, which is rather similar to genetic programming. In our model, a life process is a combination of various functions, much as life in the real world is. This implies that our model can easily describe the hierarchical structure of life, and it can simulate evolutionary computation through the mutual interaction of functions. We verified by simulation that our model can be regarded as a digital organism model according to its definitions. Our model even succeeded in creating species such as viruses and parasites.

  8. Further Examining Berry's Model: The Applicability of Latent Profile Analysis to Acculturation

    Science.gov (United States)

    Fox, Rina S.; Merz, Erin L.; Solórzano, Martha T.; Roesch, Scott C.

    2013-01-01

    This study used latent profile analysis (LPA) to identify acculturation profiles. A three-profile solution fit the data best, and comparisons on demographic and psychosocial outcomes as a function of profile yielded expected results. The findings support using LPA as a parsimonious way to model acculturation without anticipating profiles in…

  9. Structured statistical models of inductive reasoning.

    Science.gov (United States)

    Kemp, Charles; Tenenbaum, Joshua B

    2009-01-01

    Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge and should explain how different kinds of knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. This article presents a Bayesian framework that attempts to meet both goals and describes four applications of the framework: a taxonomic model, a spatial model, a threshold model, and a causal model. Each model makes probabilistic inferences about the extensions of novel properties, but the priors for the four models are defined over different kinds of structures that capture different relationships between the categories in a domain. The framework therefore shows how statistical inference can operate over structured background knowledge, and the authors argue that this interaction between structure and statistics is critical for explaining the power and flexibility of human reasoning.

  10. Complexity-aware simple modeling.

    Science.gov (United States)

    Gómez-Schiavon, Mariana; El-Samad, Hana

    2018-02-26

    Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach-complexity-aware simple modeling-that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Inferring the Clonal Structure of Viral Populations from Time Series Sequencing.

    Directory of Open Access Journals (Sweden)

    Donatien F Chedom

    2015-11-01

    RNA virus populations will undergo processes of mutation and selection resulting in a mixed population of viral particles. High throughput sequencing of a viral population consequently contains a mixed signal of the underlying clones. We would like to identify the underlying evolutionary structures. We utilize two sources of information to attempt this: within-segment linkage information and mutation prevalence. We demonstrate that clone haplotypes, their prevalence, and maximum parsimony reticulate evolutionary structures can be identified, although the solutions may not be unique, even for complete sets of information. This is applied to a chain of influenza infection, where we infer evolutionary structures, including reassortment, and demonstrate some of the difficulties of interpretation that arise from deep sequencing due to artifacts such as template switching during PCR amplification.

  12. A Rational Model of the Closed-End Fund Discount

    OpenAIRE

    Jonathan Berk; Richard Stanton

    2004-01-01

    The discount on closed-end funds is widely accepted as proof of investor irrationality. We show, however, that a parsimonious rational model can generate a discount that exhibits many of the characteristics observed in practice. The only required features of the model are that managers have (imperfectly observable) ability to generate excess returns; they sign long-term contracts guaranteeing them a fee each year equal to a fixed fraction of assets under management; and they can leave to earn ...

  13. The Protein Model Portal--a comprehensive resource for protein structure and model information.

    Science.gov (United States)

    Haas, Juergen; Roth, Steven; Arnold, Konstantin; Kiefer, Florian; Schmidt, Tobias; Bordoli, Lorenza; Schwede, Torsten

    2013-01-01

    The Protein Model Portal (PMP) has been developed to foster effective use of 3D molecular models in biomedical research by providing convenient and comprehensive access to structural information for proteins. Both experimental structures and theoretical models for a given protein can be searched simultaneously and analyzed for structural variability. By providing a comprehensive view on structural information, PMP offers the opportunity to apply consistent assessment and validation criteria to the complete set of structural models available for proteins. PMP is an open project so that new methods developed by the community can contribute to PMP, for example, new modeling servers for creating homology models and model quality estimation servers for model validation. The accuracy of participating modeling servers is continuously evaluated by the Continuous Automated Model EvaluatiOn (CAMEO) project. The PMP offers a unique interface to visualize structural coverage of a protein combining both theoretical models and experimental structures, allowing straightforward assessment of the model quality and hence their utility. The portal is updated regularly and actively developed to include latest methods in the field of computational structural biology. Database URL: http://www.proteinmodelportal.org.

  15. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus. Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimation for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a

  16. A simple model of bipartite cooperation for ecological and organizational networks.

    Science.gov (United States)

    Saavedra, Serguei; Reed-Tsochas, Felix; Uzzi, Brian

    2009-01-22

    In theoretical ecology, simple stochastic models that satisfy two basic conditions about the distribution of niche values and feeding ranges have proved successful in reproducing the overall structural properties of real food webs, using species richness and connectance as the only input parameters. Recently, more detailed models have incorporated higher levels of constraint in order to reproduce the actual links observed in real food webs. Here, building on previous stochastic models of consumer-resource interactions between species, we propose a highly parsimonious model that can reproduce the overall bipartite structure of cooperative partner-partner interactions, as exemplified by plant-animal mutualistic networks. Our stochastic model of bipartite cooperation uses simple specialization and interaction rules, and only requires three empirical input parameters. We test the bipartite cooperation model on ten large pollination data sets that have been compiled in the literature, and find that it successfully replicates the degree distribution, nestedness and modularity of the empirical networks. These properties are regarded as key to understanding cooperation in mutualistic networks. We also apply our model to an extensive data set of two classes of company engaged in joint production in the garment industry. Using the same metrics, we find that the network of manufacturer-contractor interactions exhibits similar structural patterns to plant-animal pollination networks. This surprising correspondence between ecological and organizational networks suggests that the simple rules of cooperation that generate bipartite networks may be generic, and could prove relevant in many different domains, ranging from biological systems to human society.
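    A hedged sketch in the spirit of such stochastic bipartite models: each species receives a niche value, each animal an interaction range, and links form where a plant's niche falls within an animal's range. The rules and parameters are simplified assumptions for illustration, not the authors' exact model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_plants, n_animals = 20, 30

plant_niche = rng.uniform(size=n_plants)
animal_niche = rng.uniform(size=n_animals)
animal_range = 0.3 * rng.uniform(size=n_animals)  # assumed specialization parameter

# Animal j interacts with plant i if the plant's niche lies within its range.
adj = np.abs(plant_niche[:, None] - animal_niche[None, :]) < animal_range[None, :]

print("connectance:", round(float(adj.mean()), 3))
print("plant degree distribution:", np.bincount(adj.sum(axis=1)))
print("animal degree distribution:", np.bincount(adj.sum(axis=0)))
```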

  17. Evapotranspiration estimation using a parameter-parsimonious energy partition model over Amazon basin

    Science.gov (United States)

    Xu, D.; Agee, E.; Wang, J.; Ivanov, V. Y.

    2017-12-01

    The increased frequency and severity of droughts in the Amazon region have emphasized the potential vulnerability of the rainforests to heat- and drought-induced stresses, highlighting the need to reduce the uncertainty in estimates of regional evapotranspiration (ET) and to quantify the resilience of the forest. Ground-based observations for estimating ET are resource intensive, making methods based on remotely sensed observations an attractive alternative. Several methodologies have been developed to estimate ET from satellite data, but challenges remain in model parameterization, and limited satellite coverage reduces their utility for monitoring biodiverse regions. In this work, we apply a novel surface energy partition method (Maximum Entropy Production; MEP), based on Bayesian probability theory and nonequilibrium thermodynamics, to derive ET time series from satellite data for the Amazon basin. For a large, sparsely monitored region such as the Amazon, this approach has the advantage of using only single-level measurements of net radiation, temperature, and specific humidity. Furthermore, it is not sensitive to the uncertainty of the input data and model parameters. In this first application of MEP theory to a tropical forest biome, we assess its performance at various spatiotemporal scales against diverse field data sets. Specifically, the objective of this work is to test this method using eddy flux data for several locations across Amazonia at sub-daily, monthly, and annual scales and to compare the new estimates with those obtained using traditional methods. Analyses of the derived ET time series will contribute to reducing the current knowledge gap surrounding the much debated response of the Amazon Basin region to droughts and offer a template for monitoring long-term changes in the global hydrologic cycle due to anthropogenic and natural causes.
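    For orientation, the surface energy balance that any such partition method must respect is the standard identity below; the MEP-specific step, which selects the partition from single-level inputs by maximizing a dissipation (entropy production) function, is omitted here:

```latex
% R_n: net radiation, H: sensible heat flux, \lambda E: latent heat flux
% (\lambda the latent heat of vaporization, E the evapotranspiration rate),
% G: ground heat flux.
R_n = H + \lambda E + G
```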

  18. Metallic glasses: structural models

    International Nuclear Information System (INIS)

    Nassif, E.

    1984-01-01

    The aim of this work is to give a summary of the attempts made up to the present to describe by structural models the atomic arrangement in metallic glasses, showing also why the structure factors and atomic distribution functions cannot always be experimentally determined with reasonable accuracy. (M.W.O.) [pt

  19. Carbody structural lightweighting based on implicit parameterized model

    Science.gov (United States)

    Chen, Xin; Ma, Fangwu; Wang, Dengfeng; Xie, Chen

    2014-05-01

    Most recent research on carbody lightweighting has focused on substitute materials and new processing technologies rather than structures. However, new materials and processing techniques inevitably lead to higher costs. Also, material substitution and processing lightweighting have to be realized through body structural profiles and locations. In the huge conventional workload of lightweight optimization, model modifications involve heavy manual work and always lead to a large number of iterative calculations. As a new technique in carbody lightweighting, implicit parameterization is used in this paper to optimize the carbody structure and improve the material utilization rate. Implicit parameterized structural modeling enables automatic modification and rapid multidisciplinary design optimization (MDO) of the carbody structure, which is impossible with the traditional, non-parameterized structural finite element method (FEM). The parameterized SFE structural model is built in accordance with the car structural FE model in the concept development stage, and it is validated against structural performance data. The validated SFE structural parameterized model can then be used to generate FE models rapidly and automatically and to evaluate different design variable groups in the integrated MDO loop. The lightweighting result for the body-in-white (BIW) after the optimization rounds reveals that the implicit parameterized model makes automatic MDO feasible and can significantly improve the computational efficiency of carbody structural lightweighting. This paper proposes an integrated method of implicit parameterized modeling and MDO, which has obvious practical advantages and industrial significance for carbody structural lightweighting design.

  20. Network structure exploration via Bayesian nonparametric models

    International Nuclear Information System (INIS)

    Chen, Y; Wang, X L; Xiang, X; Tang, B Z; Bu, J Z

    2015-01-01

    Complex networks provide a powerful mathematical representation of complex systems in nature and society. To understand complex networks, it is crucial to explore their internal structures, also called structural regularities. The task of network structure exploration is to determine how many groups there are in a complex network and how to group the nodes of the network. Most existing structure exploration methods need to specify either a group number or a certain type of structure when they are applied to a network. In the real world, however, the group number and the type of structure that a network has are usually unknown in advance. To explore structural regularities in complex networks automatically, without any prior knowledge of the group number or the type of structure, we extend a probabilistic mixture model that can handle networks with any type of structure but needs to specify a group number, using Bayesian nonparametric theory. We also propose a novel Bayesian nonparametric model, called the Bayesian nonparametric mixture (BNPM) model. Experiments conducted on a large number of networks with different structures show that the BNPM model is able to explore structural regularities in networks automatically with stable, state-of-the-art performance. (paper)

  1. Genetic Programming for Automatic Hydrological Modelling

    Science.gov (United States)

    Chadalawada, Jayashree; Babovic, Vladan

    2017-04-01

    One of the recent challenges for the hydrologic research community is the need for the development of coupled systems that involve the integration of hydrologic, atmospheric and socio-economic relationships. This poses a requirement for novel modelling frameworks that can accurately represent complex systems, given the limited understanding of underlying processes, the increasing volume of data and high levels of uncertainty. Existing hydrological models vary in terms of conceptualization and process representation, and each is best suited to capture the environmental dynamics of a particular hydrological system. Data-driven approaches can be used to integrate alternative process hypotheses in order to achieve a unified theory at catchment scale. The key steps in the implementation of an integrated modelling framework informed by both prior understanding and data include: the choice of technique for the induction of knowledge from data; the identification of alternative structural hypotheses; the definition of rules and constraints for the meaningful, intelligent combination of model component hypotheses; and the definition of evaluation metrics. This study aims at defining a Genetic Programming based modelling framework that tests different conceptual model constructs against a wide range of objective functions and evolves accurate and parsimonious models that capture the dominant hydrological processes at catchment scale. In this paper, GP initializes the evolutionary process using the modelling decisions inspired by the Superflex framework [Fenicia et al., 2011] and automatically combines them into model structures that are scrutinized against observed data using statistical, hydrological and flow-duration-curve based performance metrics. The collaboration between data-driven and physical, conceptual modelling paradigms improves the ability to model and manage hydrologic systems. Fenicia, F., D. Kavetski, and H. H. Savenije (2011), Elements of a flexible approach

  2. Modeling time-to-event (survival) data using classification tree analysis.

    Science.gov (United States)

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (ie, easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.
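    As a point of reference for the comparison above, a minimal Cox fit with the lifelines library and its bundled Rossi recidivism dataset (illustrative data, not the paper's); CTA itself is implemented in dedicated ODA software, so no open-source call is shown for it:

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                  # columns include week (time) and arrest (event)
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
print(cph.concordance_index_)      # discrimination of the fitted Cox model
cph.print_summary()                # hazard ratios and test statistics
```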

  3. Working Memory and Decision-Making in a Frontoparietal Circuit Model.

    Science.gov (United States)

    Murray, John D; Jaramillo, Jorge; Wang, Xiao-Jing

    2017-12-13

    Working memory (WM) and decision-making (DM) are fundamental cognitive functions involving a distributed interacting network of brain areas, with the posterior parietal cortex (PPC) and prefrontal cortex (PFC) at the core. However, the shared and distinct roles of these areas and the nature of their coordination in cognitive function remain poorly understood. Biophysically based computational models of cortical circuits have provided insights into the mechanisms supporting these functions, yet they have primarily focused on the local microcircuit level, raising questions about the principles for distributed cognitive computation in multiregional networks. To examine these issues, we developed a distributed circuit model of two reciprocally interacting modules representing PPC and PFC circuits. The circuit architecture includes hierarchical differences in local recurrent structure and implements reciprocal long-range projections. This parsimonious model captures a range of behavioral and neuronal features of frontoparietal circuits across multiple WM and DM paradigms. In the context of WM, both areas exhibit persistent activity, but, in response to intervening distractors, PPC transiently encodes distractors while PFC filters distractors and supports WM robustness. With regard to DM, the PPC module generates graded representations of accumulated evidence supporting target selection, while the PFC module generates more categorical responses related to action or choice. These findings suggest computational principles for distributed, hierarchical processing in cortex during cognitive function and provide a framework for extension to multiregional models. SIGNIFICANCE STATEMENT Working memory and decision-making are fundamental "building blocks" of cognition, and deficits in these functions are associated with neuropsychiatric disorders such as schizophrenia. These cognitive functions engage distributed networks with prefrontal cortex (PFC) and posterior parietal

  4. Structural Equation Model Trees

    Science.gov (United States)

    Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman

    2013-01-01

    In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…

  5. New insights into the endophenotypic status of cognition in bipolar disorder: genetic modelling study of twins and siblings.

    Science.gov (United States)

    Georgiades, Anna; Rijsdijk, Fruhling; Kane, Fergus; Rebollo-Mesa, Irene; Kalidindi, Sridevi; Schulze, Katja K; Stahl, Daniel; Walshe, Muriel; Sahakian, Barbara J; McDonald, Colm; Hall, Mei-Hua; Murray, Robin M; Kravariti, Eugenia

    2016-06-01

    Twin studies have lacked the statistical power to apply advanced genetic modelling techniques to the search for cognitive endophenotypes for bipolar disorder. This study aimed to quantify the shared genetic variability between bipolar disorder and cognitive measures. Structural equation modelling was performed on cognitive data collected from 331 twins/siblings of varying genetic relatedness, disease status and concordance for bipolar disorder. Using a parsimonious AE model, verbal episodic and spatial working memory showed statistically significant genetic correlations with bipolar disorder (rg = |0.23|-|0.27|), which lost statistical significance after covarying for affective symptoms. Using an ACE model, IQ and visual-spatial learning showed statistically significant genetic correlations with bipolar disorder (rg = |0.51|-|1.00|), which remained significant after covarying for affective symptoms. Verbal episodic and spatial working memory capture a modest fraction of the bipolar diathesis. IQ and visual-spatial learning may tap into genetic substrates of non-affective symptomatology in bipolar disorder. © The Royal College of Psychiatrists 2016.

  6. A Teaching Model for Truss Structures

    Science.gov (United States)

    Bigoni, Davide; Dal Corso, Francesco; Misseroni, Diego; Tommasini, Mirko

    2012-01-01

    A classroom demonstration model has been designed, machined and successfully tested in different learning environments to facilitate understanding of the mechanics of truss structures, in which struts are subject to purely axial load and deformation. Gaining confidence with these structures is crucial for the development of lattice models, which…

  7. Exploring RNA structure by integrative molecular modelling

    DEFF Research Database (Denmark)

    Masquida, Benoît; Beckert, Bertrand; Jossinet, Fabrice

    2010-01-01

    RNA molecular modelling is adequate to rapidly tackle the structure of RNA molecules. With new structured RNAs constituting a central class of cellular regulators discovered every year, the need for swift and reliable modelling methods is more crucial than ever. The pragmatic method based on interactive all-atom molecular modelling relies on the observation that specific structural motifs are recurrently found in RNA sequences. Once identified by a combination of comparative sequence analysis and biochemical data, the motifs composing the secondary structure of a given RNA can be extruded...

  8. Antibody structural modeling with prediction of immunoglobulin structure (PIGS)

    KAUST Repository

    Marcatili, Paolo; Olimpieri, Pier Paolo; Chailyan, Anna; Tramontano, Anna

    2014-01-01

    of antibodies with a very satisfactory accuracy. The strategy is completely automated and extremely fast, requiring only a few minutes (~10 min on average) to build a structural model of an antibody. It is based on the concept of canonical structures of antibody

  9. Stability patterns for a size-structured population model and its stage-structured counterpart

    DEFF Research Database (Denmark)

    Zhang, Lai; Pedersen, Michael; Lin, Zhigui

    2015-01-01

    In this paper we compare a general size-structured population model, where a size-structured consumer feeds upon an unstructured resource, to its simplified stage-structured counterpart in terms of equilibrium stability. Stability of the size-structured model is understood in terms of an equivale...... to the population level....

  10. Avoidable errors in deposited macromolecular structures: an impediment to efficient data mining

    Directory of Open Access Journals (Sweden)

    Zbigniew Dauter

    2014-05-01

    Whereas the vast majority of the more than 85 000 crystal structures of macromolecules currently deposited in the Protein Data Bank are of high quality, some suffer from a variety of imperfections. Although this fact has been pointed out in the past, it is still worth periodic updates so that the metadata obtained by global analysis of the available crystal structures, as well as the utilization of the individual structures for tasks such as drug design, should be based on only the most reliable data. Here, selected abnormal deposited structures have been analysed based on the Bayesian reasoning that the correctness of a model must be judged against both the primary evidence and prior knowledge. These structures, as well as information gained from the corresponding publications (if available), have emphasized some of the most prevalent types of common problems. The errors are often perfect illustrations of the nature of human cognition, which is frequently influenced by preconceptions that may lead to fanciful results in the absence of proper validation. Common errors can be traced to negligence and a lack of rigorous verification of the models against electron density, creation of non-parsimonious models, generation of improbable numbers, application of incorrect symmetry, illogical presentation of the results, or violation of the rules of chemistry and physics. Paying more attention to such problems, not only in the final validation stages but during the structure-determination process as well, is necessary not only in order to maintain the highest possible quality of the structural repositories and databases but most of all to provide a solid basis for subsequent studies, including large-scale data-mining projects. For many scientists PDB deposition is a rather infrequent event, so the need for proper training and supervision is emphasized, as well as the need for constant alertness of reason and critical judgment as absolutely

  11. Model reduction in integrated controls-structures design

    Science.gov (United States)

    Maghami, Peiman G.

    1993-01-01

    It is the objective of this paper to present a model reduction technique developed for the integrated controls-structures design of flexible structures. Integrated controls-structures design problems are typically posed as nonlinear mathematical programming problems, where the design variables consist of both structural and control parameters. In the solution process, both structural and control design variables are constantly changing; therefore, the dynamic characteristics of the structure are also changing. This presents a problem in obtaining a reduced-order model for active control design and analysis which will be valid for all design points within the design space. In other words, the frequency and number of the significant modes of the structure (modes that should be included) may vary considerably throughout the design process. This is also true as the locations and/or masses of the sensors and actuators change. Moreover, since the number of design evaluations in the integrated design process could easily run into thousands, any feasible order-reduction method should not require model reduction analysis at every design iteration. In this paper a novel and efficient technique for model reduction in the integrated controls-structures design process, which addresses these issues, is presented.
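    A small sketch of one common order-reduction step alluded to above, modal truncation: solve the generalized eigenproblem for a toy spring-mass chain and keep only the lowest-frequency modes. The matrices are illustrative, not a flexible-spacecraft model:

```python
import numpy as np
from scipy.linalg import eigh

n = 6
M = np.eye(n)                                            # unit masses
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # spring-mass chain stiffness

w2, Phi = eigh(K, M)             # generalized problem K*phi = w^2 * M*phi (ascending)
freqs = np.sqrt(w2)

k_keep = 2                       # retain the two lowest modes
Phi_r = Phi[:, :k_keep]
M_r = Phi_r.T @ M @ Phi_r        # reduced mass matrix (identity by normalization)
K_r = Phi_r.T @ K @ Phi_r        # reduced stiffness matrix (diagonal of w^2)
print("kept natural frequencies:", np.round(freqs[:k_keep], 3))
```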

  12. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  13. The US business cycle: power law scaling for interacting units with complex internal structure

    Science.gov (United States)

    Ormerod, Paul

    2002-11-01

    In the social sciences, there is increasing evidence of the existence of power law distributions. The distribution of recessions in capitalist economies has recently been shown to follow such a distribution. The preferred explanation for this is self-organised criticality. Gene Stanley and colleagues propose an alternative, namely that power law scaling can arise from the interplay between random multiplicative growth and the complex structure of the units composing the system. This paper offers a parsimonious model of the US business cycle based on similar principles. The business cycle, along with long-term growth, is one of the two features which distinguishes capitalism from all previously existing societies. Yet, economics lacks a satisfactory theory of the cycle. The source of cycles is posited in economic theory to be a series of random shocks which are external to the system. In this model, the cycle is an internal feature of the system, arising from the level of industrial concentration of the agents and the interactions between them. The model-in contrast to existing economic theories of the cycle-accounts for the key features of output growth in the US business cycle in the 20th century.
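    A textbook illustration of the mechanism attributed above to Stanley and colleagues: random multiplicative growth with a lower reflecting barrier generates a power-law tail. This is a generic sketch, not Ormerod's calibrated business-cycle model:

```python
import numpy as np

rng = np.random.default_rng(7)
n_units, n_steps = 20000, 2000
x = np.ones(n_units)
x_min = 0.2                            # lower barrier keeps the process stationary

for _ in range(n_steps):
    x *= np.exp(0.02 * rng.normal(size=n_units))  # multiplicative shocks
    x = np.maximum(x, x_min)                      # reflect at the barrier

tail = np.sort(x)[-200:]               # top 1% of units
alpha = 1.0 + len(tail) / np.log(tail / tail[0]).sum()   # rough Hill estimate
print("estimated power-law tail exponent:", round(float(alpha), 2))
```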

  14. Models of Thinking, Learning, and Teaching in Games

    OpenAIRE

    Colin Camerer; Teck Ho; Kuan Chong

    2003-01-01

    Noncooperative game theory combines strategic thinking, best-response, and mutual consistency of beliefs and choices (equilibrium). Hundreds of experiments show that in actual behavior these three forces are limited, even when subjects are highly motivated and analytically skilled (Camerer, 2003). The challenge is to create models that are as general, precise, and parsimonious as equilibrium, but which also use cognitive details to explain experimental evidence more accurately and to predict ...

  15. On modeling of structured multiphase mixtures

    International Nuclear Information System (INIS)

    Dobran, F.

    1987-01-01

    The usual modeling of multiphase mixtures involves a set of conservation and balance equations of mass, momentum, energy and entropy (the basic set) constructed by an averaging procedure or postulated. The averaged models are constructed by averaging, over space or time segments, the local macroscopic field equations of each phase, whereas the postulated models are usually motivated by the single phase multicomponent mixture models. In both situations, the resulting equations yield superimposed continua models and are closed by the constitutive equations which place restrictions on the possible material response during the motion and phase change. In modeling the structured multiphase mixtures, the modeling of intrinsic motion of grains or particles is accomplished by adjoining to the basic set of field equations the additional balance equations, thereby placing restrictions on the motion of phases only within the imposed extrinsic and intrinsic sources. The use of the additional balance equations has been primarily advocated in the postulatory theories of multiphase mixtures and are usually derived through very special assumptions of the material deformation. Nevertheless, the resulting mixture models can predict a wide variety of complex phenomena such as the Mohr-Coulomb yield criterion in granular media, Rayleigh bubble equation, wave dispersion and dilatancy. Fundamental to the construction of structured models of multiphase mixtures are the problems pertaining to the existence and number of additional balance equations to model the structural characteristics of a mixture. Utilizing a volume averaging procedure it is possible not only to derive the basic set of field equation discussed above, but also a very general set of additional balance equations for modeling of structural properties of the mixture

  16. An empirical comparison of alternate regime-switching models for electricity spot prices

    Energy Technology Data Exchange (ETDEWEB)

    Janczura, Joanna [Hugo Steinhaus Center, Institute of Mathematics and Computer Science, Wroclaw University of Technology, 50-370 Wroclaw (Poland); Weron, Rafal [Institute of Organization and Management, Wroclaw University of Technology, 50-370 Wroclaw (Poland)

    2010-09-15

    One of the most profound features of electricity spot prices are the price spikes. Markov regime-switching (MRS) models seem to be a natural candidate for modeling this spiky behavior. However, in the studies published so far, the goodness-of-fit of the proposed models has not been a major focus. While most of the models were elegant, their fit to empirical data has either been not examined thoroughly or the signs of a bad fit ignored. With this paper we want to fill the gap. We calibrate and test a range of MRS models in an attempt to find parsimonious specifications that not only address the main characteristics of electricity prices but are statistically sound as well. We find that the best structure is that of an independent spike 3-regime model with time-varying transition probabilities, heteroscedastic diffusion-type base regime dynamics and shifted spike regime distributions. Not only does it allow for a seasonal spike intensity throughout the year and consecutive spikes or price drops, which is consistent with market observations, but also exhibits the 'inverse leverage effect' reported in the literature for spot electricity prices. (author)
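    A hedged sketch of simulating such a regime-switching price process, simplified to two regimes (a mean-reverting base regime plus an independent spike regime); all parameter values are illustrative, not calibrated estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 1000
P = np.array([[0.97, 0.03],        # transition probabilities: base -> {base, spike}
              [0.60, 0.40]])       # spike -> {base, spike}: spikes are short-lived

mu, phi, sigma = 3.5, 0.8, 0.10    # AR(1) base regime for the log-price
spike_loc, spike_scale = 4.3, 0.3  # shifted spike-regime distribution

state, x = 0, mu
logp = np.empty(T)
states = np.empty(T, dtype=int)
for t in range(T):
    state = rng.choice(2, p=P[state])
    # In an "independent spike" MRS model the base process keeps evolving
    # latently even while a spike is observed:
    x = mu + phi * (x - mu) + sigma * rng.normal()
    logp[t] = x if state == 0 else spike_loc + spike_scale * abs(rng.normal())
    states[t] = state

prices = np.exp(logp)
print("time in spike regime:", round(float((states == 1).mean()), 3))
print("max/median price ratio:", round(float(prices.max() / np.median(prices)), 2))
```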

  18. Global model structures for ∗-modules

    DEFF Research Database (Denmark)

    Böhme, Benjamin

    2018-01-01

    We extend Schwede's work on the unstable global homotopy theory of orthogonal spaces and L-spaces to the category of ∗-modules (i.e., unstable S-modules). We prove a theorem which transports model structures and their properties from L-spaces to ∗-modules and show that the resulting global model structure for ∗-modules is monoidally Quillen equivalent to that of orthogonal spaces. As a consequence, there are induced Quillen equivalences between the associated model categories of monoids, which identify equivalent models for the global homotopy theory of A∞-spaces.

  19. Modeling protein structures: construction and their applications.

    Science.gov (United States)

    Ring, C S; Cohen, F E

    1993-06-01

    Although no general solution to the protein folding problem exists, the three-dimensional structures of proteins are being successfully predicted when experimentally derived constraints are used in conjunction with heuristic methods. In the case of interleukin-4, mutagenesis data and CD spectroscopy were instrumental in the accurate assignment of secondary structure. In addition, the tertiary structure was highly constrained by six cysteines separated by many residues that formed three disulfide bridges. Although the correct structure was a member of a short list of plausible structures, the "best" structure was the topological enantiomer of the experimentally determined conformation. For many proteases, other experimentally derived structures can be used as templates to identify the secondary structure elements. In a procedure called modeling by homology, the structure of a known protein is used as a scaffold to predict the structure of another related protein. This method has been used to model a serine and a cysteine protease that are important in the schistosome and malarial life cycles, respectively. The model structures were then used to identify putative small molecule enzyme inhibitors computationally. Experiments confirm that some of these nonpeptidic compounds are active at concentrations of less than 10 microM.

  20. Development of the tube bundle structure for fluid-structure interaction analysis model - Intermediate Report -

    International Nuclear Information System (INIS)

    Yoon, Kyung Ho; Kim, Jae Yong; Lee, Kang Hee; Lee, Young Ho; Kim, Hyung Kyu

    2009-07-01

    Tube bundle structures within a boiler or heat exchanger are subject to fluid-structure, thermal-structure and fluid-thermal-structure coupled boundary conditions. Under these complicated boundary conditions, fluid-structure interaction (FSI) occurs when fluid flow causes deformation of the structure. This deformation, in turn, changes the boundary conditions for the fluid flow. Structural analyses have traditionally been executed by dividing the problem into separate fluid and structural disciplines and analyzing each independently. However, the fluid dynamic forces affect the behavior of the structure, and the vibration amplitude of the structure feeds back into the fluid. In the FSI analysis model, the fluid and structure models were created separately, the FSI boundary condition was defined, and the two were then analyzed simultaneously in one domain. The analysis results were compared with experimental results to validate the analysis model. A flow-induced vibration test was executed with a single rod configuration. The vibration amplitudes of a fuel rod were measured by a laser vibrometer system in the x and y directions. The analysis results did not closely match the test data, but the trend was very similar to the test result. In the FSI coupled analysis case, the turbulence model was very important for the reliability and accuracy of the analysis model. Therefore, the analysis model requires further study

  1. Random-Effects Models for Meta-Analytic Structural Equation Modeling: Review, Issues, and Illustrations

    Science.gov (United States)

    Cheung, Mike W.-L.; Cheung, Shu Fai

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…

  2. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    Science.gov (United States)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
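
    The validation design described above can be illustrated with a toy sketch: a two-predictor empirical SWE model scored with grouped (leave-sites-out) cross-validation, so that held-out records come from sites the model never saw. The data below are synthetic assumptions standing in for the SNOTEL records.

```python
# Sketch of non-random, space-for-time validation: hold out whole sites so
# the model is scored on truly new locations. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(5)
n_sites, n_years = 50, 10
site = np.repeat(np.arange(n_sites), n_years)          # site id per record
tmean = rng.normal(-4, 3, n_sites)[site] + rng.normal(0, 1, site.size)
precip = rng.gamma(4, 100, site.size)                  # winter precipitation
swe = np.maximum(0, 0.6 * precip - 40 * tmean + rng.normal(0, 60, site.size))

X = np.column_stack([tmean, precip])
scores = cross_val_score(LinearRegression(), X, swe,
                         groups=site, cv=GroupKFold(n_splits=5), scoring='r2')
print("leave-sites-out R2 per fold:", scores.round(2))
```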

  3. MMM: A toolbox for integrative structure modeling.

    Science.gov (United States)

    Jeschke, Gunnar

    2018-01-01

    Structural characterization of proteins and their complexes may require integration of restraints from various experimental techniques. MMM (Multiscale Modeling of Macromolecules) is a Matlab-based open-source modeling toolbox for this purpose, with a particular emphasis on distance distribution restraints obtained from electron paramagnetic resonance experiments on spin-labelled proteins and nucleic acids, and their combination with atomistic structures of domains or whole protomers, small-angle scattering data, secondary structure information, homology information, and elastic network models. MMM not only integrates various types of restraints, but also various existing modeling tools, by providing a common graphical user interface to them. The types of restraints that can support such modeling and the available model types are illustrated by recent application examples. © 2017 The Protein Society.

  4. Track structure in biological models.

    Science.gov (United States)

    Curtis, S B

    1986-01-01

    High-energy heavy ions in the galactic cosmic radiation (HZE particles) may pose a special risk during long term manned space flights outside the sheltering confines of the earth's geomagnetic field. These particles are highly ionizing, and they and their nuclear secondaries can penetrate many centimeters of body tissue. The three dimensional patterns of ionizations they create as they lose energy are referred to as their track structure. Several models of biological action on mammalian cells attempt to treat track structure or related quantities in their formulation. The methods by which they do this are reviewed. The proximity function is introduced in connection with the theory of Dual Radiation Action (DRA). The ion-gamma kill (IGK) model introduces the radial energy-density distribution, which is a smooth function characterizing both the magnitude and extension of a charged particle track. The lethal, potentially lethal (LPL) model introduces lambda, the mean distance between relevant ion clusters or biochemical species along the track. Since very localized energy depositions (within approximately 10 nm) are emphasized, the proximity function as defined in the DRA model is not of utility in characterizing track structure in the LPL formulation.

  5. An evolving network model with community structure

    International Nuclear Information System (INIS)

    Li Chunguang; Maini, Philip K

    2005-01-01

    Many social and biological networks consist of communities: groups of nodes within which connections are dense, but between which connections are sparser. Recently, there has been considerable interest in designing algorithms for detecting community structures in real-world complex networks. In this paper, we propose an evolving network model which exhibits community structure. The network model is based on the inner-community preferential attachment and inter-community preferential attachment mechanisms. The degree distributions of this network model are analysed based on a mean-field method. Theoretical results and numerical simulations indicate that this network model has community structure and scale-free properties
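
    The growth mechanism is simple enough to sketch directly. The fragment below grows a network by attaching each new node preferentially within its own community, with occasional preferential links to other communities; the community count, growth length and mixing probability are illustrative assumptions, not the paper's settings.

```python
# Sketch of growth by inner-community and inter-community preferential
# attachment; parameter values are assumed.
import random
from collections import defaultdict

random.seed(1)
M = 4                       # number of communities
degree = defaultdict(int)   # node -> degree
members = {c: [] for c in range(M)}
edges = set()

def pref_pick(pool):
    """Pick a node from `pool` with probability proportional to its degree."""
    weights = [degree[v] + 1 for v in pool]   # +1 so new nodes can be chosen
    return random.choices(pool, weights=weights, k=1)[0]

node = 0
for c in range(M):          # seed each community with a connected pair
    members[c] = [node, node + 1]
    edges.add((node, node + 1))
    degree[node] += 1
    degree[node + 1] += 1
    node += 2

for _ in range(500):                    # grow the network one node at a time
    c = random.randrange(M)             # new node joins a random community
    members[c].append(node)
    inner = pref_pick(members[c])       # inner-community preferential link
    edges.add((node, inner)); degree[node] += 1; degree[inner] += 1
    if random.random() < 0.1:           # occasional inter-community link
        other = random.choice([k for k in range(M) if k != c])
        outer = pref_pick(members[other])
        edges.add((node, outer)); degree[node] += 1; degree[outer] += 1
    node += 1

print("nodes:", node, "edges:", len(edges))
```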

  6. Temporal structures in shell models

    DEFF Research Database (Denmark)

    Okkels, F.

    2001-01-01

    The intermittent dynamics of the turbulent Gledzer, Ohkitani, and Yamada shell-model is completely characterized by a single type of burstlike structure, which moves through the shells like a front. This temporal structure is described by the dynamics of the instantaneous configuration of the shell...

  7. Structuring very large domain models

    DEFF Research Database (Denmark)

    Störrle, Harald

    2010-01-01

    View/Viewpoint approaches like IEEE 1471-2000, or Kruchten's 4+1-view model, are used to structure software architectures at a high level of granularity. While research has focused on architectural languages and on consistency between multiple views, practical questions such as the structuring a...

  8. The feeding practices and structure questionnaire: construction and initial validation in a sample of Australian first-time mothers and their 2-year olds.

    Science.gov (United States)

    Jansen, Elena; Mallan, Kimberley M; Nicholson, Jan M; Daniels, Lynne A

    2014-06-04

    Early feeding practices lay the foundation for children's eating habits and weight gain. Questionnaires are available to assess parental feeding, but overlapping and inconsistent items, subscales and terminology limit conceptual clarity and between-study comparisons. Our aim was to consolidate a range of existing items into a parsimonious and conceptually robust questionnaire for assessing feeding practices with very young children. Following review, 10 factors were specified. Of these, 9 factors (40 items) showed acceptable model fit and internal reliability (Cronbach's α: 0.61-0.89). Four factors reflected non-responsive feeding practices: 'Distrust in Appetite', 'Reward for Behaviour', 'Reward for Eating', and 'Persuasive Feeding'. Five factors reflected structure of the meal environment and limits: 'Structured Meal Setting', 'Structured Meal Timing', 'Family Meal Setting', 'Overt Restriction' and 'Covert Restriction'. Feeding practices generally showed the expected pattern of associations with child eating behaviours but none with weight. The Feeding Practices and Structure Questionnaire (FPSQ) provides a new reliable and valid measure of parental feeding practices, specifically maternal responsiveness to children's hunger/satiety signals facilitated by routine and structure in feeding. Further validation in more diverse samples is required.

  9. Linking Metatraits of the Big Five to Well-Being and Ill-Being: Do Basic Psychological Needs Matter?

    Science.gov (United States)

    Simsek, Omer Faruk; Koydemir, Selda

    2013-01-01

    There is considerable evidence that two higher order factors underlie the Big-Five dimensions and that these two factors provide a parsimonious taxonomy. However, not much empirical evidence has been documented as to the extent to which these traits relate to certain psychological constructs. In this study, we tested a structural model to…

  10. Hybrid modeling of microbial exopolysaccharide (EPS) production: The case of Enterobacter A47.

    Science.gov (United States)

    Marques, Rodolfo; von Stosch, Moritz; Portela, Rui M C; Torres, Cristiana A V; Antunes, Sílvia; Freitas, Filomena; Reis, Maria A M; Oliveira, Rui

    2017-03-20

    Enterobacter A47 is a bacterium that produces high amounts of a fucose-rich exopolysaccharide (EPS) from the glycerol residue of the biodiesel industry. The fed-batch process is characterized by complex non-linear dynamics with highly viscous pseudo-plastic rheology due to the accumulation of EPS in the culture medium. In this paper, we study hybrid modeling as a methodology to increase the predictive power of models for EPS production optimization. We compare six hybrid structures that explore different levels of knowledge-based and machine-learning model components. Knowledge-based components consist of macroscopic material balances, Monod-type kinetics, cardinal temperature and pH (CTP) dependency and power-law viscosity models. Unknown dependencies are identified by a feedforward artificial neural network (ANN). A semiparametric identification scheme is applied, resorting to a data set of 13 independent fed-batch experiments. A parsimonious hybrid model was identified that describes the dynamics of the 13 experiments with the same parameterization. The final model is specific to Enterobacter A47 but can be easily extended to other microbial EPS processes. Copyright © 2017 Elsevier B.V. All rights reserved.
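
    The hybrid semiparametric structure can be sketched as follows: fed-batch material balances form the knowledge-based part, while a small feedforward ANN supplies correction factors for the Monod-type rates. The network weights below are random placeholders rather than identified parameters, and all process values are illustrative assumptions.

```python
# Sketch of a hybrid model: mechanistic fed-batch balances plus an ANN
# standing in for unknown rate dependencies. All values are assumed.
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(4, 2)) * 0.3, np.zeros(4)   # tiny 2-4-2 MLP
W2, b2 = rng.normal(size=(2, 4)) * 0.3, np.zeros(2)

def ann_factors(S, X):
    """ANN correction factors for growth (mu) and EPS production (qp)."""
    h = np.tanh(W1 @ np.array([S / 50.0, X / 10.0]) + b1)   # scaled inputs
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))             # factors in (0, 1)

S, X, P, V = 20.0, 0.1, 0.0, 1.0            # substrate, biomass, EPS, volume
F, S_in, dt = 0.01, 100.0, 0.05             # feed rate, feed conc., time step
mu_max, qp_max, Ks, Yxs = 0.3, 0.1, 2.0, 0.5

for _ in range(2000):                       # Euler integration of the balances
    fmu, fqp = ann_factors(S, X)
    mu = mu_max * fmu * S / (Ks + S)        # Monod kinetics x learned factor
    qp = qp_max * fqp * S / (Ks + S)
    D = F / V                               # dilution by the feed
    X += dt * (mu * X - D * X)
    P += dt * (qp * X - D * P)
    S += dt * (D * (S_in - S) - (mu / Yxs) * X)
    S = max(S, 0.0)
    V += dt * F

print(f"final biomass {X:.2f} g/L, EPS {P:.2f} g/L")
```

    In an identification setting, the ANN weights would be fitted to the fed-batch data while the balance equations are kept fixed, which is what keeps the overall model parsimonious.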

  11. Probabilistic Modeling of Timber Structures

    DEFF Research Database (Denmark)

    Köhler, J.D.; Sørensen, John Dalsgaard; Faber, Michael Havbro

    2005-01-01

    The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) and of the COST action E24 'Reliability of Timber Structures'. The present...... proposal is based on discussions and comments from participants of the COST E24 action and the members of the JCSS. The paper contains a description of the basic reference properties for timber strength parameters and ultimate limit state equations for components and connections. The recommended...

  12. 3D-DART: a DNA structure modelling server

    NARCIS (Netherlands)

    van Dijk, M.; Bonvin, A.M.J.J.

    2009-01-01

    There is a growing interest in structural studies of DNA by both experimental and computational approaches. Often, 3D-structural models of DNA are required, for instance, to serve as templates for homology modeling, as starting structures for macro-molecular docking or as scaffold for NMR structure

  13. Meta-analysis a structural equation modeling approach

    CERN Document Server

    Cheung, Mike W-L

    2015-01-01

    Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo

  14. Remote sensing approach to structural modelling

    International Nuclear Information System (INIS)

    El Ghawaby, M.A.

    1989-01-01

    Remote sensing techniques are quite dependable tools in investigating geologic problems, especially those related to structural aspects. Landsat imagery provides discrimination between rock units, detection of large-scale structures such as folds and faults, as well as small-scale fabric elements such as foliation and banding. In order to fulfill the aim of geologic application of remote sensing, some essential survey maps should be produced from the images prior to the structural interpretation: land-use, land-form, drainage pattern, lithological unit and structural lineament maps. Afterwards, field verification should lead to interpretation of a comprehensive structural model of the study area to be applied to the target problem. To deduce such a model, there are two ways of analysis the interpreter may go through: the direct and the indirect methods. The direct one is needed in cases where the resources or the targets are controlled by an obvious or exposed structural element or pattern. The indirect way is necessary for areas where the target is governed by a complicated structural pattern. Some case histories of structural modelling methods applied successfully for exploration of radioactive minerals, iron deposits and groundwater aquifers in Egypt are presented. The progress in imagery enhancement and the integration of remote sensing data with other geophysical and geochemical data allow a geologic interpretation to be carried out that is better than that achieved with either of the individual data sets. 9 refs

  15. A first course in structural equation modeling

    CERN Document Server

    Raykov, Tenko

    2012-01-01

    In this book, authors Tenko Raykov and George A. Marcoulides introduce students to the basics of structural equation modeling (SEM) through a conceptual, nonmathematical approach. For ease of understanding, the few mathematical formulas presented are used in a conceptual or illustrative manner, rather than a computational one. Featuring examples from EQS, LISREL, and Mplus, A First Course in Structural Equation Modeling is an excellent beginner's guide to learning how to set up input files to fit the most commonly used types of structural equation models with these programs. The basic ideas and methods for conducting SEM are independent of any particular software. Highlights of the Second Edition include: Review of latent change (growth) analysis models at an introductory level; Coverage of the popular Mplus program; Updated examples of LISREL and EQS; A CD that contains all of the text's LISREL, EQS, and Mplus examples. A First Course in Structural Equation Modeling is intended as an introductory book for students...

  16. A spatial structural derivative model for ultraslow diffusion

    Directory of Open Access Journals (Sweden)

    Xu Wei

    2017-01-01

    Full Text Available This study investigates ultraslow diffusion by a spatial structural derivative, in which the exponential function e^x is selected as the structural function to construct the local structural derivative diffusion equation model. The analytical solution of the diffusion equation has a bi-exponential form. Its corresponding mean squared displacement is numerically calculated, and increases more slowly than the logarithmic function of time. The local structural derivative diffusion equation with the structural function e^x in space is an alternative physical and mathematical model with which to characterize a kind of ultraslow diffusion.

  17. Statistical Analysis and Modelling of Olkiluoto Structures

    International Nuclear Information System (INIS)

    Hellae, P.; Vaittinen, T.; Saksa, P.; Nummela, J.

    2004-11-01

    Posiva Oy is carrying out investigations for the disposal of spent nuclear fuel at the Olkiluoto site in SW Finland. The investigations have focused on the central part of the island. The layout design of the entire repository requires characterization of notably larger areas and must rely, at least at the current stage, on borehole information from a rather sparse network and on geophysical soundings providing information outside and between the holes. In this work, the structural data according to the current version of the Olkiluoto bedrock model are analyzed. The bedrock model relies heavily on the borehole data, although results of the seismic surveys and, for example, pumping tests are used in determining the orientation and continuation of the structures. The analysis particularly addresses questions related to the frequency and size of the structures. The structures observed in the boreholes are mainly dipping gently to the southeast. About 9% of the sample length belongs to structures. The proportion is higher in the upper parts of the rock. The number of fracture and crushed zones seems not to depend greatly on the depth, whereas the hydraulic features concentrate in the depth range above -100 m. Below level -300 m, the hydraulic conductivity occurs in connection with fractured zones. The hydraulic features especially, but also fracture and crushed zones, often occur in groups. The frequency of the structures (area of structures per total volume) is estimated to be of the order of 1/100 m. The size of the local structures was estimated by calculating the intersection of the zone to the nearest borehole where the zone has not been detected. Stochastic models using the Fracman software by Golder Associates were generated based on the bedrock model data complemented with the magnetic ground survey data. The seismic surveys (from boreholes KR5, KR13, KR14, and KR19) were used as alternative input data. The generated models were tested by

  18. Model Servqual Dengan Pendekatan Structural Equation Modeling (Studi Pada Mahasiswa Sistem Informasi)

    OpenAIRE

    Nurfaizal, Yusmedi

    2015-01-01

    This study is entitled "SERVQUAL MODEL WITH A STRUCTURAL EQUATION MODELING APPROACH (A Study of Information Systems Students)". The aim of this study is to examine the Servqual model with a Structural Equation Modeling approach for information systems students. The researcher decided to take a sample of 100 respondents. SEM analysis was used to test the model. The results show that tangibility, reliability, responsiveness, assurance and empathy have an influence...

  19. Structure functions in the chiral bag model

    International Nuclear Information System (INIS)

    Sanjose, V.; Vento, V.; Centro Mixto CSIC/Valencia Univ., Valencia

    1989-01-01

    We calculate the structure functions of an isoscalar nuclear target for deep inelastic scattering by leptons in an extended version of the chiral bag model which incorporates the quark-antiquark (q anti-q) structure of the pions in the cloud. Bjorken scaling and Regge behavior are satisfied. The model calculation reproduces the low-x behavior of the data but fails to explain the medium- to large-x behavior. Evolution of the quark structure functions seems inevitable to attempt a connection between the low-energy models and the high-energy behavior of quantum chromodynamics. (orig.)

  20. Structure functions in the chiral bag model

    Energy Technology Data Exchange (ETDEWEB)

    Sanjose, V.; Vento, V.

    1989-07-13

    We calculate the structure functions of an isoscalar nuclear target for deep inelastic scattering by leptons in an extended version of the chiral bag model which incorporates the quark-antiquark (q anti-q) structure of the pions in the cloud. Bjorken scaling and Regge behavior are satisfied. The model calculation reproduces the low-x behavior of the data but fails to explain the medium- to large-x behavior. Evolution of the quark structure functions seems inevitable to attempt a connection between the low-energy models and the high-energy behavior of quantum chromodynamics. (orig.).

  1. Emulating a flexible space structure: Modeling

    Science.gov (United States)

    Waites, H. B.; Rice, S. C.; Jones, V. L.

    1988-01-01

    Control Dynamics, in conjunction with Marshall Space Flight Center, has participated in the modeling and testing of Flexible Space Structures. Through the series of configurations tested and the many techniques used for collecting, analyzing, and modeling the data, many valuable insights have been gained and important lessons learned. This paper discusses the background of the Large Space Structure program, Control Dynamics' involvement in testing and modeling of the configurations (especially the Active Control Technique Evaluation for Spacecraft (ACES) configuration), the results from these two processes, and insights gained from this work.

  2. Scalable rule-based modelling of allosteric proteins and biochemical networks.

    Directory of Open Access Journals (Sweden)

    Julien F Ollivier

    2010-11-01

    Full Text Available Much of the complexity of biochemical networks comes from the information-processing abilities of allosteric proteins, be they receptors, ion-channels, signalling molecules or transcription factors. An allosteric protein can be uniquely regulated by each combination of input molecules that it binds. This "regulatory complexity" causes a combinatorial increase in the number of parameters required to fit experimental data as the number of protein interactions increases. It therefore challenges the creation, updating, and re-use of biochemical models. Here, we propose a rule-based modelling framework that exploits the intrinsic modularity of protein structure to address regulatory complexity. Rather than treating proteins as "black boxes", we model their hierarchical structure and, as conformational changes, internal dynamics. By modelling the regulation of allosteric proteins through these conformational changes, we often decrease the number of parameters required to fit data, and so reduce over-fitting and improve the predictive power of a model. Our method is thermodynamically grounded, imposes detailed balance, and also includes molecular cross-talk and the background activity of enzymes. We use our Allosteric Network Compiler to examine how allostery can facilitate macromolecular assembly and how competitive ligands can change the observed cooperativity of an allosteric protein. We also develop a parsimonious model of G protein-coupled receptors that explains functional selectivity and can predict the rank order of potency of agonists acting through a receptor. Our methodology should provide a basis for scalable, modular and executable modelling of biochemical networks in systems and synthetic biology.
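
    As a concrete, minimal instance of the conformation-based regulation this framework generalizes, the classical two-state MWC description is easy to sketch: the protein flips between tense and relaxed conformations, and ligand binding to n sites shifts that equilibrium. This is a textbook model, not the Allosteric Network Compiler itself, and the parameter values are assumptions.

```python
# Sketch of two-state (MWC) allostery: ligand binding shifts the T/R balance.
import numpy as np

def fraction_active(c, n=4, L=1000.0, Kr=1.0, Kt=100.0):
    """MWC fraction of protein in the active R state at ligand concentration c.

    L  = [T]/[R] equilibrium constant with no ligand bound
    Kr = dissociation constant of the R state, Kt = that of the T state
    """
    a_r, a_t = c / Kr, c / Kt
    R = (1 + a_r) ** n            # statistical weight of the R conformation
    T = L * (1 + a_t) ** n        # statistical weight of the T conformation
    return R / (R + T)

for c in [0.1, 1.0, 10.0, 100.0]:
    print(f"ligand {c:6.1f}: active fraction {fraction_active(c):.3f}")
```

    Adding competitive ligands or further conformations multiplies the number of states and weights, which is exactly the combinatorial growth a rule-based compiler is built to manage.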

  3. Structure-Based Turbulence Model

    National Research Council Canada - National Science Library

    Reynolds, W

    2000-01-01

    ... Maire carried out this work as part of his PhD research. During the award period we began to explore ways to simplify the structure-based modeling so that it could be used in repetitive engineering calculations...

  4. Modeling, Analysis, and Optimization Issues for Large Space Structures

    Science.gov (United States)

    Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)

    1983-01-01

    Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.

  5. Modeling and control of flexible space structures

    Science.gov (United States)

    Wie, B.; Bryson, A. E., Jr.

    1981-01-01

    The effects of actuator and sensor locations on transfer function zeros are investigated, using uniform bars and beams as generic models of flexible space structures. It is shown how finite element codes may be used directly to calculate transfer function zeros. The impulse response predicted by finite-dimensional models is compared with the exact impulse response predicted by the infinite dimensional models. It is shown that some flexible structures behave as if there were a direct transmission between actuator and sensor (equal numbers of zeros and poles in the transfer function). Finally, natural damping models for a vibrating beam are investigated since natural damping has a strong influence on the appropriate active control logic for a flexible structure.
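
    The dependence of transfer-function zeros on actuator and sensor locations can be sketched with a two-mode modal model: the zeros are the roots of the summed modal numerators. The modal frequencies and mode-shape values below are illustrative assumptions, not data from the paper.

```python
# Sketch: zeros of G(s) = sum_i phi_a[i]*phi_s[i] / (s^2 + omega[i]^2)
# for a two-mode flexible structure; placement data are assumed.
import numpy as np

omega = np.array([1.0, 3.0])            # modal frequencies (rad/s)
phi_a = np.array([1.0, 1.0])            # mode shapes at the actuator

def zeros_for_sensor(phi_s):
    # combine the two modal terms over a common denominator and take the
    # roots of the resulting numerator polynomial
    num = np.polynomial.Polynomial([0.0])
    for i in range(2):
        other = 1 - i
        num += phi_a[i] * phi_s[i] * np.polynomial.Polynomial(
            [omega[other] ** 2, 0.0, 1.0])      # (s^2 + omega_other^2)
    return num.roots()

print("collocated sensor    :", zeros_for_sensor(np.array([1.0, 1.0])))
print("non-collocated sensor:", zeros_for_sensor(np.array([1.0, -0.5])))
```

    With the collocated pair the computed zeros (about ±2.24j) interlace the poles at 1 and 3 rad/s; moving the sensor pushes the zeros outside the pole pair, previewing the loss of the benign collocated pole-zero pattern.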

  6. Structured population models in biology and epidemiology

    CERN Document Server

    Ruan, Shigui

    2008-01-01

    This book consists of six chapters written by leading researchers in mathematical biology. These chapters present recent and important developments in the study of structured population models in biology and epidemiology. Topics include population models structured by age, size, and spatial position; size-structured models for metapopulations, macroparasitic diseases, and prion proliferation; models for transmission of microparasites between host populations living on non-coincident spatial domains; spatiotemporal patterns of disease spread; the method of aggregation of variables in population dynamics; and biofilm models. It is suitable as a textbook for a mathematical biology course or a summer school at the advanced undergraduate and graduate level. It can also serve as a reference book for researchers looking for either interesting and specific problems to work on or useful techniques and discussions of some particular problems.

  7. Fatgraph models of RNA structure

    Directory of Open Access Journals (Sweden)

    Huang Fenix

    2017-01-01

    Full Text Available In this review paper we discuss fatgraphs as a conceptual framework for RNA structures. We discuss various notions of coarse-grained RNA structures and relate them to fatgraphs. We motivate and discuss the main intuition behind the fatgraph model and showcase its applicability to canonical as well as noncanonical base pairs. Recent discoveries regarding novel recursions of pseudoknotted (pk) configurations as well as their translation into context-free grammars for pk-structures are discussed. This is shown to allow for extending the concept of partition functions of sequences w.r.t. a fixed structure having non-crossing arcs to pk-structures. We discuss minimum free energy folding of pk-structures and combine the above results, outlining how to obtain an inverse folding algorithm for pk-structures.

  8. New tips for structure prediction by comparative modeling

    Science.gov (United States)

    Rayan, Anwar

    2009-01-01

    Comparative modelling is utilized to predict the 3-dimensional conformation of a given protein (target) based on its sequence alignment to an experimentally determined protein structure (template). The use of this technique is already rewarding and increasingly widespread in biological research and drug development. The accuracy of the predictions, as commonly accepted, depends on the degree of sequence identity between the target protein and the template. To assess the relationship between sequence identity and model quality, we carried out an analysis of a set of 4753 sequence and structure alignments. Throughout this research, model accuracy was measured by the root mean square deviation of Cα atoms of the target-template structures. Surprisingly, the results show that the sequence identity of the target protein to the template is not a good descriptor for predicting the accuracy of the 3-D structure model. However, in a large number of cases, comparative modelling with lower sequence identity between target and template proteins led to more accurate 3-D structure models. As a consequence of this study, we suggest new tips for improving the quality of comparative models, particularly for models whose target-template sequence identity is below 50%. PMID:19255646
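
    The accuracy measure used above is straightforward to compute. The sketch below evaluates Cα RMSD after optimal rigid-body (Kabsch) superposition; the coordinates are random stand-ins for two aligned Cα traces, since the study's alignment set is not reproduced here.

```python
# Sketch of C-alpha RMSD after Kabsch superposition; coordinates are synthetic.
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD of point sets P, Q (n x 3) after optimal rotation/translation."""
    P = P - P.mean(axis=0)                      # remove translation
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)           # SVD of the covariance matrix
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt         # optimal rotation
    return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))

rng = np.random.default_rng(6)
target = rng.normal(size=(120, 3)) * 10         # "experimental" C-alpha trace
model = target + rng.normal(scale=1.5, size=target.shape)  # imperfect model
print(f"C-alpha RMSD: {kabsch_rmsd(model, target):.2f} A")
```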

  9. Animal Modeling and Neurocircuitry of Dual Diagnosis

    Science.gov (United States)

    Chambers, R. Andrew

    2010-01-01

    Dual diagnosis is a problem of tremendous depth and scope, spanning many classes of mental disorders and addictive drugs. Animal models of psychiatric disorders studied in addiction paradigms suggest a unitary nature of mental illness and addiction vulnerability both on the neurocircuit and clinical-behavioral levels. These models provide platforms for exploring the interactive roles of biological, environmental and developmental factors on neurocircuits commonly involved in psychiatric and addiction diseases. While suggestive of the artifice of segregated research, training, and clinical cultures between psychiatric and addiction fields, this research may lead to more parsimonious, integrative and preventative treatments for dual diagnosis. PMID:20585464

  10. Residual Structures in Latent Growth Curve Modeling

    Science.gov (United States)

    Grimm, Kevin J.; Widaman, Keith F.

    2010-01-01

    Several alternatives are available for specifying the residual structure in latent growth curve modeling. Two specifications involve uncorrelated residuals and represent the most commonly used residual structures. The first, building on repeated measures analysis of variance and common specifications in multilevel models, forces residual variances…

  11. Finite element modeling of nanotube structures linear and non-linear models

    CERN Document Server

    Awang, Mokhtar; Muhammad, Ibrahim Dauda

    2016-01-01

    This book presents a new approach to modeling carbon structures such as graphene and carbon nanotubes using finite element methods, and addresses the latest advances in numerical studies for these materials. Based on the available findings, the book develops an effective finite element approach for modeling the structure and the deformation of graphene-based materials. Further, the modeling process for single-walled and multi-walled carbon nanotubes is demonstrated in detail.

  12. Optimization of mathematical models for soil structure interaction

    International Nuclear Information System (INIS)

    Vallenas, J.M.; Wong, C.K.; Wong, D.L.

    1993-01-01

    Accounting for soil-structure interaction in the design and analysis of major structures for DOE facilities can involve significant costs in terms of modeling and computer time. Using computer programs like SASSI for modeling major structures, especially buried structures, requires the use of models with a large number of soil-structure interaction nodes. The computer time requirements (and costs) increase as a function of the number of interaction nodes to the third power. The added computer and labor cost for data manipulation and post-processing can further increase the total cost. This paper provides a methodology to significantly reduce the number of interaction nodes. This is achieved by selectively increasing the thickness of soil layers modeled based on the need for the mathematical model to capture as input only those frequencies that can actually be transmitted by the soil media. The authors have rarely found that a model needs to capture frequencies as high as 33 Hz. Typically coarser meshes (and a lesser number of interaction nodes) are adequate
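
    A worked sketch of the tradeoff: if, as a common rule of thumb (an assumption here, not a statement from the paper), a soil layer of thickness h transmits wavelengths down to roughly 5h, then the highest passable frequency is f_max = V_s / (5h), and coarsening the layers cuts the interaction node count and, with it, the roughly cubic solution cost.

```python
# Sketch of layer-thickness vs. cutoff-frequency vs. cost; values are assumed.
v_s = 400.0                      # soil shear wave velocity (m/s), assumed
for h in [1.0, 2.0, 4.0]:        # candidate layer thicknesses (m)
    f_max = v_s / (5.0 * h)      # highest frequency the mesh can transmit
    n_nodes = 1000 / h           # interaction nodes for a fixed embedment, assumed
    rel_cost = (n_nodes / 1000) ** 3   # cubic scaling of computer time
    print(f"h={h:3.1f} m  f_max={f_max:5.1f} Hz  relative cost={rel_cost:.4f}")
```

    Doubling the layer thickness here still leaves a cutoff above 33 Hz while cutting the cubic cost by roughly a factor of eight, which is the economy the methodology exploits.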

  13. Model-implementation fidelity in cyber physical system design

    CERN Document Server

    Fabre, Christian

    2017-01-01

    This book puts in focus various techniques for checking the modeling fidelity of Cyber Physical Systems (CPS) with respect to the physical world they represent. The authors present modeling and analysis techniques representing different communities, from very different angles, discuss their possible interactions, and discuss the commonalities and differences between their practices. Coverage includes model-driven development, resource-driven development, statistical analysis, proofs of simulator implementation, compiler construction, power/temperature modeling of digital devices, high-level performance analysis, and code/device certification. Several industrial contexts are covered, including modeling of computing and communication, proof architecture models and statistics-based validation techniques. Addresses CPS design problems such as cross-application interference, parsimonious modeling, and trustful code production. Describes solutions, such as simulation for extra-functional properties, extension of cod...

  14. Bifactor Models Show a Superior Model Fit: Examination of the Factorial Validity of Parent-Reported and Self-Reported Symptoms of Attention-Deficit/Hyperactivity Disorders in Children and Adolescents.

    Science.gov (United States)

    Rodenacker, Klaas; Hautmann, Christopher; Görtz-Dorten, Anja; Döpfner, Manfred

    2016-01-01

    Various studies have demonstrated that bifactor models yield better solutions than models with correlated factors. However, the kind of bifactor model that is most appropriate is yet to be examined. The current study is the first to test bifactor models across the full age range (11-18 years) of adolescents using self-reports, and the first to test bifactor models with German subjects and German questionnaires. The study sample included children and adolescents aged between 6 and 18 years recruited from a German clinical sample (n = 1,081) and a German community sample (n = 642). To examine the factorial validity, we compared unidimensional, correlated factors and higher-order and bifactor models and further tested a modified incomplete bifactor model for measurement invariance. Bifactor models displayed superior model fit statistics compared to correlated factor models or second-order models. However, a more parsimonious incomplete bifactor model with only 2 specific factors (inattention and impulsivity) showed a good model fit and a better factor structure than the other bifactor models. Scalar measurement invariance was given in most group comparisons. An incomplete bifactor model would suggest that the specific inattention and impulsivity factors represent entities separable from the general attention-deficit/hyperactivity disorder construct and might, therefore, give way to a new approach to subtyping of children beyond and above attention-deficit/hyperactivity disorder. © 2016 S. Karger AG, Basel.

  15. Fitting ARMA Time Series by Structural Equation Models.

    Science.gov (United States)

    van Buuren, Stef

    1997-01-01

    This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)

  16. Structural Identifiability of Dynamic Systems Biology Models.

    Science.gov (United States)

    Villaverde, Alejandro F; Barreiro, Antonio; Papachristodoulou, Antonis

    2016-10-01

    A powerful way of gaining insight into biological systems is by creating a nonlinear differential equation model, which usually contains many unknown parameters. Such a model is called structurally identifiable if it is possible to determine the values of its parameters from measurements of the model outputs. Structural identifiability is a prerequisite for parameter estimation, and should be assessed before exploiting a model. However, this analysis is seldom performed due to the high computational cost involved in the necessary symbolic calculations, which quickly becomes prohibitive as the problem size increases. In this paper we show how to analyse the structural identifiability of a very general class of nonlinear models by extending methods originally developed for studying observability. We present results about models whose identifiability had not been previously determined, report unidentifiabilities that had not been found before, and show how to modify those unidentifiable models to make them identifiable. This method helps prevent problems caused by lack of identifiability analysis, which can compromise the success of tasks such as experiment design, parameter estimation, and model-based optimization. The procedure is called STRIKE-GOLDD (STRuctural Identifiability taKen as Extended-Generalized Observability with Lie Derivatives and Decomposition), and it is implemented in a MATLAB toolbox which is available as open source software. The broad applicability of this approach facilitates the analysis of the increasingly complex models used in systems biology and other areas.
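
    The core idea, treating unknown parameters as extra states and rank-testing a matrix of Lie-derivative gradients, fits in a few lines of symbolic code. The sketch below is not the STRIKE-GOLDD toolbox (which is Matlab) but a minimal Python rendition of the underlying observability-based test, on a toy model where only the product of two parameters is identifiable.

```python
# Sketch of an observability-based structural identifiability test:
# stack Lie derivatives of the output and check the symbolic rank.
import sympy as sp

x, p1, p2 = sp.symbols('x p1 p2', positive=True)
states = sp.Matrix([x, p1, p2])       # parameters appended as constant states
f = sp.Matrix([-p1 * p2 * x, 0, 0])   # dynamics: xdot = -p1*p2*x
h = sp.Matrix([x])                    # measured output y = x

# Lie derivatives: L^0 h = h, L^{k+1} h = grad(L^k h) * f
lies = [h]
for _ in range(2):
    lies.append(lies[-1].jacobian(states) * f)

O = sp.Matrix.vstack(*[L.jacobian(states) for L in lies])
print("rank:", O.rank(), "of", len(states))
# rank 2 < 3: p1 and p2 are not separately identifiable (only their product)
```

    Rescaling the model to a single parameter k = p1*p2 restores full rank, which is the kind of repair the paper describes for unidentifiable models.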

  17. Measuring Financial Cycles in a Model-Based Analysis: Empirical Evidence for the United States and the Euro Area

    NARCIS (Netherlands)

    Galati, E.B.G.; Hindrayanto, A.I.W.; Koopman, S.J.; Vlekke, M.

    2016-01-01

    We adopt an unobserved components time series model to extract financial cycles for the United States and the five largest euro area countries over the period 1970-2014. We find that financial cycles can parsimoniously be estimated by house prices and total credit or the credit-to-GDP ratio. We show

  18. Structural classification and a binary structure model for superconductors

    Institute of Scientific and Technical Information of China (English)

    Dong Cheng

    2006-01-01

    Based on structural and bonding features, a new classification scheme of superconductors is proposed. A superconductor can be partitioned into two parts, a superconducting active component and a supplementary component. Partially metallic covalent bonding is found to be a common feature in all superconducting active components, and the electron states of the atoms in the active components usually make a dominant contribution to the energy band near the Fermi surface. Possible directions to explore new superconductors are discussed based on the structural classification and the binary structure model.

  19. Towards methodical modelling: Differences between the structure and output dynamics of multiple conceptual models

    Science.gov (United States)

    Knoben, Wouter; Woods, Ross; Freer, Jim

    2016-04-01

    Conceptual hydrologic models arrange stores, fluxes and transformation functions to represent spatial and temporal dynamics, depending on the modeller's choices and intended use. They have the advantages of being computationally efficient, being relatively easy model structures to reconfigure and having relatively low input data demands. This makes them well-suited for large-scale and large-sample hydrology, where appropriately representing the dominant hydrologic functions of a catchment is a main concern. Given these requirements, the number of parameters in the model cannot be too high, to avoid equifinality and identifiability issues. This limits the number and complexity of dominant hydrologic processes the model can represent. Specific purposes and places thus require a specific model, and this has led to an abundance of conceptual hydrologic models. No structured overview of these models exists and there is no clear method to select appropriate model structures for different catchments. This study is a first step towards creating an overview of the elements that make up conceptual models, which may later assist a modeller in finding an appropriate model structure for a given catchment. To this end, this study brings together over 30 past and present conceptual models. The reviewed model structures are simply different configurations of three basic model elements (stores, fluxes and transformation functions), depending on the hydrologic processes the models are intended to represent. Differences also exist in the inner workings of the stores, fluxes and transformations, i.e. the mathematical formulations that describe each model element's intended behaviour. We investigate the hypothesis that different model structures can produce similar behavioural simulations. This can clarify the overview of model elements by grouping elements which are similar, which can improve model structure selection.
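
    The three basic elements are easy to show composed into a working model. The sketch below wires one store, input and output fluxes, and a moisture-limited transformation function into a single-bucket model; the structure and parameter values are illustrative assumptions, not any specific published model.

```python
# Sketch of a one-bucket conceptual model: a store, fluxes in and out,
# and a transformation function; structure and parameters are assumed.
def run_bucket(precip, pet, s_max=100.0, k=0.05):
    """One store S with saturation-excess runoff and linear baseflow."""
    s, flows = 0.0, []
    for p, e in zip(precip, pet):
        s += p                                # input flux: precipitation
        et = min(s, e * min(1.0, s / s_max))  # transformation: limited ET
        s -= et
        excess = max(0.0, s - s_max)          # saturation-excess overflow
        s = min(s, s_max)
        baseflow = k * s                      # linear-reservoir outflow flux
        s -= baseflow
        flows.append(excess + baseflow)
    return flows

q = run_bucket(precip=[0, 20, 50, 0, 10, 0, 0], pet=[2] * 7)
print([round(v, 1) for v in q])
```

    Swapping the linear outflow for a nonlinear one, or adding a second store, is exactly the kind of reconfiguration of elements the review catalogues.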

  20. Fitting Data to Model: Structural Equation Modeling Diagnosis Using Two Scatter Plots

    Science.gov (United States)

    Yuan, Ke-Hai; Hayashi, Kentaro

    2010-01-01

    This article introduces two simple scatter plots for model diagnosis in structural equation modeling. One plot contrasts a residual-based M-distance of the structural model with the M-distance for the factor score. It contains information on outliers, good leverage observations, bad leverage observations, and normal cases. The other plot contrasts…

  1. Structural modeling for multicell composite rotor blades

    Science.gov (United States)

    Rehfield, Lawrence W.; Atilgan, Ali R.

    1987-01-01

    Composite material systems are currently good candidates for aerospace structures, primarily for the design flexibility they offer, i.e., it is possible to tailor the material and manufacturing approach to the application. A working definition of elastic or structural tailoring is the use of structural concept, fiber orientation, ply stacking sequence, and a blend of materials to achieve specific performance goals. In the design process, choices of materials and dimensions are made which produce specific response characteristics, and which permit the selected goals to be achieved. Common choices for tailoring goals are preventing instabilities or vibration resonances or enhancing damage tolerance. An essential, enabling factor in the design of tailored composite structures is structural modeling that accurately, but simply, characterizes response. The objective of this paper is to present a new multicell beam model for composite rotor blades and to validate predictions based on the new model by comparison with a finite element simulation in three benchmark static load cases.

  2. Quality of Life of Family Caregivers of Children with Autism: The Mother's Perspective

    Science.gov (United States)

    Shu, Bih-Ching

    2009-01-01

    The purpose of this study was to explore the relationship between the quality of life (QOL) and feelings of mothers of a child with autism. The QOL instrument was also used. A total of 104 participants completed all questionnaires, which included the Taiwan version of the WHOQOL-BREF. A final robust parsimonious structural model showed a positive…

  3. Fluid-structure interaction and structural analyses using a comprehensive mitral valve model with 3D chordal structure.

    Science.gov (United States)

    Toma, Milan; Einstein, Daniel R; Bloodworth, Charles H; Cochran, Richard P; Yoganathan, Ajit P; Kunzelman, Karyn S

    2017-04-01

    Over the years, three-dimensional models of the mitral valve have generally been organized around a simplified anatomy. Leaflets have been typically modeled as membranes, tethered to discrete chordae typically modeled as one-dimensional, non-linear cables. Yet, recent, high-resolution medical images have revealed that there is no clear boundary between the chordae and the leaflets. In fact, the mitral valve has been revealed to be more of a webbed structure whose architecture is continuous with the chordae and their extensions into the leaflets. Such detailed images can serve as the basis of anatomically accurate, subject-specific models, wherein the entire valve is modeled with solid elements that more faithfully represent the chordae, the leaflets, and the transition between the two. These models have the potential to enhance our understanding of mitral valve mechanics and to re-examine the role of the mitral valve chordae, which heretofore have been considered to be 'invisible' to the fluid and to be of secondary importance to the leaflets. However, these new models also require a rethinking of modeling assumptions. In this study, we examine the conventional practice of loading the leaflets only and not the chordae in order to study the structural response of the mitral valve apparatus. Specifically, we demonstrate that fully resolved 3D models of the mitral valve require a fluid-structure interaction analysis to correctly load the valve even in the case of quasi-static mechanics. While a fluid-structure interaction model is still more computationally expensive than a structural-only model, we also show that advances in GPU computing have made such models tractable. Copyright © 2016 John Wiley & Sons, Ltd.

  4. The Box-Cox power transformation on nursing sensitive indicators: does it matter if structural effects are omitted during the estimation of the transformation parameter?

    Science.gov (United States)

    Hou, Qingjiang; Mahnken, Jonathan D; Gajewski, Byron J; Dunton, Nancy

    2011-08-19

    Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Simulation data from a 3 × 4 factorial treatment design, along with the Patient Falls and Patient Injury Falls measures from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals, were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Linear model ANOVA with Monte Carlo simulation, and mixed models with correlated error terms using the NDNQI examples, showed no substantial differences in the statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects.
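
    The contrast the study draws can be reproduced in miniature with scipy's profile-likelihood Box-Cox estimate: fit lambda while ignoring a structural (group) factor, run the ANOVA on the transformed data, and compare with the lambdas obtained when the factor is respected. The data below are synthetic assumptions, not NDNQI values.

```python
# Sketch of Box-Cox estimation with and without regard to a structural
# (group) effect; data are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
group = np.repeat([0, 1, 2], 200)                  # a structural factor
y = rng.lognormal(mean=0.5 * group, sigma=0.6)     # skewed outcome, group shifts

# (b) estimate lambda first, ignoring the factor, then test the group effect
_, lam_pooled = stats.boxcox(y)
yt = stats.boxcox(y, lmbda=lam_pooled)
print("pooled lambda:", round(lam_pooled, 3))
print("ANOVA on transformed data:",
      stats.f_oneway(*[yt[group == g] for g in range(3)]))

# (a) estimate lambda within groups (a crude stand-in for estimating the
# parameter while modeling the structural effect) and compare the lambdas
lams = [stats.boxcox(y[group == g])[1] for g in range(3)]
print("per-group lambdas:", [round(lm, 3) for lm in lams])
```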

  5. The Box-Cox power transformation on nursing sensitive indicators: Does it matter if structural effects are omitted during the estimation of the transformation parameter?

    Directory of Open Access Journals (Sweden)

    Gajewski Byron J

    2011-08-01

    Full Text Available Abstract Background Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Methods Simulation data from a 3 × 4 factorial treatment design, along with the Patient Falls and Patient Injury Falls measures from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals, were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Results Linear model ANOVA with Monte Carlo simulation, and mixed models with correlated error terms using the NDNQI examples, showed no substantial differences in the statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. Conclusions The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects.

  6. House price responsiveness of housing investments across major European economies

    OpenAIRE

    Gattini, Luca; Ganoulis, Ioannis

    2012-01-01

    In comparison with the large literature on house prices, housing investments have been studied far less. This paper investigates the behaviour of private residential investments for the six largest European economies, namely: Germany, France, Italy, Spain, the Netherlands and the United Kingdom. It employs a common modelling structure based on an error correction approach and country-specific models. First, co-integration among the parsimoniously specified set of fundamental variables is dete...

  7. Time series modelling of overflow structures

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.

    1997-01-01

    The dynamics of a storage pipe is examined using a grey-box model based on on-line measured data. The grey-box modelling approach uses a combination of physically-based and empirical terms in the model formulation. The model provides an on-line state estimate of the overflows, pumping capacities...... and available storage capacity in the pipe as well as predictions of future states. A linear overflow relation is found, differing significantly from the traditional modelling approach. This is due to complicated overflow structures in a hydraulic sense where the overflow is governed by inertia from the inflow...... to the overflow structures. The capacity of a pump draining the storage pipe has been estimated for two rain events, revealing that the pump was malfunctioning during the first rain event. The grey-box modelling approach is applicable for automated on-line surveillance and control. (C) 1997 IAWQ. Published...

  8. Online Semiparametric Identification of Lithium-Ion Batteries Using the Wavelet-Based Partially Linear Battery Model

    Directory of Open Access Journals (Sweden)

    Caiping Zhang

    2013-05-01

    Full Text Available Battery model identification is very important for reliable battery management as well as for the battery system design process. The common problem in identifying battery models is how to determine the most appropriate mathematical model structure and parameterized coefficients based on the measured terminal voltage and current. This paper proposes a novel semiparametric approach using the wavelet-based partially linear battery model (PLBM) and a recursive penalized wavelet estimator for online battery model identification. Three main contributions are presented. First, the semiparametric PLBM is proposed to simulate the battery dynamics. Compared with conventional electrical models of a battery, the proposed PLBM is equipped with a semiparametric partially linear structure, which includes a parametric part (involving the linear equivalent-circuit parameters) and a nonparametric part (involving the open-circuit voltage, OCV). Thus, even with little prior knowledge about the OCV, the PLBM can be identified using a semiparametric identification framework. Second, we model the nonparametric part of the PLBM using the truncated wavelet multiresolution analysis (MRA) expansion, which leads to a parsimonious model structure that is highly desirable for model identification; using this expansion, the PLBM can be represented in a linear-in-parameter manner. Finally, to exploit the sparsity of the wavelet MRA representation and allow for online implementation, a penalized wavelet estimator that uses a modified online cyclic coordinate descent algorithm is proposed to identify the PLBM in a recursive fashion. The simulation and experimental results demonstrate that the proposed PLBM with the corresponding identification algorithm can accurately simulate the dynamic behavior of a lithium-ion battery in the Federal Urban Driving Schedule tests.
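
    The partially linear structure is compact enough to sketch: a linear resistive term plus an OCV curve expanded in a truncated Haar wavelet basis, fitted with an l1 penalty (scikit-learn's Lasso, whose coordinate-descent solver echoes, in batch rather than recursive form, the paper's cyclic coordinate descent). The data, the single-resistor circuit and all parameters below are synthetic assumptions.

```python
# Sketch of a partially linear battery model: v = OCV(soc) - R*i, with
# OCV expanded in a truncated Haar basis and fitted with an l1 penalty.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n = 512
soc = np.sort(rng.uniform(0, 1, n))            # state of charge in [0, 1)
current = rng.normal(0, 1, n)
ocv_true = 3.2 + 0.8 * soc + 0.1 * np.sin(6 * np.pi * soc)
v = ocv_true - 0.05 * current + rng.normal(0, 0.01, n)   # assumed R = 0.05 ohm

def haar_design(t, levels=5):
    """Truncated Haar MRA: scaling function plus wavelets up to `levels`."""
    cols = [np.ones_like(t)]
    for j in range(levels):
        for k in range(2 ** j):
            lo, mid, hi = k / 2**j, (k + 0.5) / 2**j, (k + 1) / 2**j
            psi = np.where((t >= lo) & (t < mid), 1.0,
                  np.where((t >= mid) & (t < hi), -1.0, 0.0))
            cols.append(2 ** (j / 2) * psi)
    return np.column_stack(cols)

X = np.column_stack([current, haar_design(soc)])  # parametric + wavelet parts
fit = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(X, v)
print("estimated R:", round(-fit.coef_[0], 4))    # near the assumed 0.05
print("nonzero wavelet coefficients:", int(np.sum(fit.coef_[1:] != 0)))
```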

  9. TESTING TREE-CLASSIFIER VARIANTS AND ALTERNATE MODELING METHODOLOGIES IN THE EAST GREAT BASIN MAPPING UNIT OF THE SOUTHWEST REGIONAL GAP ANALYSIS PROJECT (SW REGAP)

    Science.gov (United States)

    We tested two methods for dataset generation and model construction, and three tree-classifier variants to identify the most parsimonious and thematically accurate mapping methodology for the SW ReGAP project. Competing methodologies were tested in the East Great Basin mapping un...

  10. Vibration modeling of structural fuzzy with continuous boundary

    DEFF Research Database (Denmark)

    Friis, Lars; Ohlrich, Mogens

    2008-01-01

    a multitude of different sprung masses each strongly resisting any motion of the main structure (master) at their base antiresonance. The “theory of structural fuzzy” is intended for modeling such high damping. In the present article the theory of fuzzy structures is briefly outlined and a method of modeling...

  11. Verifying the Invariance of a Measurement Model for Achievement Goals Theory by Using the Multiple Group Structural Equation Modeling

    Directory of Open Access Journals (Sweden)

    吳中勤 Chung-Chin Wu

    2014-09-01

    Full Text Available The appropriateness of measurement content, the ambiguity of theoretical concepts, and the transferability of the theoretical model are three problems in achievement goal research that have yet to receive comprehensive attention or empirical examination. The purposes of this study were (1) to address the problems of achievement goal measurement and compile a Mandarin version of a 3×2 achievement goal scale, choosing the best-fitting and most parsimonious model through model comparisons; (2) to examine the multiple group invariance of the theoretical model by utilizing multiple group structural equation modeling; and (3) to investigate the influences of gender and educational phase on each of the 3×2 achievement goals. The results are summarized as follows: (1) the 3×2 achievement goal measure yielded favorable reliability and construct validity, and was the best-fitting and most parsimonious of the compared models; (2) the measurement showed favorable invariance across gender and moderate invariance across educational phases; (3) in general, boys pursued higher approach-oriented and avoidance-oriented goals, revealing the complexity of achievement goals. Recommendations based on these findings are offered at the end of the paper.

  12. Algorithms for computing parsimonious evolutionary scenarios for genome evolution, the last universal common ancestor and dominance of horizontal gene transfer in the evolution of prokaryotes

    Directory of Open Access Journals (Sweden)

    Galperin Michael Y

    2003-01-01

    Full Text Available Abstract Background Comparative analysis of sequenced genomes reveals numerous instances of apparent horizontal gene transfer (HGT), at least in prokaryotes, and indicates that lineage-specific gene loss might have been even more common in evolution. This complicates the notion of a species tree, which needs to be re-interpreted as a prevailing evolutionary trend, rather than the full depiction of evolution, and makes reconstruction of ancestral genomes a non-trivial task. Results We addressed the problem of constructing parsimonious scenarios for individual sets of orthologous genes given a species tree. The orthologous sets were taken from the database of Clusters of Orthologous Groups of proteins (COGs). We show that the phyletic patterns (patterns of presence-absence in completely sequenced genomes) of almost 90% of the COGs are inconsistent with the hypothetical species tree. Algorithms were developed to reconcile the phyletic patterns with the species tree by postulating gene loss, COG emergence and HGT (the latter two classes of events were collectively treated as gene gains). We prove that each of these algorithms produces a parsimonious evolutionary scenario, which can be represented as a mapping of loss and gain events on the species tree. The distribution of the evolutionary events among the tree nodes substantially depends on the underlying assumptions of the reconciliation algorithm, e.g. whether or not independent gene gains (gain after loss after gain) are permitted. Biological considerations suggest that, on average, gene loss might be a more likely event than gene gain. Therefore different gain penalties were used and the resulting series of reconstructed gene sets for the last universal common ancestor (LUCA) of the extant life forms were analysed. The number of genes in the reconstructed LUCA gene sets grows as the gain penalty increases. However, qualitative examination of the LUCA versions reconstructed with different gain penalties
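    The core reconciliation step can be illustrated with a toy weighted-parsimony dynamic program (a Sankoff-style recursion; the paper's actual algorithms differ in detail). Losses cost 1 and gains cost an adjustable penalty g, and raising g enlarges the inferred ancestral (LUCA) gene set, as described above. The tree, phyletic pattern and costs below are made-up examples.

```python
# Toy reconciliation of a phyletic (presence/absence) pattern with a species
# tree by weighted parsimony: losses cost 1, gains cost g. A small
# Sankoff-style dynamic program illustrating the reconciliation idea;
# the gain penalty g tunes how gene-rich the inferred LUCA becomes.
INF = float("inf")

def min_cost(tree, leaf_state, g):
    """tree: ('name',) leaf or (left, right); returns {0,1} -> cost at root."""
    if len(tree) == 1:                               # leaf node
        s = leaf_state[tree[0]]
        return {0: 0 if s == 0 else INF, 1: 0 if s == 1 else INF}
    left, right = (min_cost(sub, leaf_state, g) for sub in tree)
    cost = {}
    for parent in (0, 1):
        total = 0.0
        for child in (left, right):
            # stay in state, or pay g for a gain (0->1) / 1 for a loss (1->0)
            stay = child[parent]
            switch = child[1 - parent] + (g if parent == 0 else 1)
            total += min(stay, switch)
        cost[parent] = total
    return cost

tree = ((("A",), ("B",)), (("C",), ("D",)))          # hypothetical species tree
pattern = {"A": 1, "B": 0, "C": 1, "D": 1}           # gene present in A, C, D
for g in (0.5, 1.0, 2.0):
    c = min_cost(tree, pattern, g)
    root = "present in LUCA" if c[1] <= c[0] else "absent from LUCA"
    print(f"gain penalty {g}: scenario cost {min(c.values()):.1f}, gene {root}")
```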

  13. The WITCH Model. Structure, Baseline, Solutions.

    Energy Technology Data Exchange (ETDEWEB)

    Bosetti, V.; Massetti, E.; Tavoni, M.

    2007-07-01

    WITCH - World Induced Technical Change Hybrid - is a regionally disaggregated hard-link hybrid global model with a neoclassical optimal growth structure (top down) and energy input detail (bottom up). The model endogenously accounts for technological change, both through learning curves affecting prices of new vintages of capital and through R and D investments. The model features the main economic and environmental policies in each world region as the outcome of a dynamic game. WITCH belongs to the class of Integrated Assessment Models as it possesses a climate module that feeds climate changes back into the economy. In this paper we provide a thorough discussion of the model structure and baseline projections. We report detailed information on the evolution of energy demand, technology and CO2 emissions. Finally, we explicitly quantify the role of free riding in determining the emissions scenarios. (auth)

  14. Visualization of RNA structure models within the Integrative Genomics Viewer.

    Science.gov (United States)

    Busan, Steven; Weeks, Kevin M

    2017-07-01

    Analyses of the interrelationships between RNA structure and function are increasingly important components of genomic studies. The SHAPE-MaP strategy enables accurate RNA structure probing and realistic structure modeling of kilobase-length noncoding RNAs and mRNAs. Existing tools for visualizing RNA structure models are not suitable for efficient analysis of long, structurally heterogeneous RNAs. In addition, structure models are often advantageously interpreted in the context of other experimental data and gene annotation information, for which few tools currently exist. We have developed a module within the widely used and well supported open-source Integrative Genomics Viewer (IGV) that allows visualization of SHAPE and other chemical probing data, including raw reactivities, data-driven structural entropies, and data-constrained base-pair secondary structure models, in context with linear genomic data tracks. We illustrate the usefulness of visualizing RNA structure in the IGV by exploring structure models for a large viral RNA genome, comparing bacterial mRNA structure in cells with its structure under cell- and protein-free conditions, and comparing a noncoding RNA structure modeled using SHAPE data with a base-pairing model inferred through sequence covariation analysis. © 2017 Busan and Weeks; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  15. The Structure of Preschoolers' Emotion Knowledge: Model Equivalence and Validity Using a Structural Equation Modeling Approach

    Science.gov (United States)

    Bassett, Hideko Hamada; Denham, Susanne; Mincic, Melissa; Graling, Kelly

    2012-01-01

    Research Findings: A theory-based 2-factor structure of preschoolers' emotion knowledge (i.e., recognition of emotional expression and understanding of emotion-eliciting situations) was tested using confirmatory factor analysis. Compared to 1- and 3-factor models, the 2-factor model showed a better fit to the data. The model was found to be…

  16. Linear causal modeling with structural equations

    CERN Document Server

    Mulaik, Stanley A

    2009-01-01

    Emphasizing causation as a functional relationship between variables that describe objects, Linear Causal Modeling with Structural Equations integrates a general philosophical theory of causation with structural equation modeling (SEM) that concerns the special case of linear causal relations. In addition to describing how the functional relation concept may be generalized to treat probabilistic causation, the book reviews historical treatments of causation and explores recent developments in experimental psychology on studies of the perception of causation. It looks at how to perceive causal

  17. Mechanical Model Development for Composite Structural Supercapacitors

    Science.gov (United States)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Santiago, Diana; Bednarcyk, Brett A.

    2016-01-01

    Novel composite structural supercapacitor concepts have recently been developed as a means both to store electrical charge and to provide modest mechanical load carrying capability. Double-layer composite supercapacitors are often fabricated by impregnating a woven carbon fiber fabric, which serves as the electrodes, with a structural polymer electrolyte. Polypropylene or a glass fabric is often used as the separator material. Recent research has been primarily limited to evaluating these composites experimentally. In this study, mechanical models based on the Multiscale Generalized Method of Cells (MSGMC) were developed and used to calculate the shear and tensile properties and response of two composite structural supercapacitors from the literature. The modeling approach was first validated against traditional composite laminate data. MSGMC models for composite supercapacitors were developed, and accurate elastic shear/tensile properties were obtained. It is envisioned that further development of the models presented in this work will facilitate the design of composite components for aerospace and automotive applications and can be used to screen candidate constituent materials for inclusion in future composite structural supercapacitor concepts.

  18. On the Use of Structural Equation Models in Marketing Modeling

    NARCIS (Netherlands)

    Steenkamp, J.E.B.M.; Baumgartner, H.

    2000-01-01

    We reflect on the role of structural equation modeling (SEM) in marketing modeling and managerial decision making. We discuss some benefits provided by SEM and alert marketing modelers to several recent developments in SEM in three areas: measurement analysis, analysis of cross-sectional data, and

  19. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  20. An elastic-plastic contact model for line contact structures

    Science.gov (United States)

    Zhu, Haibin; Zhao, Yingtao; He, Zhifeng; Zhang, Ruinan; Ma, Shaopeng

    2018-06-01

    Although numerical simulation tools are now very powerful, the development of analytical models remains very important for predicting the mechanical behaviour of line contact structures, both for a deeper understanding of contact problems and for engineering applications. For the line contact structures widely used in the engineering field, few analytical models are available for predicting the mechanical behaviour when the structures deform plastically, because classical Hertzian theory becomes invalid in that regime. The present study therefore proposed an elastic-plastic model for line contact structures based on an understanding of the yield mechanism. A mathematical expression describing the global relationship between load history and contact width evolution of line contact structures was obtained. The proposed model was verified through an actual line contact test and a corresponding numerical simulation. The results confirmed that this model can be used to accurately predict the elastic-plastic mechanical behaviour of a line contact structure.

  1. Modeling of soil-water-structure interaction

    DEFF Research Database (Denmark)

    Tang, Tian

    as the developed nonlinear soil displacements and stresses under monotonic and cyclic loading. With the FVM nonlinear coupled soil models as a basis, multiphysics modeling of wave-seabed-structure interaction is carried out. The computations are done in an open source code environment, OpenFOAM, where FVM models...

  2. Antibody structural modeling with prediction of immunoglobulin structure (PIGS)

    DEFF Research Database (Denmark)

    Marcatili, Paolo; Olimpieri, Pier Paolo; Chailyan, Anna

    2014-01-01

    Antibodies (or immunoglobulins) are crucial for defending organisms from pathogens, but they are also key players in many medical, diagnostic and biotechnological applications. The ability to predict their structure and the specific residues involved in antigen recognition has several useful...... applications in all of these areas. Over the years, we have developed or collaborated in developing a strategy that enables researchers to predict the 3D structure of antibodies with a very satisfactory accuracy. The strategy is completely automated and extremely fast, requiring only a few minutes (∼10 min...... on average) to build a structural model of an antibody. It is based on the concept of canonical structures of antibody loops and on our understanding of the way light and heavy chains pack together....

  3. ECONGAS - model structure

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report documents a numerical simulation model of the natural gas market in Germany, France, the Netherlands and Belgium. It is a part of a project called "Internationalization and structural change in the gas market" aiming to enhance the understanding of the factors behind the current and upcoming changes in the European gas market, especially the downstream part of the gas chain. The model takes European border prices of gas as given, adds transmission and distribution cost and profit margins as well as gas taxes to calculate gas prices. The model includes demand sub-models for households, chemical industry, other industry, the commercial sector and electricity generation. Demand responses to price changes are assumed to take time, and the long run effects are significantly larger than the short run effects. For the household sector and the electricity sector, the dynamics are modeled by distinguishing between energy use in the old and new capital stock. In addition to prices and the activity level (GDP), the model includes the extension of the gas network as a potentially important variable in explaining the development of gas demand. The properties of numerical simulation models are often described by dynamic multipliers, which describe the behaviour of important variables when key explanatory variables are changed. At the end, the report shows the results of a model experiment where the costs in transmission and distribution were reduced. 6 refs., 9 figs., 1 tab.

  4. ECONGAS - model structure

    International Nuclear Information System (INIS)

    1997-01-01

    This report documents a numerical simulation model of the natural gas market in Germany, France, the Netherlands and Belgium. It is a part of a project called "Internationalization and structural change in the gas market" aiming to enhance the understanding of the factors behind the current and upcoming changes in the European gas market, especially the downstream part of the gas chain. The model takes European border prices of gas as given, adds transmission and distribution cost and profit margins as well as gas taxes to calculate gas prices. The model includes demand sub-models for households, chemical industry, other industry, the commercial sector and electricity generation. Demand responses to price changes are assumed to take time, and the long run effects are significantly larger than the short run effects. For the household sector and the electricity sector, the dynamics are modeled by distinguishing between energy use in the old and new capital stock. In addition to prices and the activity level (GDP), the model includes the extension of the gas network as a potentially important variable in explaining the development of gas demand. The properties of numerical simulation models are often described by dynamic multipliers, which describe the behaviour of important variables when key explanatory variables are changed. At the end, the report shows the results of a model experiment where the costs in transmission and distribution were reduced. 6 refs., 9 figs., 1 tab.
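    The slow demand response and the dynamic multipliers described above can be illustrated with a simple partial-adjustment sketch; the elasticities and adjustment speed below are hypothetical placeholders, not the ECONGAS specification.

```python
# Hypothetical partial-adjustment sketch of the "slow demand response"
# mechanism: log demand moves a fraction `speed` of the way toward its
# long-run level each year, so the long-run price effect exceeds the
# short-run one. Not the actual ECONGAS equations.
import numpy as np

def gas_demand_path(price, gdp, eps_p=-0.6, eps_y=0.8, speed=0.25, years=20):
    logq = 0.0                                   # log demand, normalized
    path = []
    for _ in range(years):
        target = eps_p * np.log(price) + eps_y * np.log(gdp)  # long-run level
        logq += speed * (target - logq)          # partial adjustment
        path.append(np.exp(logq))
    return np.array(path)

# 10% permanent price increase: the short-run response is small, the
# long-run response approaches the full elasticity eps_p.
base = gas_demand_path(price=1.0, gdp=1.0)
shock = gas_demand_path(price=1.1, gdp=1.0)
mult = shock / base                              # dynamic multipliers
print("year 1 effect:", mult[0] - 1, " year 20 effect:", mult[-1] - 1)
```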

  5. Multiplicity Control in Structural Equation Modeling

    Science.gov (United States)

    Cribbie, Robert A.

    2007-01-01

    Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…

  6. Search-based model identification of smart-structure damage

    Science.gov (United States)

    Glass, B. J.; Macalou, A.

    1991-01-01

    This paper describes the use of a combined model and parameter identification approach, based on modal analysis and artificial intelligence (AI) techniques, for identifying damage or flaws in a rotating truss structure incorporating embedded piezoceramic sensors. This smart structure example is representative of a class of structures commonly found in aerospace systems and next generation space structures. Artificial intelligence techniques of classification, heuristic search, and an object-oriented knowledge base are used in an AI-based model identification approach. A finite model space is classified into a search tree, over which a variant of best-first search is used to identify the model whose stored response most closely matches that of the input. Newly-encountered models can be incorporated into the model space. This adaptiveness demonstrates the potential for learning control. Following this output-error model identification, numerical parameter identification is used to further refine the identified model. Given the rotating truss example in this paper, noisy data corresponding to various damage configurations are input to both this approach and a conventional parameter identification method. The combination of the AI-based model identification with parameter identification is shown to lead to smaller parameter corrections than required by the use of parameter identification alone.
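    A minimal sketch of the search component (illustrative only; the original system couples this with modal analysis and an object-oriented knowledge base): candidate damage models form a tree, and a best-first strategy expands the model whose stored response best matches the measured response.

```python
# Minimal best-first search over a tree-structured model space, in the
# spirit of the AI-based identification described above. Each node is a
# candidate damage model with a stored response; the score is the output
# error against the measured response. Model space and data are invented.
import heapq
import numpy as np

def output_error(model_response, measured):
    return float(np.linalg.norm(model_response - measured))

def best_first_identify(root, measured, children_of):
    """root: model id; children_of: dict model -> (response, child ids)."""
    resp, _ = children_of[root]
    frontier = [(output_error(resp, measured), root)]
    best = (np.inf, None)
    while frontier:
        err, model = heapq.heappop(frontier)     # expand most promising model
        if err < best[0]:
            best = (err, model)
        for child in children_of[model][1]:
            c_resp, _ = children_of[child]
            heapq.heappush(frontier, (output_error(c_resp, measured), child))
    return best

# Tiny hypothetical model space: responses are modal frequency vectors.
children_of = {
    "undamaged": (np.array([10.0, 25.0, 41.0]), ["strut-3", "joint-7"]),
    "strut-3":   (np.array([9.1, 24.8, 40.2]), []),
    "joint-7":   (np.array([9.9, 23.5, 38.0]), []),
}
measured = np.array([9.2, 24.7, 40.1])
print(best_first_identify("undamaged", measured, children_of))
```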

  7. Handbook of structural equation modeling

    CERN Document Server

    Hoyle, Rick H

    2012-01-01

    The first comprehensive structural equation modeling (SEM) handbook, this accessible volume presents both the mechanics of SEM and specific SEM strategies and applications. The editor, contributors, and editorial advisory board are leading methodologists who have organized the book to move from simpler material to more statistically complex modeling approaches. Sections cover the foundations of SEM; statistical underpinnings, from assumptions to model modifications; steps in implementation, from data preparation through writing the SEM report; and basic and advanced applications, inclu

  8. Modeling High Frequency Data with Long Memory and Structural Change: A-HYEGARCH Model

    Directory of Open Access Journals (Sweden)

    Yanlin Shi

    2018-03-01

    Full Text Available In this paper, we propose an Adaptive Hyperbolic EGARCH (A-HYEGARCH) model to estimate the long memory of high frequency time series with potential structural breaks. Based on the original HYGARCH model, we use the logarithm transformation to ensure the positivity of conditional variance. The structural change is further allowed via a flexible time-dependent intercept in the conditional variance equation. To demonstrate its effectiveness, we perform a range of Monte Carlo studies considering various data generating processes with and without structural changes. Empirical testing of the A-HYEGARCH model is also conducted using high frequency returns of S&P 500, FTSE 100, ASX 200 and Nikkei 225. Our simulation and empirical evidence demonstrate that the proposed A-HYEGARCH model outperforms various competing specifications and can effectively control for structural breaks. Therefore, our model may provide more reliable estimates of long memory and could be a widely useful tool for modelling financial volatility in other contexts.
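    The two ingredients named above, a log-variance recursion (so positivity needs no parameter constraints) and a time-dependent intercept that absorbs structural change, can be sketched as follows. This is a plain EGARCH(1,1)-style simulation with a single level shift, not the full A-HYEGARCH with hyperbolic long memory; all parameter values are illustrative.

```python
# Hedged sketch: EGARCH(1,1)-style log-variance recursion with a
# time-dependent intercept (one level shift halfway through the sample).
# NOT the full A-HYEGARCH, which adds hyperbolic (long) memory terms.
import numpy as np

def simulate(n=2000, omega0=-0.30, omega1=-0.24, alpha=0.10, beta=0.97, seed=1):
    rng = np.random.default_rng(seed)
    omega = np.where(np.arange(n) < n // 2, omega0, omega1)  # structural break
    logvar = omega0 / (1.0 - beta)               # start at stationary level
    r = np.empty(n)
    for t in range(n):
        z = rng.standard_normal()
        r[t] = np.exp(0.5 * logvar) * z
        # log-variance: conditional variance is positive by construction
        logvar = omega[t] + beta * logvar + alpha * (abs(z) - np.sqrt(2 / np.pi))
    return r

r = simulate()
print("annualized vol, first half :", r[:1000].std() * 252 ** 0.5)
print("annualized vol, second half:", r[1000:].std() * 252 ** 0.5)
```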

  9. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    Science.gov (United States)

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
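    The following sketch shows the additive-decomposition idea with Bernstein polynomial bases (a simplified stand-in for the paper's Bézier-Bernstein construction; degrees and data are illustrative). Each input dimension gets its own univariate basis whose nonnegative, partition-of-unity functions are interpretable as fuzzy membership functions, and the model stays linear in the weights, so least squares suffices.

```python
# Sketch of the additive decomposition: one univariate Bernstein basis per
# input dimension, learned jointly by ordinary least squares. Avoids the
# tensor-product blow-up (curse of dimensionality) of conventional fuzzy
# grids because the bases are additive, not multiplied together.
import numpy as np
from math import comb

def bernstein_basis(x, degree):
    """degree+1 Bernstein polynomials evaluated at x in [0, 1]."""
    return np.column_stack([
        comb(degree, k) * x**k * (1 - x)**(degree - k)
        for k in range(degree + 1)
    ])

def fit_additive(X, y, degree=5):
    """Additive model: stack each input's basis, solve one least squares."""
    design = np.hstack([bernstein_basis(X[:, j], degree)
                        for j in range(X.shape[1])])
    w, *_ = np.linalg.lstsq(design, y, rcond=None)
    return w, design

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (500, 3))                  # three inputs
y = np.sin(3 * X[:, 0]) + X[:, 1]**2 - X[:, 2]   # additive target function
w, design = fit_additive(X, y)
print("RMS error:", np.sqrt(np.mean((design @ w - y) ** 2)))
```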

  10. Combinatorial structures to modeling simple games and applications

    Science.gov (United States)

    Molinero, Xavier

    2017-09-01

    We connect three different topics: combinatorial structures, game theory and chemistry. In particular, we establish the bases to represent some simple games, defined as influence games, and molecules, defined from atoms, by using combinatorial structures. First, we characterize simple games as influence games using influence graphs. This lets us model simple games as combinatorial structures (from the viewpoint of structures or graphs). Second, we formally define molecules as combinations of atoms. This lets us model molecules as combinatorial structures (from the viewpoint of combinations). It remains open to generate such combinatorial structures using specific techniques such as genetic algorithms, (meta-)heuristic algorithms and parallel programming, among others.

  11. Database structure for plasma modeling programs

    International Nuclear Information System (INIS)

    Dufresne, M.; Silvester, P.P.

    1993-01-01

    Continuum plasma models often use a finite element (FE) formulation. Another approach is simulation models based on particle-in-cell (PIC) formulation. The model equations generally include four nonlinear differential equations specifying the plasma parameters. In simulation a large number of equations must be integrated iteratively to determine the plasma evolution from an initial state. The complexity of the resulting programs is a combination of the physics involved and the numerical method used. The data structure requirements of plasma programs are stated by defining suitable abstract data types. These abstractions are then reduced to data structures and a group of associated algorithms. These are implemented in an object oriented language (C++) as object classes. Base classes encapsulate data management into a group of common functions such as input-output management, instance variable updating and selection of objects by Boolean operations on their instance variables. Operations are thereby isolated from specific element types and uniformity of treatment is guaranteed. Creation of the data structures and associated functions for a particular plasma model is reduced merely to defining the finite element matrices for each equation, or the equations of motion for PIC models. Changes in numerical method or equation alterations are readily accommodated through the mechanism of inheritance, without modification of the data management software. The central data type is an n-relation implemented as a tuple of variable internal structure. Any finite element program may be described in terms of five relational tables: nodes, boundary conditions, sources, material/particle descriptions, and elements. Equivalently, plasma simulation programs may be described using four relational tables: cells, boundary conditions, sources, and particle descriptions

  12. LYRA, a webserver for lymphocyte receptor structural modeling

    DEFF Research Database (Denmark)

    Klausen, Michael Schantz; Anderson, Mads Valdemar; Jespersen, Martin Closter

    2015-01-01

    the structural class of each hypervariable loop, selects the best templates in an automatic fashion, and provides within minutes a complete 3D model that can be downloaded or inspected online. Experienced users can manually select or exclude template structures according to case specific information. LYRA......The accurate structural modeling of B- and T-cell receptors is fundamental to gain a detailed insight in the mechanisms underlying immunity and in developing new drugs and therapies. The LYRA (LYmphocyte Receptor Automated modeling) web server (http://www.cbs.dtu.dk/services/LYRA/) implements...... a complete and automated method for building of B- and T-cell receptor structural models starting from their amino acid sequence alone. The webserver is freely available and easy to use for non-specialists. Upon submission, LYRA automatically generates alignments using ad hoc profiles, predicts...

  13. Anisotropic structure of the Inner Core and its uncertainty from transdimensional body-wave tomography

    Science.gov (United States)

    Burdick, S.; Waszek, L.; Lekic, V.

    2017-12-01

    Studies of body waves and normal modes have revealed strong quasi-hemispheric variations in seismic velocity, anisotropy and attenuation in the inner core. A rigorous mapping of the hemispheric boundaries and smaller scale heterogeneity within the hemispheres is crucial for distinguishing between hypotheses about inner core formation and evolution. However, the relatively sparse and heterogeneous distribution of paths piercing the inner core creates difficulties in constraining the boundaries and sub-hemispheric variations with body wave tomography. Damped tomographic inversions tend to smooth out strong structural gradients and risk carrying the imprint of sparse path coverage, while under-parametrized models can miss pertinent small-scale variations. For these reasons, we apply a probabilistic and transdimensional (THB) tomography method on core-sensitive differential P-wave traveltimes. The THB approach is well-suited to the problem of inner core tomography since 1) it remains parsimonious by allowing the parametrization to be determined by the requirements of the data and 2) it preserves sharp boundaries in seismic properties, allowing it to capture both short-wavelength structure and the strong hemispheric dichotomy. Furthermore, the approach yields estimates of uncertainty in isotropic and anisotropic velocity, hemispheric boundary geometry, anisotropy axis and the tradeoffs between these properties. We quantify the effects of mantle heterogeneity on inner core structure and place constraints on inner core dynamics and mineralogy.

  14. Exploratory Topology Modelling of Form-Active Hybrid Structures

    DEFF Research Database (Denmark)

    Holden Deleuran, Anders; Pauly, Mark; Tamke, Martin

    2016-01-01

    The development of novel form-active hybrid structures (FAHS) is impeded by a lack of modelling tools that allow for exploratory topology modelling of shaped assemblies. We present a flexible and real-time computational design modelling pipeline developed for the exploratory modelling of FAHS...... that enables designers and engineers to iteratively construct and manipulate form-active hybrid assembly topology on the fly. The pipeline implements Kangaroo2's projection-based methods for modelling hybrid structures consisting of slender beams and cable networks. A selection of design modelling sketches...
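    The projection-based idea can be sketched in a few lines (a minimal stand-in, not Kangaroo2 itself): each constraint projects its vertices to a satisfying position, the projections are averaged, and loads and supports are reapplied each iteration. A hanging cable of fixed-length segments serves as the toy assembly.

```python
# Minimal projection-based constraint solver of the kind Kangaroo2
# popularized (illustrative only). Each edge-length constraint projects its
# endpoints to the rest length; the solver averages all projections, applies
# gravity, pins the anchors, and iterates until the assembly settles.
import numpy as np

def solve(points, edges, rest_len, anchors, iters=500, gravity=0.002):
    P = points.copy()
    for _ in range(iters):
        accum = np.zeros_like(P)
        count = np.zeros(len(P))
        for (a, b), L in zip(edges, rest_len):
            d = P[b] - P[a]
            corr = 0.5 * (np.linalg.norm(d) - L) * d / np.linalg.norm(d)
            accum[a] += P[a] + corr                # project edge to length L
            accum[b] += P[b] - corr
            count[[a, b]] += 1
        P = accum / count[:, None]                 # average the projections
        P[:, 1] -= gravity                         # external load (self-weight)
        for i, pos in anchors.items():             # pinned supports win
            P[i] = pos
    return P

n = 11
pts = np.column_stack([np.linspace(0, 1, n), np.zeros(n)])
edges = [(i, i + 1) for i in range(n - 1)]
rest = [1.2 / (n - 1)] * (n - 1)                   # 20% slack -> cable sags
anchors = {0: pts[0].copy(), n - 1: pts[-1].copy()}
print("midpoint after relaxation:", solve(pts, edges, rest, anchors)[n // 2])
```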

  15. Antibody structural modeling with prediction of immunoglobulin structure (PIGS)

    KAUST Repository

    Marcatili, Paolo

    2014-11-06

    © 2014 Nature America, Inc. All rights reserved. Antibodies (or immunoglobulins) are crucial for defending organisms from pathogens, but they are also key players in many medical, diagnostic and biotechnological applications. The ability to predict their structure and the specific residues involved in antigen recognition has several useful applications in all of these areas. Over the years, we have developed or collaborated in developing a strategy that enables researchers to predict the 3D structure of antibodies with a very satisfactory accuracy. The strategy is completely automated and extremely fast, requiring only a few minutes (~10 min on average) to build a structural model of an antibody. It is based on the concept of canonical structures of antibody loops and on our understanding of the way light and heavy chains pack together.

  16. Finite element model updating of natural fibre reinforced composite structure in structural dynamics

    Directory of Open Access Journals (Sweden)

    Sani M.S.M.

    2016-01-01

    Full Text Available Model updating is the process of adjusting certain parameters of a finite element model in order to reduce the discrepancy between the analytical predictions of the finite element (FE) model and experimental results. Finite element model updating is considered an important field of study, as practical applications of the finite element method often show discrepancies with test results. The aim of this research is to perform a model updating procedure on a composite structure and to improve the presumed geometrical and material properties of the tested composite structure in the finite element prediction. The composite structure concerned in this study is a kenaf fiber reinforced epoxy plate. Modal properties (natural frequencies, mode shapes, and damping ratios) of the kenaf fiber structure will be determined using both experimental modal analysis (EMA) and finite element analysis (FEA). In EMA, modal testing will be carried out using an impact hammer test, while normal mode analysis in FEA will be carried out using MSC Nastran/Patran software. Correlation of the data will be carried out before optimizing the data from FEA. Several parameters will be considered and selected for the model updating procedure.
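    Conceptually, the updating loop reduces to a small least-squares fit. The sketch below uses a thin square plate frequency formula with assumed mode constants and made-up "measured" frequencies; it is a hedged stand-in for the MSC Nastran/Patran workflow, not that workflow itself.

```python
# Conceptual model updating sketch: tune an uncertain material parameter
# of an analytical plate model until its natural frequencies match the
# measured (EMA) ones in a least-squares sense. All values illustrative.
import numpy as np
from scipy.optimize import least_squares

lam = np.array([3.49, 8.55, 21.4])               # mode constants (assumed)
a, h, rho, nu = 0.30, 0.003, 950.0, 0.33         # plate size, thickness,
                                                 # density, Poisson (assumed)

def frequencies(E):
    """Natural frequencies (Hz) of a square thin plate with stiffness E (Pa)."""
    D = E * h**3 / (12 * (1 - nu**2))            # flexural rigidity
    return lam / (2 * np.pi * a**2) * np.sqrt(D / (rho * h))

f_measured = np.array([41.2, 101.0, 252.5])      # "EMA results" (made up)

def residual(p):
    return frequencies(p[0] * 1e9) - f_measured  # p[0]: E in GPa

fit = least_squares(residual, x0=[3.0], bounds=(0.1, 100.0))
print("updated E = %.2f GPa" % fit.x[0])
print("predicted frequencies:", frequencies(fit.x[0] * 1e9))
```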

  17. Generalized Functional Linear Models With Semiparametric Single-Index Interactions

    KAUST Repository

    Li, Yehua

    2010-06-01

    We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.

  18. Generalized Functional Linear Models With Semiparametric Single-Index Interactions

    KAUST Repository

    Li, Yehua; Wang, Naisyin; Carroll, Raymond J.

    2010-01-01

    We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.

  19. Modeling Fission Product Sorption in Graphite Structures

    International Nuclear Information System (INIS)

    Szlufarska, Izabela; Morgan, Dane; Allen, Todd

    2013-01-01

    The goal of this project is to determine changes in adsorption and desorption of fission products to/from nuclear-grade graphite in response to a changing chemical environment. First, the project team will employ first-principles calculations and thermodynamic analysis to predict the stability of fission products on graphite in the presence of structural defects commonly observed in very high-temperature reactor (VHTR) graphites. Desorption rates will be determined as a function of partial pressure of oxygen and iodine, relative humidity, and temperature. They will then carry out experimental characterization to determine the statistical distribution of structural features. This structural information will yield distributions of binding sites to be used as an input for a sorption model. Sorption isotherms calculated under this project will contribute to understanding of the physical bases of the source terms that are used in higher-level codes that model fission product transport and retention in graphite. The project will include the following tasks: Perform structural characterization of the VHTR graphite to determine crystallographic phases, defect structures and their distribution, volume fraction of coke, and amount of sp2 versus sp3 bonding. This information will be used as guidance for ab initio modeling and as input for sorptivity models; Perform ab initio calculations of binding energies to determine stability of fission products on the different sorption sites present in nuclear graphite microstructures. The project will use density functional theory (DFT) methods to calculate binding energies in vacuum and in oxidizing environments. The team will also calculate stability of iodine complexes with fission products on graphite sorption sites; Model graphite sorption isotherms to quantify concentration of fission products in graphite. The binding energies will be combined with a Langmuir isotherm statistical model to predict the sorbed concentration of fission products
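    The final task, turning binding energies into sorbed concentrations via a Langmuir isotherm, can be sketched as follows; the site types, energies, fractions and partial pressure are illustrative placeholders, not project results.

```python
# Hedged sketch of the Langmuir-isotherm step: combine DFT-style binding
# energies with a Langmuir model, averaging over an assumed distribution
# of sorption-site types. All numerical values are illustrative.
import numpy as np

KB = 8.617e-5                                    # Boltzmann constant, eV/K

def langmuir_coverage(E_bind, T, p, p0=1.0):
    """Fractional occupancy of a site with binding energy E_bind (eV)."""
    K = np.exp(E_bind / (KB * T))                # site equilibrium constant
    return K * (p / p0) / (1 + K * (p / p0))

# Assumed site inventory: (binding energy in eV, fraction of sites)
sites = [(0.8, 0.70),   # basal-plane sites
         (1.3, 0.25),   # edge/defect sites
         (1.9, 0.05)]   # vacancy traps

def sorbed_fraction(T, p):
    return sum(f * langmuir_coverage(E, T, p) for E, f in sites)

for T in (800.0, 1200.0, 1600.0):                # VHTR-relevant temperatures
    print(f"T={T:.0f} K: sorbed fraction = {sorbed_fraction(T, 1e-6):.3e}")
```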

  20. Factor structure and measurement invariance across various demographic groups and over time for the PHQ-9 in primary care patients in Spain.

    Directory of Open Access Journals (Sweden)

    César González-Blanch

    Full Text Available The Patient Health Questionnaire (PHQ-9) is a widely-used screening tool for depression in primary care settings. The purpose of the present study is to identify the factor structure of the PHQ-9 and to examine the measurement invariance of this instrument across different sociodemographic groups and over time in a sample of primary care patients in Spain. Data came from 836 primary care patients enrolled in a randomized controlled trial (PsicAP study) and a subsample of 218 patients who participated in a follow-up assessment at 3 months. Confirmatory factor analysis (CFA) was used to test one- and two-factor structures identified in previous studies. Analyses of multiple-group invariance were conducted to determine the extent to which the factor structure is comparable across various demographic groups (i.e., gender, age, marital status, level of education, and employment situation) and over time. Both one-factor and two-factor re-specified models met all the pre-established fit criteria. However, because the factors identified in the two-factor model were highly correlated (r = .86), the one-factor model was preferred for its parsimony. Multi-group CFA indicated measurement invariance across different demographic groups and across time. The present findings suggest that physicians in Spain can use the PHQ-9 to obtain a global score for depression severity in different demographic groups and to reliably monitor changes over time in the primary care setting.

  1. Three Dimensional Response Spectrum Soil Structure Modeling Versus Conceptual Understanding To Illustrate Seismic Response Of Structures

    International Nuclear Information System (INIS)

    Touqan, Abdul Razzaq

    2008-01-01

    Present methods of analysis and mathematical modeling contain so many assumptions that they are separated from reality, and this represents a defect in design that makes it difficult to analyze reasons for failure. Three dimensional (3D) modeling is far superior to 1D or 2D modeling; static analysis deviates from the true nature of earthquake load, which is "a dynamic punch"; and conflicting assumptions exist between structural engineers (who assume flexible structures on rigid block foundations) and geotechnical engineers (who assume flexible foundations supporting rigid structures). Thus a 3D dynamic soil-structure interaction model removes many of these assumptions and comes closer to reality. However, such a model cannot be analyzed analytically; we need to anatomize and analogize it. This paper presents a conceptual (analogical) 1D model for soil-structure interaction and clarifies it by comparing its outcome with 3D dynamic soil-structure finite element analyses of two structures. The aim is to focus on how to calculate the period of the structure and to investigate the effect of stiffness variation on soil-structure interaction
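    The conceptual 1D model boils down to springs in series: the structure spring, the foundation sway spring and the rocking spring combine into an effective stiffness, which lengthens the fixed-base period. A minimal numeric sketch with assumed values follows; the series-stiffness formula is standard, the numbers are illustrative.

```python
# Effective period of a 1-DOF structure on flexible soil: structure,
# sway and rocking stiffnesses act in series, so the effective stiffness
# drops and the period lengthens relative to the fixed-base case.
import numpy as np

def ssi_period(m, k_struct, k_sway, k_rock, height):
    """Effective period (s) of a 1-DOF structure with foundation springs."""
    k_eff = 1.0 / (1.0 / k_struct + 1.0 / k_sway + height**2 / k_rock)
    return 2.0 * np.pi * np.sqrt(m / k_eff)

m, k_s, h = 2.0e5, 8.0e7, 6.0                    # kg, N/m, m (assumed values)
print("fixed-base period   :", 2 * np.pi * np.sqrt(m / k_s))
print("with soil flexibility:",
      ssi_period(m, k_s, k_sway=4.0e8, k_rock=6.0e9, height=h))
```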

  2. Flood forecasting using a fully distributed model: application of the TOPKAPI model to the Upper Xixian Catchment

    Directory of Open Access Journals (Sweden)

    Z. Liu

    2005-01-01

    Full Text Available TOPKAPI is a physically-based, fully distributed hydrological model with a simple and parsimonious parameterisation. The original TOPKAPI is structured around five modules that represent evapotranspiration, snowmelt, soil water, surface water and channel water, respectively. Percolation to deep soil layers was ignored in the old version of the TOPKAPI model, since it was not important in the basins to which the model was originally applied. Based on published literature, this study developed a new version of the TOPKAPI model, in which new modules for interception, infiltration, percolation, groundwater flow and lake/reservoir routing are included. This paper presents an application study that makes a first attempt to derive information on the topography, soil and land use types from public domains through the internet for a case study Chinese catchment - the Upper Xixian catchment in the Huaihe River basin, with an area of about 10,000 km² - and to apply the new version of TOPKAPI to the catchment for flood simulation. A model parameter value adjustment was performed using six months of the 1998 dataset. Calibration did not use a curve fitting process, but was chiefly based upon moderate variations of parameter values from those estimated on physical grounds, as is common in traditional calibration. The hydrometeorological dataset of 2002 was then used to validate the model, both against the outlet discharge and at an internal gauging station. Finally, to complete the model performance analysis, parameter uncertainty and its effects on predictive uncertainty were also assessed by estimating a posterior parameter probability density via Bayesian inference.

  3. Mechanical modeling of the growth of salt structures

    Energy Technology Data Exchange (ETDEWEB)

    Alfaro, Ruben Alberto Mazariegos [Texas A & M Univ., College Station, TX (United States)

    1993-05-01

    A 2D numerical model for studying the morphology and history of salt structures by way of computer simulations is presented. The model is based on conservation laws for physical systems, a fluid marker equation to keep track of the salt/sediments interface, and two constitutive laws for rocksalt. When buoyancy alone is considered, the fluid-assisted diffusion model predicts evolution of salt structures 2.5 times faster than the power-law creep model. Both rheological laws predict strain rates of the order of 4.0 × 10⁻¹⁵ s⁻¹ for similar structural maturity levels of salt structures. Equivalent stresses and viscosities predicted by the fluid-assisted diffusion law are 10² times smaller than those predicted by the power-law creep rheology. Use of East Texas Basin sedimentation rates and power-law creep rheology indicates that differential loading is an effective mechanism to induce perturbations that amplify and evolve into mature salt structures, similar to those observed under natural geological conditions.
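    For orientation, strain-rate magnitudes of this order can be checked against a standard power-law creep relation for rocksalt; the parameter values below are illustrative literature-style numbers, not those used in the dissertation.

```python
# Back-of-envelope power-law creep rate for rocksalt:
#   strain_rate = A * sigma^n * exp(-Q / (R * T))
# with illustrative (not dissertation) values of A, n and Q.
import numpy as np

R = 8.314                                        # gas constant, J/(mol K)

def powerlaw_strain_rate(sigma, T, A=8.1e-5, n=4.5, Q=51600.0):
    """Steady-state creep rate (1/s); sigma in MPa, T in kelvin."""
    return A * sigma**n * np.exp(-Q / (R * T))

for sigma in (0.05, 0.1, 0.2):                   # low deviatoric stress, MPa
    print(f"sigma = {sigma} MPa:", powerlaw_strain_rate(sigma, T=323.0), "1/s")
```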

  4. A generative, probabilistic model of local protein structure

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Mardia, Kanti V.; Taylor, Charles C.

    2008-01-01

    Despite significant progress in recent years, protein structure prediction maintains its status as one of the prime unsolved problems in computational biology. One of the key remaining challenges is an efficient probabilistic exploration of the structural space that correctly reflects the relative...... conformational stabilities. Here, we present a fully probabilistic, continuous model of local protein structure in atomic detail. The generative model makes efficient conformational sampling possible and provides a framework for the rigorous analysis of local sequence-structure correlations in the native state...

  5. Impact damages modeling in laminated composite structures

    Directory of Open Access Journals (Sweden)

    Kreculj Dragan D.

    2014-01-01

    Full Text Available Laminated composites have an important application in modern engineering structures. They are characterized by extraordinary properties, such as high strength, high stiffness and light weight. Nevertheless, a serious obstacle to more widespread use of these materials is their sensitivity to impact loads. Impacts cause the initiation and development of certain types of damage. Failures that occur in laminated composite structures can be intralaminar or interlaminar. To date, many simulation models have been developed for the analysis of impact damage in laminates. With a certain accuracy, those models can replace real and expensive testing of laminated structures. Using specialized software, the damage parameters and distributions can be determined (under certain conditions) for laminate structures. Numerical simulation of impact on composite laminates thus yields results valid for the analysis of these structures.

  6. Sparse Decomposition and Modeling of Anatomical Shape Variation

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Rostrup, Egill; Ryberg, Charlotte

    2007-01-01

    counterparts if constructed carefully. In most medical applications, models are required to have both good statistical performance and a relevant clinical interpretation to be of value. Morphometry of the corpus callosum is one illustrative example. This paper presents a method for relating spatial features...... to clinical outcome data. A set of parsimonious variables is extracted using sparse principal component analysis, producing simple yet characteristic features. The relation of these variables with clinical data is then established using a regression model. The result may be visualized as patterns...... two alternative techniques, one where features are derived using a model-based wavelet approach, and one where the original variables are regressed directly on the outcome....
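    A compact sketch of that pipeline, with synthetic data standing in for the corpus callosum shape variables (illustrative only): sparse principal component analysis extracts a few localized features, which are then regressed on a clinical outcome.

```python
# Illustrative pipeline: sparse PCA for parsimonious shape features,
# followed by a regression relating those features to clinical outcome.
# Synthetic data stand in for the real shape variables.
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_subjects, n_shape_vars = 80, 200
X = rng.normal(size=(n_subjects, n_shape_vars))  # stand-in shape variables
outcome = X[:, :10].mean(axis=1) + 0.1 * rng.normal(size=n_subjects)

spca = SparsePCA(n_components=5, alpha=1.0, random_state=0)
features = spca.fit_transform(X)                 # parsimonious shape features
print("nonzero loadings per component:",
      (spca.components_ != 0).sum(axis=1))       # sparsity -> localized patterns

reg = LinearRegression().fit(features, outcome)  # relate features to outcome
print("R^2 on training data:", reg.score(features, outcome))
```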

  7. Outlier Detection in Structural Time Series Models

    DEFF Research Database (Denmark)

    Marczak, Martyna; Proietti, Tommaso

    investigate via Monte Carlo simulations how this approach performs for detecting additive outliers and level shifts in the analysis of nonstationary seasonal time series. The reference model is the basic structural model, featuring a local linear trend, possibly integrated of order two, stochastic seasonality......Structural change affects the estimation of economic signals, like the underlying growth rate or the seasonally adjusted series. An important issue, which has attracted a great deal of attention also in the seasonal adjustment literature, is its detection by an expert procedure. The general-to-specific approach to the detection of structural change, currently implemented in Autometrics via indicator saturation, has proven to be both practical and effective in the context of stationary dynamic regression models and unit-root autoregressions. By focusing on impulse- and step-indicator saturation, we

  8. Intelligent structural optimization: Concept, Model and Methods

    International Nuclear Information System (INIS)

    Lu, Dagang; Wang, Guangyuan; Peng, Zhang

    2002-01-01

    Structural optimization has many characteristics of Soft Design, and so it is necessary to apply the experience of human experts to solving the uncertain and multidisciplinary optimization problems that arise in large-scale and complex engineering systems. With the development of artificial intelligence (AI) and computational intelligence (CI), the theory of structural optimization is now developing in the direction of intelligent optimization. In this paper, a concept of Intelligent Structural Optimization (ISO) is proposed. A design process model of ISO is then put forward, in which each design sub-process model is discussed. Finally, the design methods of ISO are presented

  9. VISCOELASTIC STRUCTURAL MODEL OF ASPHALT CONCRETE

    Directory of Open Access Journals (Sweden)

    V. Bogomolov

    2016-06-01

    Full Text Available A viscoelastic rheological model of asphalt concrete based on the generalized Kelvin model is presented. A mathematical model of the viscoelastic behavior of asphalt concrete has been developed that can be used in strength and rutting calculations for the asphalt concrete upper layers of non-rigid pavements. It has been proved that the Burgers structural model does not fully meet all the requirements for asphalt concrete.
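    The creep compliance of a generalized Kelvin (Kelvin chain) model, which underlies such a formulation, is easy to write down; the moduli and retardation times below are illustrative, not calibrated asphalt values.

```python
# Generalized Kelvin (Kelvin chain) creep compliance:
#   J(t) = 1/E0 + sum_i (1/E_i) * (1 - exp(-t / tau_i))
# i.e. an instantaneous elastic term plus N retarded Kelvin elements.
import numpy as np

def kelvin_chain_compliance(t, E0, branches):
    """Creep compliance (1/MPa) for branches = [(E_i in MPa, tau_i in s)]."""
    J = np.ones_like(t) / E0
    for E_i, tau_i in branches:
        J += (1.0 - np.exp(-t / tau_i)) / E_i
    return J

t = np.linspace(0.0, 100.0, 6)                   # seconds
branches = [(5.0e3, 0.5), (2.0e3, 5.0), (1.0e3, 50.0)]  # illustrative values
sigma = 0.7                                      # constant stress, MPa
strain = sigma * kelvin_chain_compliance(t, E0=3.0e3, branches=branches)
print(np.round(strain * 1e3, 4))                 # creep strain, millistrain
```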

  10. Modelling road accidents: An approach using structural time series

    Science.gov (United States)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
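    Such a specification (local level plus a seasonal component, selected by AIC) is directly available in statsmodels; below is a minimal sketch with synthetic monthly counts standing in for the accident series.

```python
# Structural (unobserved components) time series sketch: local level plus
# monthly seasonality, fitted by maximum likelihood; AIC supports model
# selection and forecast() gives the one-year-ahead prediction used for
# validation. Synthetic data stand in for the Malaysian accident counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = 144
season = 10 * np.sin(2 * np.pi * np.arange(months) / 12)
level = np.cumsum(rng.normal(0, 1, months))      # slowly drifting level
y = 400 + level + season + rng.normal(0, 3, months)

model = sm.tsa.UnobservedComponents(y, level="local level", seasonal=12)
res = model.fit(disp=False)
print("AIC:", res.aic)                           # basis for model selection
forecast = res.forecast(steps=12)                # one-year-ahead prediction
print(forecast[:3])
```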

  11. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Directory of Open Access Journals (Sweden)

    Catherine L Worth

    Full Text Available BACKGROUND: Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. METHODOLOGY: We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. CONCLUSIONS: The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying

  12. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Science.gov (United States)

    Worth, Catherine L; Kleinau, Gunnar; Krause, Gerd

    2009-09-16

    Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying suitable templates for GPCR homology modelling that will

  13. Modelling the harmonized tertiary Institutions Salary Structure ...

    African Journals Online (AJOL)

    This paper analyses the Harmonized Tertiary Institution Salary Structure (HATISS IV) used in Nigeria. The irregularities in the structure are highlighted. A model that assumes a polynomial trend for the zero step salary, and exponential trend for the incremental rates, is suggested for the regularization of the structure.
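    The suggested regularization is easy to prototype: fit a polynomial to the zero-step salary across grade levels and an exponential trend to the incremental rates. The figures below are invented for illustration and are not HATISS IV values.

```python
# Sketch of the suggested regularization: polynomial trend for the
# zero-step (entry) salary, exponential trend for the incremental rates.
# All salary figures are made up for illustration.
import numpy as np

grades = np.arange(1, 16)
zero_step = 1e4 + 800 * grades + 35 * grades**2      # made-up entry salaries
incr_rate = 0.02 * np.exp(0.08 * grades)             # made-up increment rates

poly = np.polynomial.Polynomial.fit(grades, zero_step, deg=2)
print("polynomial trend for zero-step salary:", poly.convert().coef)

# exponential trend via a log-linear least-squares fit
b, log_a = np.polyfit(grades, np.log(incr_rate), 1)
print("incremental rate trend: %.4f * exp(%.3f * grade)" % (np.exp(log_a), b))

def salary(g, s):
    """Regularized scale: salary at grade g after s annual increments."""
    return poly(g) * (1 + np.exp(log_a) * np.exp(b * g)) ** s

print("grade 10, step 3:", round(salary(10, 3), 2))
```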

  14. Structural Equation Modeling of Multivariate Time Series

    Science.gov (United States)

    du Toit, Stephen H. C.; Browne, Michael W.

    2007-01-01

    The covariance structure of a vector autoregressive process with moving average residuals (VARMA) is derived. It differs from other available expressions for the covariance function of a stationary VARMA process and is compatible with current structural equation methodology. Structural equation modeling programs, such as LISREL, may therefore be…

  15. Multiple commodities in statistical microeconomics: Model and market

    Science.gov (United States)

    Baaquie, Belal E.; Yu, Miao; Du, Xin

    2016-11-01

    A statistical generalization of microeconomics was made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed and it was shown that market data provide strong support for the statistical microeconomic description of commodity prices. Here the case of multiple commodities is studied, and a parsimonious generalization of the single-commodity model is made for the multiple-commodity case. Market data show that the generalization can accurately model the simultaneous correlation functions of up to four commodities. To accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, one that is independent of the mainstream formulation of microeconomics.

  16. A Paper Model of DNA Structure and Replication.

    Science.gov (United States)

    Sigismondi, Linda A.

    1989-01-01

    A paper model which is designed to give students a hands-on experience during lecture and blackboard instruction on DNA structure is provided. A list of materials, paper patterns, and procedures for using the models to teach DNA structure and replication are given. (CW)

  17. Structure and modeling of turbulence

    International Nuclear Information System (INIS)

    Novikov, E.A.

    1995-01-01

    The "vortex strings" scale l_s ∼ L·Re^(-3/10) (L - external scale, Re - Reynolds number) is suggested as a grid scale for the large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments and infinitely divisible distributions. The major request from the energy, naval, aerospace and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for the large-eddy simulations (LES)

  18. Aerodynamic-structural model of offwind yacht sails

    Science.gov (United States)

    Mairs, Christopher M.

    An aerodynamic-structural model of offwind yacht sails was created that is useful in predicting sail forces. Two sails were examined experimentally and computationally at several wind angles to explore a variety of flow regimes. The accuracy of the numerical solutions was measured by comparing to experimental results. The two sails examined were a Code 0 and a reaching asymmetric spinnaker. During experiment, balance, wake, and sail shape data were recorded for both sails in various configurations. Two computational steps were used to evaluate the computational model. First, an aerodynamic flow model that includes viscosity effects was used to examine the experimental flying shapes that were recorded. Second, the aerodynamic model was combined with a nonlinear, structural, finite element analysis (FEA) model. The aerodynamic and structural models were used iteratively to predict final flying shapes of offwind sails, starting with the design shapes. The Code 0 has relatively low camber and is used at small angles of attack. It was examined experimentally and computationally at a single angle of attack in two trim configurations, a baseline and overtrimmed setting. Experimentally, the Code 0 was stable and maintained large flow attachment regions. The digitized flying shapes from experiment were examined in the aerodynamic model. Force area predictions matched experimental results well. When the aerodynamic-structural tool was employed, the predictive capability was slightly worse. The reaching asymmetric spinnaker has higher camber and operates at higher angles of attack than the Code 0. Experimentally and computationally, it was examined at two angles of attack. Like the Code 0, at each wind angle, baseline and overtrimmed settings were examined. Experimentally, sail oscillations and large flow detachment regions were encountered. The computational analysis began by examining the experimental flying shapes in the aerodynamic model. In the baseline setting, the

  19. Nonlinear structural mechanics theory, dynamical phenomena and modeling

    CERN Document Server

    Lacarbonara, Walter

    2013-01-01

    Nonlinear Structural Mechanics: Theory, Dynamical Phenomena and Modeling offers a concise, coherent presentation of the theoretical framework of nonlinear structural mechanics, computational methods, applications, parametric investigations of nonlinear phenomena and their mechanical interpretation towards design. The theoretical and computational tools that enable the formulation, solution, and interpretation of nonlinear structures are presented in a systematic fashion so as to gradually attain an increasing level of complexity of structural behaviors, under the prevailing assumptions on the geometry of deformation, the constitutive aspects and the loading scenarios. Readers will find a treatment of the foundations of nonlinear structural mechanics towards advanced reduced models, unified with modern computational tools in the framework of the prominent nonlinear structural dynamic phenomena while tackling both the mathematical and applied sciences. Nonlinear Structural Mechanics: Theory, Dynamical Phenomena...

  20. Predictive accuracy of the PanCan lung cancer risk prediction model - external validation based on CT from the Danish Lung Cancer Screening Trial

    International Nuclear Information System (INIS)

    Winkler Wille, Mathilde M.; Dirksen, Asger; Riel, Sarah J. van; Jacobs, Colin; Scholten, Ernst T.; Ginneken, Bram van; Saghir, Zaigham; Pedersen, Jesper Holst; Hohwue Thomsen, Laura; Skovgaard, Lene T.

    2015-01-01

    Lung cancer risk models should be externally validated to test generalizability and clinical usefulness. The Danish Lung Cancer Screening Trial (DLCST) is a population-based prospective cohort study used to assess the discriminative performance of the PanCan models. From the DLCST database, 1,152 nodules from 718 participants were included. Parsimonious and full PanCan risk prediction models were applied to DLCST data, and the coefficients of the model were also recalculated using DLCST data. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to evaluate risk discrimination. AUCs of 0.826-0.870 were found for DLCST data based on the PanCan risk prediction models. In the DLCST, age and family history were significant predictors (p = 0.001 and p = 0.013). Female sex was not confirmed to be associated with a higher risk of lung cancer; in fact, opposing effects of sex were observed in the two cohorts, with female sex appearing to lower the risk (p = 0.047 and p = 0.040) in the DLCST. High risk discrimination was validated in the DLCST cohort, mainly determined by nodule size. Age and family history of lung cancer were significant predictors and could be included in the parsimonious model. Sex appears to be a less useful predictor. (orig.)
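
    The discrimination check at the heart of such an external validation can be reproduced with standard tools. Below is a minimal, hypothetical sketch (simulated data and invented variable names such as risk_score and is_cancer; not the DLCST data or the actual PanCan coefficients) of computing an external-validation AUC for a frozen risk model:

    ```python
    # Hypothetical external validation of a frozen risk model's discrimination.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Simulated external cohort: nodule size (mm) drives both the published
    # "model" score and the outcome, mimicking size-dominated discrimination.
    size_mm = rng.gamma(shape=2.0, scale=4.0, size=1152)
    is_cancer = rng.random(1152) < 1 / (1 + np.exp(-(0.25 * size_mm - 5.0)))
    risk_score = 1 / (1 + np.exp(-(0.22 * size_mm - 4.5)))  # frozen model

    auc = roc_auc_score(is_cancer, risk_score)
    print(f"external-validation AUC: {auc:.3f}")
    ```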

  1. Predictive accuracy of the PanCan lung cancer risk prediction model - external validation based on CT from the Danish Lung Cancer Screening Trial

    Energy Technology Data Exchange (ETDEWEB)

    Winkler Wille, Mathilde M.; Dirksen, Asger [Gentofte Hospital, Department of Respiratory Medicine, Hellerup (Denmark); Riel, Sarah J. van; Jacobs, Colin; Scholten, Ernst T.; Ginneken, Bram van [Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Saghir, Zaigham [Herlev Hospital, Department of Respiratory Medicine, Herlev (Denmark); Pedersen, Jesper Holst [Copenhagen University Hospital, Department of Thoracic Surgery, Rigshospitalet, Koebenhavn Oe (Denmark); Hohwue Thomsen, Laura [Hvidovre Hospital, Department of Respiratory Medicine, Hvidovre (Denmark); Skovgaard, Lene T. [University of Copenhagen, Department of Biostatistics, Koebenhavn Oe (Denmark)

    2015-10-15

    Lung cancer risk models should be externally validated to test generalizability and clinical usefulness. The Danish Lung Cancer Screening Trial (DLCST) is a population-based prospective cohort study used to assess the discriminative performance of the PanCan models. From the DLCST database, 1,152 nodules from 718 participants were included. Parsimonious and full PanCan risk prediction models were applied to DLCST data, and the coefficients of the model were also recalculated using DLCST data. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to evaluate risk discrimination. AUCs of 0.826-0.870 were found for DLCST data based on the PanCan risk prediction models. In the DLCST, age and family history were significant predictors (p = 0.001 and p = 0.013). Female sex was not confirmed to be associated with a higher risk of lung cancer; in fact, opposing effects of sex were observed in the two cohorts, with female sex appearing to lower the risk (p = 0.047 and p = 0.040) in the DLCST. High risk discrimination was validated in the DLCST cohort, mainly determined by nodule size. Age and family history of lung cancer were significant predictors and could be included in the parsimonious model. Sex appears to be a less useful predictor. (orig.)

  2. The infinite sites model of genome evolution.

    Science.gov (United States)

    Ma, Jian; Ratan, Aakrosh; Raney, Brian J; Suh, Bernard B; Miller, Webb; Haussler, David

    2008-09-23

    We formalize the problem of recovering the evolutionary history of a set of genomes that are related to an unseen common ancestor genome by operations of speciation, deletion, insertion, duplication, and rearrangement of segments of bases. The problem is examined in the limit as the number of bases in each genome goes to infinity. In this limit, the chromosomes are represented by continuous circles or line segments. For such an infinite-sites model, we present a polynomial-time algorithm to find the most parsimonious evolutionary history of any set of related present-day genomes.

  3. Parsimonious data

    DEFF Research Database (Denmark)

    Kristensen, Jakob Baek; Albrechtsen, Thomas; Dahl-Nielsen, Emil

    2017-01-01

    This study shows how liking politicians’ public Facebook posts can be used as an accurate measure for predicting present-day voter intention in a multiparty system. We highlight that a few, but selective digital traces produce prediction accuracies that are on par or even greater than most curren...

  4. Stability and the structure of continuous-time economic models

    NARCIS (Netherlands)

    Nieuwenhuis, H.J.; Schoonbeek, L.

    In this paper we investigate the relationship between the stability of macroeconomic, or macroeconometric, continuous-time models and the structure of the matrices appearing in these models. In particular, we concentrate on dominant-diagonal structures. We derive general stability results for models

  5. Exploring the Subtleties of Inverse Probability Weighting and Marginal Structural Models.

    Science.gov (United States)

    Breskin, Alexander; Cole, Stephen R; Westreich, Daniel

    2018-05-01

    Since being introduced to epidemiology in 2000, marginal structural models have become a commonly used method for causal inference in a wide range of epidemiologic settings. In this brief report, we aim to explore three subtleties of marginal structural models. First, we distinguish marginal structural models from the inverse probability weighting estimator, and we emphasize that marginal structural models are not only for longitudinal exposures. Second, we explore the meaning of the word "marginal" in "marginal structural model." Finally, we show that the specification of a marginal structural model can have important implications for the interpretation of its parameters. Each of these concepts has important implications for the use and understanding of marginal structural models, and thus providing detailed explanations of them may lead to better practices for the field of epidemiology.
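
    To make the distinction between the marginal structural model and the inverse probability weighting estimator concrete, here is a minimal simulated sketch (all variable names and data are illustrative assumptions, not from the paper) of fitting a point-exposure MSM by IPW with stabilized weights:

    ```python
    # IPW estimation of a point-exposure MSM, E[Y^a] = b0 + b1*a (simulated).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 5000
    L = rng.normal(size=n)                                     # confounder
    A = (rng.random(n) < 1 / (1 + np.exp(-L))).astype(float)   # exposure
    Y = 1.0 * A + 2.0 * L + rng.normal(size=n)                 # true effect = 1

    # Propensity model for stabilized weights sw = P(A=a) / P(A=a | L)
    ps = sm.Logit(A, sm.add_constant(L)).fit(disp=0).predict(sm.add_constant(L))
    p_marg = A.mean()
    sw = np.where(A == 1, p_marg / ps, (1 - p_marg) / (1 - ps))

    # The weighted regression of Y on A estimates the MSM parameters.
    msm = sm.WLS(Y, sm.add_constant(A), weights=sw).fit()
    print(msm.params)  # intercept ~ E[Y^0], slope ~ causal effect ~ 1.0
    ```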

  6. A Structural Modeling Approach to a Multilevel Random Coefficients Model.

    Science.gov (United States)

    Rovine, Michael J.; Molenaar, Peter C. M.

    2000-01-01

    Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)

  7. Expectancy-Violation and Information-Theoretic Models of Melodic Complexity

    Directory of Open Access Journals (Sweden)

    Tuomas Eerola

    2016-07-01

    The present study assesses two types of models for melodic complexity: one based on expectancy violations and the other related to an information-theoretic account of redundancy in music. Seven different datasets spanning artificial sequences, folk songs, and pop songs were used to refine and assess the models. The refinement eliminated unnecessary components from both types of models. The final analysis pitted three variants of the two model types against each other and explained 46-74% of the variance in the ratings across the datasets. The most parsimonious models were identified with an information-theoretic criterion, which suggested that the simplified expectancy-violation models were the most efficient for these sets of data. However, the differences between all optimized models were subtle in terms of both performance and simplicity.
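
    As an illustration of picking the most parsimonious model with an information-theoretic criterion, the following sketch (simulated data and invented predictor names; not the study's datasets or features) compares hypothetical model variants by AIC:

    ```python
    # Comparing model variants by AIC on simulated ratings data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 200
    expectancy_violation = rng.normal(size=n)   # hypothetical predictor
    entropy = rng.normal(size=n)                # hypothetical predictor
    ratings = 0.8 * expectancy_violation + rng.normal(scale=0.5, size=n)

    variants = {
        "expectancy": np.column_stack([expectancy_violation]),
        "information": np.column_stack([entropy]),
        "combined": np.column_stack([expectancy_violation, entropy]),
    }
    for name, X in variants.items():
        fit = sm.OLS(ratings, sm.add_constant(X)).fit()
        print(f"{name:12s} AIC={fit.aic:7.1f}  R2={fit.rsquared:.2f}")
    ```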

  8. Multivariate spatial Gaussian mixture modeling for statistical clustering of hemodynamic parameters in functional MRI

    International Nuclear Information System (INIS)

    Fouque, A.L.; Ciuciu, Ph.; Risser, L.; Fouque, A.L.; Ciuciu, Ph.; Risser, L.

    2009-01-01

    In this paper, a novel statistical parcellation of intra-subject functional MRI (fMRI) data is proposed. The key idea is to identify functionally homogeneous regions of interest from their hemodynamic parameters. To this end, a non-parametric voxel-based estimation of the hemodynamic response function is performed as a prerequisite. The extracted hemodynamic features are then used as input data for fitting a Multivariate Spatial Gaussian Mixture Model (MSGMM). The goal of the spatial aspect is to favor the recovery of connected components in the mixture. Our statistical clustering approach is original in the sense that it extends existing work done on univariate spatially regularized Gaussian mixtures. A specific Gibbs sampler is derived to account for different covariance structures in the feature space. On realistic artificial fMRI datasets, it is shown that our algorithm is helpful for identifying a parsimonious functional parcellation required in the context of joint detection-estimation of brain activity. This allows us to overcome the classical assumption of spatial stationarity of the BOLD signal model. (authors)
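
    A minimal non-spatial analogue of this clustering step can be sketched as follows (the paper's spatial regularization and tailored Gibbs sampler are not reproduced here; the feature values and their interpretation are simulated assumptions):

    ```python
    # Non-spatial baseline: clustering voxel-wise hemodynamic features with a
    # multivariate Gaussian mixture.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    # e.g. per-voxel (response delay, dispersion, amplitude) features
    features = np.vstack([
        rng.multivariate_normal([5.0, 1.0, 0.8], np.diag([0.2, 0.05, 0.1]), 400),
        rng.multivariate_normal([6.5, 1.5, 0.2], np.diag([0.3, 0.08, 0.05]), 600),
    ])

    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(features)
    print(np.bincount(labels))  # sizes of the recovered parcels
    ```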

  9. The Use of Parsimonious Questionnaires in Occupational Health Surveillance: Psychometric Properties of the Short Italian Version of the Effort/Reward Imbalance Questionnaire

    Directory of Open Access Journals (Sweden)

    Nicola Magnavita

    2012-01-01

    Purpose. To enable a parsimonious measurement of workplace psychosocial stress in routine occupational health surveillance, this study tests the psychometric properties of a short version of the original Italian effort-reward imbalance (ERI) questionnaire. Methods. 1,803 employees (63 percent women) from 19 service companies in the Italian region of Latium participated in a cross-sectional survey containing the short version of the ERI questionnaire (16 items) and questions related to self-reported health, musculoskeletal complaints and job satisfaction. Exploratory factor analysis, internal consistency of scales and criterion validity were utilized. Results. The internal consistency of scales was satisfactory. Principal component analysis enabled identification of the model's main factors. Significant associations with health and job satisfaction in the majority of cases support the notion of criterion validity. A high score on the effort-reward ratio was associated with an elevated odds ratio (OR = 2.71; 95% CI 1.86–3.95) of musculoskeletal complaints in the upper arm. Conclusions. The short form of the Italian ERI questionnaire provides a psychometrically useful tool for routine occupational health surveillance, although further validation is recommended.

  10. Categorical model of structural operational semantics for imperative language

    Directory of Open Access Journals (Sweden)

    William Steingartner

    2016-12-01

    Definition of a programming language consists of the formal definition of its syntax and semantics. One of the most popular semantic methods used in various stages of software engineering is structural operational semantics. It describes program behavior in the form of state changes after execution of elementary steps of the program. This feature makes structural operational semantics useful for the implementation of programming languages and also for verification purposes. In our paper we present a new approach to structural operational semantics. We model the behavior of programs in a category of states, where objects are states (an abstraction of computer memory) and morphisms model state changes, i.e., the execution of a program in elementary steps. The advantage of using a categorical model is its exact mathematical structure with many useful proven properties and its graphical illustration of program behavior as a path, i.e., a composition of morphisms. Our approach is able to accentuate the dynamics of structural operational semantics. For simplicity, we assume that data are intuitively typed. Our model is not only a new model of the structural operational semantics of imperative programming languages but, thanks to its visual clarity, can also serve educational purposes.
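
    The idea of morphisms-as-state-changes can be illustrated outside of category theory proper. The toy sketch below (an informal analogue, not the paper's formal construction) represents states as dictionaries and a program run as the composition of elementary state-transforming functions:

    ```python
    # States as dictionaries (memory: variable -> value); statements as
    # functions State -> State; a program run as their composition.
    from functools import reduce

    def assign(var, expr):
        """Morphism for 'var := expr(state)'."""
        return lambda s: {**s, var: expr(s)}

    def compose(*morphisms):
        """Composition of morphisms = execution in elementary steps."""
        return reduce(lambda f, g: (lambda s: g(f(s))), morphisms)

    program = compose(
        assign("x", lambda s: 3),
        assign("y", lambda s: s["x"] + 4),
        assign("x", lambda s: s["x"] * s["y"]),
    )
    print(program({}))  # {'x': 21, 'y': 7}
    ```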

  11. Modelling oil price volatility with structural breaks

    International Nuclear Information System (INIS)

    Salisu, Afees A.; Fasanya, Ismail O.

    2013-01-01

    In this paper, we provide two main innovations: (i) we analyze oil prices of two prominent markets, namely West Texas Intermediate (WTI) and Brent, using the two recently developed tests by Narayan and Popp (2010) and Liu and Narayan (2010), both of which allow for two structural breaks in the data series; and (ii) the latter method is modified to include both symmetric and asymmetric volatility models. We identify two structural breaks that occur in 1990 and 2008, which coincidentally correspond to the Iraqi/Kuwait conflict and the global financial crisis, respectively. We find evidence of persistence and leverage effects in the oil price volatility. While further extensions can be pursued, the consideration of asymmetric effects as well as structural breaks should not be jettisoned when modelling oil price volatility. - Highlights: ► We analyze oil price volatility using NP (2010) and LN (2010) tests. ► We modify the LN (2010) to account for leverage effects in oil price. ► We find two structural breaks that reflect major global crises in the oil market. ► We find evidence of persistence and leverage effects in oil price volatility. ► Leverage effects and structural breaks are fundamental in oil price modelling.
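
    A rough analogue of such a specification can be sketched with the Python arch package (this is an assumed, simplified setup with simulated returns and illustrative break dates, not the authors' exact modified LN (2010) model):

    ```python
    # GJR-GARCH for oil returns with mean-equation dummies for two break dates.
    import numpy as np
    import pandas as pd
    from arch import arch_model

    rng = np.random.default_rng(4)
    dates = pd.bdate_range("1986-01-02", periods=6000)
    returns = pd.Series(rng.standard_t(df=6, size=6000), index=dates)

    # Dummy regressors switching on at the hypothesized 1990 and 2008 breaks
    breaks = pd.DataFrame(
        {"post1990": (dates >= "1990-08-01").astype(float),
         "post2008": (dates >= "2008-09-01").astype(float)},
        index=dates,
    )

    # o=1 adds the asymmetric (leverage) term of the GJR-GARCH model
    am = arch_model(returns, x=breaks, mean="LS", vol="GARCH", p=1, o=1, q=1)
    res = am.fit(disp="off")
    print(res.summary())
    ```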

  12. Bayesian nonlinear structural FE model and seismic input identification for damage assessment of civil structures

    Science.gov (United States)

    Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.

    2017-09-01

    A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows to jointly estimate unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations using spatially-sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm to jointly estimate unknown FE model parameters and unknown input excitations.
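
    The joint parameter/input estimation idea scales down to a toy example. The sketch below is a conceptual illustration only (using the filterpy package and a single-degree-of-freedom oscillator, not the paper's nonlinear FE models): the state is augmented with an unknown stiffness and an unknown input force, both modeled as random walks, and an unscented Kalman filter estimates them from acceleration measurements.

    ```python
    # UKF joint estimation of stiffness and input force for a SDOF oscillator.
    import numpy as np
    from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

    m, c, dt = 1.0, 0.4, 0.01  # known mass and damping, time step

    def fx(s, dt):
        """Augmented state [disp, vel, stiffness k, input f]; k and f evolve
        as random walks, the mechanical states by explicit Euler."""
        x, v, k, f = s
        a = (f - c * v - k * x) / m
        return np.array([x + dt * v, v + dt * a, k, f])

    def hx(s):
        """Measured acceleration."""
        x, v, k, f = s
        return np.array([(f - c * v - k * x) / m])

    points = MerweScaledSigmaPoints(n=4, alpha=1e-3, beta=2.0, kappa=0.0)
    ukf = UnscentedKalmanFilter(dim_x=4, dim_z=1, dt=dt, fx=fx, hx=hx,
                                points=points)
    ukf.x = np.array([0.0, 0.0, 5.0, 0.0])         # initial guess (true k = 10)
    ukf.P = np.diag([1e-4, 1e-4, 25.0, 4.0])
    ukf.Q = np.diag([1e-8, 1e-8, 1e-4, 1e-1])      # random-walk drift for k, f
    ukf.R = np.array([[1e-3]])

    # Simulate "measurements" from the true system (k = 10, sinusoidal input)
    s_true = np.array([0.0, 0.0, 10.0, 0.0])
    for i in range(2000):
        s_true[3] = np.sin(2 * np.pi * 1.0 * i * dt)   # true (unknown) input
        s_true = fx(s_true, dt)
        z = hx(s_true) + np.random.normal(0.0, 0.03, 1)
        ukf.predict()
        ukf.update(z)
    print("estimated stiffness:", ukf.x[2])
    ```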

  13. Modeling and forecasting crude oil markets using ARCH-type models

    International Nuclear Information System (INIS)

    Cheong, Chin Wen

    2009-01-01

    This study investigates the time-varying volatility of two major crude oil markets, the West Texas Intermediate (WTI) and Europe Brent. A flexible autoregressive conditional heteroskedasticity (ARCH) model is used to take into account stylized volatility facts such as volatility clustering, asymmetric news impact and long memory volatility, among others. The empirical results indicate that the intensity of long-persistence volatility in the WTI is greater than in the Brent. It is also found that for the WTI, appreciation and depreciation shocks have a similar impact on the resulting volatility. However, a leverage effect is found in Brent. Although both the estimation and diagnostic evaluations are in favor of an asymmetric long memory ARCH model, only the WTI models provide superior out-of-sample forecasts. On the other hand, from the empirical out-of-sample forecasts, it appears that the simplest parsimonious generalized ARCH provides the best forecast evaluations for the Brent crude oil data.

  14. Modeling and forecasting crude oil markets using ARCH-type models

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, Chin Wen [Research Centre of Mathematical Sciences, Faculty of Information Technology, Multimedia University, 63100 Cyberjaya, Selangor (Malaysia)

    2009-06-15

    This study investigates the time-varying volatility of two major crude oil markets, the West Texas Intermediate (WTI) and Europe Brent. A flexible autoregressive conditional heteroskedasticity (ARCH) model is used to take into account stylized volatility facts such as volatility clustering, asymmetric news impact and long memory volatility, among others. The empirical results indicate that the intensity of long-persistence volatility in the WTI is greater than in the Brent. It is also found that for the WTI, appreciation and depreciation shocks have a similar impact on the resulting volatility. However, a leverage effect is found in Brent. Although both the estimation and diagnostic evaluations are in favor of an asymmetric long memory ARCH model, only the WTI models provide superior out-of-sample forecasts. On the other hand, from the empirical out-of-sample forecasts, it appears that the simplest parsimonious generalized ARCH provides the best forecast evaluations for the Brent crude oil data. (author)

  15. Development of uncertainty-based work injury model using Bayesian structural equation modelling.

    Science.gov (United States)

    Chatterjee, Snehamoy

    2014-01-01

    This paper proposed a Bayesian structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of the SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor loading parameters and structural parameters of the SEM. In the first approach, the prior distributions were considered as fixed distribution functions with specific parameter values, whereas, in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov chain Monte Carlo sampling, in the form of Gibbs sampling, was applied to sample from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant with the experts' opinion-based priors, whereas two coefficients are not statistically significant when the fixed prior-based distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit for work injury, with a high coefficient of determination (0.91) and a lower mean squared error than the traditional SEM.

  16. Modelling point patterns with linear structures

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    2009-01-01

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line-shaped structures. We ... consider simulations of this model and compare with real data.

  17. Modelling point patterns with linear structures

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line-shaped structures. We ... consider simulations of this model and compare with real data.

  18. Modeling process-structure-property relationships for additive manufacturing

    Science.gov (United States)

    Yan, Wentao; Lin, Stephen; Kafka, Orion L.; Yu, Cheng; Liu, Zeliang; Lian, Yanping; Wolff, Sarah; Cao, Jian; Wagner, Gregory J.; Liu, Wing Kam

    2018-02-01

    This paper presents our latest work on comprehensive modeling of process-structure-property relationships for additive manufacturing (AM) materials, including using data-mining techniques to close the cycle of design-predict-optimize. To illustrate the process-structure relationship, the multi-scale multi-physics process modeling starts from the micro-scale to establish a mechanistic heat source model, moves to the meso-scale models of individual powder particle evolution, and finally reaches the macro-scale model to simulate the fabrication process of a complex product. To link structure and properties, a high-efficiency mechanistic model, self-consistent clustering analysis, is developed to capture a variety of material responses. The model incorporates factors such as voids, phase composition, inclusions, and grain structures, which are the differentiating features of AM metals. Furthermore, we propose data-mining as an effective solution for novel rapid design and optimization, which is motivated by the numerous influencing factors in the AM process. We believe this paper will provide a roadmap to advance AM fundamental understanding and guide the monitoring and advanced diagnostics of AM processing.

  19. Further insights on the French WISC-IV factor structure through Bayesian structural equation modeling.

    Science.gov (United States)

    Golay, Philippe; Reverte, Isabelle; Rossier, Jérôme; Favez, Nicolas; Lecerf, Thierry

    2013-06-01

    The interpretation of the Wechsler Intelligence Scale for Children--Fourth Edition (WISC-IV) is based on a 4-factor model, which is only partially compatible with the mainstream Cattell-Horn-Carroll (CHC) model of intelligence measurement. The structure of cognitive batteries is frequently analyzed via exploratory factor analysis and/or confirmatory factor analysis. With classical confirmatory factor analysis, almost all cross-loadings between latent variables and measures are fixed to zero in order to allow the model to be identified. However, inappropriate zero cross-loadings can contribute to poor model fit, distorted factors, and biased factor correlations; most important, they do not necessarily faithfully reflect theory. To deal with these methodological and theoretical limitations, we used a new statistical approach, Bayesian structural equation modeling (BSEM), among a sample of 249 French-speaking Swiss children (8-12 years). With BSEM, zero-fixed cross-loadings between latent variables and measures are replaced by approximate zeros, based on informative, small-variance priors. Results indicated that a direct hierarchical CHC-based model with 5 factors plus a general intelligence factor better represented the structure of the WISC-IV than did the 4-factor structure and the higher order models. Because a direct hierarchical CHC model was more adequate, it was concluded that the general factor should be considered as a breadth rather than a superordinate factor. Because it was possible for us to estimate the influence of each of the latent variables on the 15 subtest scores, BSEM allowed improvement of the understanding of the structure of intelligence tests and the clinical interpretation of the subtest scores. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  20. Structural dynamic modifications via models

    Indian Academy of Sciences (India)

    The study shows that as many as half of the matrix ... Brandon J A 1990 Strategies for structural dynamic modification (New York: John Wiley).

  1. Predicting nucleic acid binding interfaces from structural models of proteins.

    Science.gov (United States)

    Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael

    2012-02-01

    The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to a relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from surfaces of protein structural models, obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acid binding interface on structural models of proteins. Employing a combined patch approach, we show that patches extracted from an ensemble of models better predict the real nucleic acid binding interfaces than patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will be further applicable for predicting other functional surfaces of proteins with unknown structure. Copyright © 2011 Wiley Periodicals, Inc.

  2. Modeling Temporal Evolution and Multiscale Structure in Networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2013-01-01

    Many real-world networks exhibit both temporal evolution and multiscale structure. We propose a model for temporally correlated multifurcating hierarchies in complex networks which jointly captures both effects. We use the Gibbs fragmentation tree as a prior over multifurcating trees and a change-point model to account for the temporal evolution of each vertex. We demonstrate that our model is able to infer time-varying multiscale structure in synthetic as well as three real-world time-evolving complex networks. Our modeling of the temporal evolution of hierarchies brings new insights...

  3. Population Genetic Structure of the Tropical Two-Wing Flyingfish (Exocoetus volitans).

    Directory of Open Access Journals (Sweden)

    Eric A Lewallen

    Delineating populations of pantropical marine fish is a difficult process, due to widespread geographic ranges and complex life history traits in most species. Exocoetus volitans, a species of two-winged flyingfish, is a good model for understanding large-scale patterns of epipelagic fish population structure because it has a circumtropical geographic range and completes its entire life cycle in the epipelagic zone. Buoyant pelagic eggs should dictate high local dispersal capacity in this species, although a brief larval phase, small body size, and short lifespan may limit the dispersal of individuals over large spatial scales. Based on these biological features, we hypothesized that E. volitans would exhibit statistically and biologically significant population structure defined by recognized oceanographic barriers. We tested this hypothesis by analyzing cytochrome b mtDNA sequence data (1106 bp) from specimens collected in the Pacific, Atlantic and Indian oceans (n = 266). AMOVA, Bayesian, and coalescent analytical approaches were used to assess and interpret population-level genetic variability. A parsimony-based haplotype network did not reveal population subdivision among ocean basins, but AMOVA revealed limited, statistically significant population structure between the Pacific and Atlantic Oceans (ΦST = 0.035, p<0.001). A spatially unbiased Bayesian approach identified two circumtropical population clusters north and south of the Equator (ΦST = 0.026, p<0.001), a previously unknown dispersal barrier for an epipelagic fish. Bayesian demographic modeling suggested the effective population size of this species increased by at least an order of magnitude ~150,000 years ago, to more than 1 billion individuals currently. Thus, the high levels of genetic similarity observed in E. volitans can be explained by high rates of gene flow, a dramatic and recent population expansion, and extensive and consistent dispersal throughout the geographic...

  4. Population Genetic Structure of the Tropical Two-Wing Flyingfish (Exocoetus volitans)

    Science.gov (United States)

    Lewallen, Eric A.; Bohonak, Andrew J.; Bonin, Carolina A.; van Wijnen, Andre J.; Pitman, Robert L.; Lovejoy, Nathan R.

    2016-01-01

    Delineating populations of pantropical marine fish is a difficult process, due to widespread geographic ranges and complex life history traits in most species. Exocoetus volitans, a species of two-winged flyingfish, is a good model for understanding large-scale patterns of epipelagic fish population structure because it has a circumtropical geographic range and completes its entire life cycle in the epipelagic zone. Buoyant pelagic eggs should dictate high local dispersal capacity in this species, although a brief larval phase, small body size, and short lifespan may limit the dispersal of individuals over large spatial scales. Based on these biological features, we hypothesized that E. volitans would exhibit statistically and biologically significant population structure defined by recognized oceanographic barriers. We tested this hypothesis by analyzing cytochrome b mtDNA sequence data (1106 bps) from specimens collected in the Pacific, Atlantic and Indian oceans (n = 266). AMOVA, Bayesian, and coalescent analytical approaches were used to assess and interpret population-level genetic variability. A parsimony-based haplotype network did not reveal population subdivision among ocean basins, but AMOVA revealed limited, statistically significant population structure between the Pacific and Atlantic Oceans (ΦST = 0.035, p<0.001). A spatially-unbiased Bayesian approach identified two circumtropical population clusters north and south of the Equator (ΦST = 0.026, p<0.001), a previously unknown dispersal barrier for an epipelagic fish. Bayesian demographic modeling suggested the effective population size of this species increased by at least an order of magnitude ~150,000 years ago, to more than 1 billion individuals currently. Thus, high levels of genetic similarity observed in E. volitans can be explained by high rates of gene flow, a dramatic and recent population expansion, as well as extensive and consistent dispersal throughout the geographic range of the

  5. Applications of Multilevel Structural Equation Modeling to Cross-Cultural Research

    Science.gov (United States)

    Cheung, Mike W.-L.; Au, Kevin

    2005-01-01

    Multilevel structural equation modeling (MSEM) has been proposed as an extension to structural equation modeling for analyzing data with nested structure. We have begun to see a few applications in cross-cultural research in which MSEM fits well as the statistical model. However, given that cross-cultural studies can only afford collecting data…

  6. Multiple-lesion track-structure model

    International Nuclear Information System (INIS)

    Wilson, J.W.; Cucinotta, F.A.; Shinn, J.L.

    1992-03-01

    A multilesion cell kinetic model is derived, and radiation kinetic coefficients are related to the Katz track structure model. The repair-related coefficients are determined from the delayed plating experiments of Yang et al. for the C3H10T1/2 cell system. The model agrees well with the x-ray and heavy ion experiments of Yang et al. for the immediate plating, delayed plating, and fractionated exposure protocols employed by Yang. A study is made of the effects of target fragments in energetic proton exposures and of the repair-deficient target-fragment-induced lesions

  7. The issue of statistical power for overall model fit in evaluating structural equation models

    Directory of Open Access Journals (Sweden)

    Richard HERMIDA

    2015-06-01

    Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM) is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA) index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O) Psychology journals that use SEMs. Results indicate that in many studies power is very low, which implies acceptance of invalid models. Additionally, we examined methodological situations which may have an influence on the statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to the model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.
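
    For the RMSEA-based power computation this kind of review builds on, a compact sketch following the MacCallum, Browne, and Sugawara (1996) noncentral chi-square approach might look like this (the parameter values shown are illustrative, not taken from the article):

    ```python
    # Power of the RMSEA test of close fit via noncentral chi-square.
    from scipy.stats import ncx2

    def rmsea_power(n, df, rmsea0=0.05, rmsea_a=0.08, alpha=0.05):
        """Power to reject H0: rmsea <= rmsea0 when the true RMSEA is rmsea_a."""
        ncp0 = (n - 1) * df * rmsea0**2       # noncentrality under H0
        ncp_a = (n - 1) * df * rmsea_a**2     # noncentrality under alternative
        crit = ncx2.ppf(1 - alpha, df, ncp0)  # critical chi-square value
        return ncx2.sf(crit, df, ncp_a)       # P(reject | alternative true)

    print(f"power at N=200, df=50: {rmsea_power(200, 50):.2f}")
    ```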

  8. Power mos devices: structures and modelling procedures

    Energy Technology Data Exchange (ETDEWEB)

    Rossel, P.; Charitat, G.; Tranduc, H.; Morancho, F.; Moncoqut

    1997-05-01

    In this survey, the historical evolution of power MOS transistor structures is presented and currently used devices are described. General considerations on current and voltage capabilities are discussed and configurations of popular structures are given. A synthesis of the different modelling approaches proposed over the last three years is then presented, including analytical solutions for basic electrical parameters such as threshold voltage, on-resistance, saturation and quasi-saturation effects, temperature influence and voltage handling capability. The numerical solution of basic semiconductor device equations is then briefly reviewed, along with some typical problems which can be solved this way. A compact circuit modelling method is finally explained, with emphasis on dynamic behavior modelling

  9. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.

  10. Development and modeling of self-deployable structures

    Science.gov (United States)

    Neogi, Depankar

    Deployable space structures are prefabricated structures which can be transformed from a closed, compact configuration to a predetermined expanded form in which they are stable and can bear loads. The present research effort investigates a new family of deployable structures, called the Self-Deployable Structures (SDS). Unlike other deployable structures, which have rigid members, the SDS members are flexible while the connecting joints are rigid. The joints store the predefined geometry of the deployed structure in the collapsed state. The SDS is stress-free in both deployed and collapsed configurations and results in a self-standing structure which acquires its structural properties after a chemical reaction. Reliability of deployment is one of the most important features of the SDS, since it does not rely on mechanisms that can lock during deployment. The unit building block of these structures is the self-deployable structural element (SDSE). Several SDSE members can be linked to generate a complex building block such as a triangular or a tetrahedral structure. Different SDSE and SDS concepts are investigated in this work, and the performance of SDSs is explored experimentally and theoretically. Triangular and tetrahedral prototype SDSs have been developed and presented. Theoretical efforts include modeling the behavior of 2-dimensional SDSs. Using this design tool, engineers can study the effects of different packing configurations and deployment sequences, and perform optimization on the collapsed state of a structure with different external constraints. The model also predicts if any lockup or entanglement occurs during deployment.

  11. Relating structure and dynamics in organisation models

    NARCIS (Netherlands)

    Jonkers, C.M.; Treur, J.

    2002-01-01

    To understand how an organisational structure relates to dynamics is an interesting fundamental challenge in the area of social modelling. Specifications of organisational structure usually have a diagrammatic form that abstracts from more detailed dynamics. Dynamic properties of agent systems,

  12. Relating structure and dynamics in organisation models

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.

    2003-01-01

    To understand how an organisational structure relates to dynamics is an interesting fundamental challenge in the area of social modelling. Specifications of organisational structure usually have a diagrammatic form that abstracts from more detailed dynamics. Dynamic properties of agent systems, on

  13. Structural modeling techniques by finite element method

    International Nuclear Information System (INIS)

    Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong

    1991-01-01

    This book covers: Chapter 1, Finite Element Idealization: introduction; summary of the finite element method; equilibrium and compatibility in the finite element solution; degrees of freedom; symmetry and antisymmetry; modeling guidelines; local analysis; example; references. Chapter 2, Static Analysis: structural geometry; finite element models; analysis procedure; modeling guidelines; references. Chapter 3, Dynamic Analysis: models for dynamic analysis; dynamic analysis procedures; modeling guidelines.

  14. A Bayesian Approach for Structural Learning with Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Cen Li

    2002-01-01

    Hidden Markov Models (HMMs) have proved to be a successful modeling paradigm for dynamic and spatial processes in many domains, such as speech recognition, genomics, and general sequence alignment. Typically, in these applications, the model structures are predefined by domain experts. Therefore, the HMM learning problem focuses on learning the parameter values of the model to fit the given data sequences. However, when one considers other domains, such as economics and physiology, a model structure capturing the system's dynamic behavior is not available. In order to successfully apply the HMM methodology in these domains, it is important that a mechanism is available for automatically deriving the model structure from the data. This paper presents an HMM learning procedure that simultaneously learns the model structure and the maximum likelihood parameter values of an HMM from data. The HMM model structures are derived based on the Bayesian model selection methodology. In addition, we introduce a new initialization procedure for HMM parameter value estimation based on the K-means clustering method. Experimental results with artificially generated data show the effectiveness of the approach.
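
    A simplified version of simultaneous structure-and-parameter learning can be sketched by scoring candidate structures with an information criterion. The snippet below uses the hmmlearn package and BIC as a stand-in for the paper's Bayesian model selection (hmmlearn initializes Gaussian means with k-means by default; all data are simulated):

    ```python
    # Selecting the number of hidden states of a Gaussian HMM by BIC.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(5)
    # Two-regime toy sequence
    X = np.concatenate([rng.normal(0, 1, 300),
                        rng.normal(4, 1, 300)]).reshape(-1, 1)

    def bic(model, X):
        k, d = model.n_components, X.shape[1]
        n_params = k * (k - 1) + (k - 1) + 2 * k * d  # trans, start, means, vars
        return -2 * model.score(X) + n_params * np.log(len(X))

    best = min(
        (GaussianHMM(n_components=k, covariance_type="diag",
                     n_iter=200, random_state=0).fit(X) for k in range(1, 6)),
        key=lambda m: bic(m, X),
    )
    print("selected number of hidden states:", best.n_components)
    ```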

  15. Integrative structural modeling with small angle X-ray scattering profiles

    Directory of Open Access Journals (Sweden)

    Schneidman-Duhovny Dina

    2012-07-01

    Recent technological advances have enabled high-throughput collection of Small Angle X-ray Scattering (SAXS) profiles of biological macromolecules. Thus, computational methods for integrating SAXS profiles into structural modeling are needed more than ever. Here, we review specifically the use of SAXS profiles for the structural modeling of proteins, nucleic acids, and their complexes. First, the approaches for computing theoretical SAXS profiles from structures are presented. Second, computational methods for predicting protein structures, dynamics of proteins in solution, and assembly structures are covered. Third, we discuss the use of SAXS profiles in integrative structure modeling approaches that depend simultaneously on several data types.
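
    The first step described here, computing a theoretical SAXS profile from a structure, reduces in its simplest form to the Debye formula, I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij). A minimal sketch follows (uniform unit form factors and no solvent layer, unlike production tools such as the ones the review covers):

    ```python
    # Theoretical SAXS profile from atomic coordinates via the Debye formula.
    import numpy as np

    def debye_profile(coords, q):
        """coords: (N, 3) positions in angstroms; q: scattering vector values.
        Uniform unit form factors are assumed for brevity."""
        diff = coords[:, None, :] - coords[None, :, :]
        r = np.linalg.norm(diff, axis=-1)                 # pairwise distances
        qr = q[:, None, None] * r[None, :, :]
        sinc = np.where(qr > 0, np.sin(qr) / np.where(qr > 0, qr, 1.0), 1.0)
        return sinc.sum(axis=(1, 2))

    coords = np.random.default_rng(6).normal(scale=10.0, size=(50, 3))
    q = np.linspace(0.01, 0.5, 100)
    print(debye_profile(coords, q)[:5])
    ```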

  16. Interactive physically-based structural modeling of hydrocarbon systems

    International Nuclear Information System (INIS)

    Bosson, Mael; Grudinin, Sergei; Bouju, Xavier; Redon, Stephane

    2012-01-01

    Hydrocarbon systems have been intensively studied via numerical methods, including electronic structure computations, molecular dynamics and Monte Carlo simulations. Typically, these methods require an initial structural model (atomic positions and types, topology, etc.) that may be produced using scripts and/or modeling tools. For many systems, however, these building methods may be ineffective, as the user may have to specify the positions of numerous atoms while maintaining structural plausibility. In this paper, we present an interactive physically-based modeling tool to construct structural models of hydrocarbon systems. As the user edits the geometry of the system, atomic positions are also influenced by the Brenner potential, a well-known bond-order reactive potential. In order to be able to interactively edit systems containing numerous atoms, we introduce a new adaptive simulation algorithm, as well as a novel algorithm to incrementally update the forces and the total potential energy based on the list of updated relative atomic positions. The computational cost of the adaptive simulation algorithm depends on user-defined error thresholds, and the cost of our potential update algorithm scales linearly with the number of updated bonds. This enables efficient physically-based editing, since the computational cost is decoupled from the number of atoms in the system. We show that our approach may be used to effectively build realistic models of hydrocarbon structures that would be difficult or impossible to produce using other tools.
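
    The incremental energy update can be illustrated with a much simpler potential than Brenner's. The toy sketch below (harmonic bonds only; the class and parameter names are invented for illustration) caches per-bond energies and patches the total when an atom moves, so the update cost is decoupled from system size:

    ```python
    # Incremental total-energy update: re-evaluate only bonds touching a
    # moved atom (harmonic bonds stand in for the reactive potential).
    import numpy as np

    def bond_energy(pos, i, j, r0=1.5, kb=100.0):
        return 0.5 * kb * (np.linalg.norm(pos[i] - pos[j]) - r0) ** 2

    class IncrementalEnergy:
        def __init__(self, pos, bonds):
            self.pos, self.bonds = pos.copy(), bonds
            self.e_bond = {b: bond_energy(pos, *b) for b in bonds}
            self.total = sum(self.e_bond.values())

        def move(self, atom, new_pos):
            self.pos[atom] = new_pos
            for b in self.bonds:            # in practice: a per-atom bond list
                if atom in b:
                    e = bond_energy(self.pos, *b)
                    self.total += e - self.e_bond[b]
                    self.e_bond[b] = e

    pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
    sim = IncrementalEnergy(pos, bonds=[(0, 1), (1, 2)])
    sim.move(1, np.array([1.6, 0.0, 0.0]))
    print(sim.total)
    ```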

  17. Design of scaled down structural models

    Science.gov (United States)

    Simitses, George J.

    1994-07-01

    In the aircraft industry, full scale and large component testing is a very necessary, time consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled down models in testing and use of the model test results in order to predict the behavior of the larger system, referred to herein as prototype. This viewgraph presentation provides justifications and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled down model and its prototype. Thus, scaled down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. Establishment of similarity conditions, based on the direct use of the governing equations, is discussed and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of vibrational response of the same rectangular plates. Extensions and future tasks are also described.

  18. Multivariate Prediction Equations for HbA1c Lowering, Weight Change, and Hypoglycemic Events Associated with Insulin Rescue Medication in Type 2 Diabetes Mellitus: Informing Economic Modeling.

    Science.gov (United States)

    Willis, Michael; Asseburg, Christian; Nilsson, Andreas; Johnsson, Kristina; Kartman, Bernt

    2017-03-01

    Type 2 diabetes mellitus (T2DM) is chronic and progressive, and the cost-effectiveness of new treatment interventions must be established over long time horizons. Given the limited durability of drugs, assumptions regarding downstream rescue medication can drive results. Especially for insulin, for which treatment effects and adverse events are known to depend on patient characteristics, this can be problematic for health economic evaluation involving modeling. To estimate parsimonious multivariate equations of treatment effects and hypoglycemic event risks for use in parameterizing insulin rescue therapy in model-based cost-effectiveness analysis. Clinical evidence for insulin use in T2DM was identified in PubMed and from published reviews and meta-analyses. Study and patient characteristics and treatment effects and adverse event rates were extracted, and the data were used to estimate parsimonious treatment effect and hypoglycemic event risk equations using multivariate regression analysis. Data from 91 studies featuring 171 usable study arms were identified, mostly for premix and basal insulin types. Multivariate prediction equations for glycated hemoglobin A1c lowering and weight change were estimated separately for insulin-naive and insulin-experienced patients. Goodness of fit (R²) for both outcomes was generally good, ranging from 0.44 to 0.84. Multivariate prediction equations for symptomatic, nocturnal, and severe hypoglycemic events were also estimated, though considerable heterogeneity in definitions limits their usefulness. Parsimonious and robust multivariate prediction equations were estimated for glycated hemoglobin A1c and weight change, separately for insulin-naive and insulin-experienced patients. Using these in economic simulation modeling in T2DM can improve realism and flexibility in modeling insulin rescue medication. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All

  19. Assessment of structural model and parameter uncertainty with a multi-model system for soil water balance models

    Science.gov (United States)

    Michalik, Thomas; Multsch, Sebastian; Frede, Hans-Georg; Breuer, Lutz

    2016-04-01

    Water for agriculture is strongly limited in arid and semi-arid regions and often of low quality in terms of salinity. The application of saline waters for irrigation increases the salt load in the rooting zone and has to be managed by leaching to maintain a healthy soil, i.e. to wash out salts by additional irrigation. Dynamic simulation models are helpful tools to calculate the root zone water fluxes and soil salinity content in order to investigate best management practices. However, there is little information on structural and parameter uncertainty for simulations regarding the water and salt balance of saline irrigation. Hence, we established a multi-model system with four different models (AquaCrop, RZWQM, SWAP, Hydrus1D/UNSATCHEM) to analyze the structural and parameter uncertainty by using the Generalized Likelihood Uncertainty Estimation (GLUE) method. Hydrus1D/UNSATCHEM and SWAP were set up with multiple sets of different implemented functions (e.g. matric and osmotic stress for root water uptake), which results in a broad range of different model structures. The simulations were evaluated against soil water and salinity content observations. The posterior distribution of the GLUE analysis gives behavioral parameter sets and reveals the parameter uncertainty intervals. Throughout all of the model sets, most parameters accounting for the soil water balance show low uncertainty; only one or two out of five to six parameters in each model set display high uncertainty (e.g. the pore-size distribution index in SWAP and Hydrus1D/UNSATCHEM). The differences between the models and model setups reveal the structural uncertainty. The highest structural uncertainty is observed for deep percolation fluxes between the model sets of Hydrus1D/UNSATCHEM (~200 mm) and RZWQM (~500 mm), the latter being more than twice as high. The model sets show a high variation in uncertainty intervals for deep percolation as well, with an interquartile range (IQR) of
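
    A compact sketch of the GLUE procedure itself (with a stand-in one-parameter decay model and synthetic observations, not any of the four soil water balance models used in the study) might look like this:

    ```python
    # GLUE: Monte Carlo sampling, a Nash-Sutcliffe likelihood, a behavioral
    # threshold, and posterior percentile bounds.
    import numpy as np

    rng = np.random.default_rng(7)
    t = np.arange(100)
    obs = np.exp(-0.05 * t) + rng.normal(0, 0.02, t.size)  # synthetic data

    def model(k):                       # stand-in soil-water-style model
        return np.exp(-k * t)

    def nse(sim, obs):
        return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    samples = rng.uniform(0.01, 0.2, 5000)                 # prior draws
    likelihoods = np.array([nse(model(k), obs) for k in samples])

    behavioral = samples[likelihoods > 0.7]                # GLUE threshold
    print(f"{behavioral.size} behavioral sets; "
          f"95% band: {np.percentile(behavioral, [2.5, 97.5])}")
    ```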

  20. Acoustic Modeling of Lightweight Structures: A Literature Review

    Science.gov (United States)

    Yang, Shasha; Shen, Cheng

    2017-10-01

    This paper gives an overview of acoustic modeling for three kinds of typical lightweight structures: the double-leaf plate system, stiffened single (or double) plates, and porous materials. Classical models are cited to provide the framework of theoretical modeling for the acoustic properties of lightweight structures; important research advances by our research group and other authors are introduced to describe the current state of the art in acoustic research. Finally, remaining problems are summarized and future research directions are briefly outlined

  1. Impact of model structure and parameterization on Penman-Monteith type evaporation models

    KAUST Repository

    Ershadi, A.

    2015-04-12

    The impact of model structure and parameterization on the estimation of evaporation is investigated across a range of Penman-Monteith type models. To examine the role of model structure on flux retrievals, three different retrieval schemes are compared. The schemes include a traditional single-source Penman-Monteith model (Monteith, 1965), a two-layer model based on Shuttleworth and Wallace (1985) and a three-source model based on Mu et al. (2011). To assess the impact of parameterization choice on model performance, a number of commonly used formulations for aerodynamic and surface resistances were substituted into the different formulations. Model response to these changes was evaluated against data from twenty globally distributed FLUXNET towers, representing a cross-section of biomes that include grassland, cropland, shrubland, evergreen needleleaf forest and deciduous broadleaf forest. Scenarios based on 14 different combinations of model structure and parameterization were ranked based on their mean value of Nash-Sutcliffe Efficiency. Results illustrated considerable variability in model performance both within and between biome types. Indeed, no single model consistently outperformed any other when considered across all biomes. For instance, in grassland and shrubland sites, the single-source Penman-Monteith model performed the best. In croplands it was the three-source Mu model, while for evergreen needleleaf and deciduous broadleaf forests, the Shuttleworth-Wallace model rated highest. Interestingly, these top ranked scenarios all shared the simple lookup-table based surface resistance parameterization of Mu et al. (2011), while a more complex Jarvis multiplicative method for surface resistance produced lower ranked simulations. The highly ranked scenarios mostly employed a version of the Thom (1975) formulation for aerodynamic resistance that incorporated dynamic values of roughness parameters. This was true for all cases except over deciduous broadleaf
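
    The single-source formulation referenced here is the classic Penman-Monteith combination equation. A hedged sketch follows, with illustrative midday parameter values rather than any of the paper's parameterizations:

    ```python
    # Single-source Penman-Monteith latent heat flux (combination equation).
    def penman_monteith(Rn, G, delta, rho_a, cp, vpd, gamma, ra, rs):
        """Latent heat flux (W m-2).
        Rn, G: net radiation, ground heat flux (W m-2); delta: slope of the
        saturation vapour pressure curve (kPa K-1); vpd: vapour pressure
        deficit (kPa); gamma: psychrometric constant (kPa K-1); ra, rs:
        aerodynamic and surface resistances (s m-1); rho_a, cp: air density
        (kg m-3) and specific heat (J kg-1 K-1)."""
        return (delta * (Rn - G) + rho_a * cp * vpd / ra) / (
            delta + gamma * (1 + rs / ra)
        )

    # Illustrative midday values over grassland
    le = penman_monteith(Rn=500, G=50, delta=0.145, rho_a=1.2, cp=1013,
                         vpd=1.5, gamma=0.066, ra=50, rs=70)
    print(f"latent heat flux ~ {le:.0f} W m-2")
    ```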

  2. A structural bond strength model for glass durability

    International Nuclear Information System (INIS)

    Feng, Xiangdong; Metzger, T.B.

    1996-01-01

    A glass durability model, the structural bond strength (SBS) model, was developed to correlate glass durability with its composition. This model assumes that the strengths of the bonds between cations and oxygens and the structural roles of the individual elements in the glass are the predominant factors controlling the composition dependence of the chemical durability of glasses. The structural roles of oxides in glass are classified as network formers, network breakers, and intermediates. The structural roles of the oxides depend upon glass composition and the redox state of the oxides. Al2O3, ZrO2, Fe2O3, and B2O3 are assigned as network formers only when there are sufficient alkalis to bind with these oxides. CaO can also improve durability by sharing non-bridging oxygen with alkalis, relieving SiO2 from alkalis. The percolation phenomenon in glass is also taken into account. The SBS model is applied to correlate the 7-day product consistency test durability of 42 low-level waste glasses with their composition, with an R² of 0.87, which is better than the 0.81 obtained with an eight-coefficient empirical first-order mixture model on the same data set.
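
    The empirical benchmark quoted above is an ordinary least-squares mixture fit scored with R²; the sketch below shows that computation on synthetic stand-in data (the 42-glass PCT data set is not reproduced here, and all values are placeholders).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.uniform(0.0, 0.3, size=(42, 8))       # 8 oxide mole fractions
    beta_true = rng.normal(0.0, 1.0, 8)
    y = X @ beta_true + rng.normal(0.0, 0.1, 42)  # e.g. log normalized release

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # first-order mixture fit
    y_hat = X @ beta
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"R^2 = {r2:.2f}")
    ```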

  3. Dialectic Antidotes to Critics of the Technology Acceptance Model: Conceptual, Methodological, and Replication Treatments for Behavioural Modelling in Technology-Mediated Environments

    Directory of Open Access Journals (Sweden)

    Weng Marc Lim

    2018-01-01

    The technology acceptance model (TAM) is a prominent and parsimonious conceptual lens that is often applied for behavioural modelling in technology-mediated environments. However, TAM has received a great deal of criticism in recent years. This article aims to address some of the most pertinent issues confronting TAM through a rejoinder that offers dialectic antidotes—in the form of conceptual, methodological, and replication treatments—to support the continued use of TAM to understand the peculiarities of user interactions with technology in technology-mediated environments. In doing so, this article offers a useful response to a common but often inadequately answered question about how TAM can continue to be relevant for behavioural modelling in contemporary technology-mediated environments.

  4. Molecular structure based property modeling: Development/ improvement of property models through a systematic property-data-model analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol Shivajirao; Sarup, Bent; Sin, Gürkan

    2013-01-01

    models. To make the property-data-model analysis fast and efficient, an approach based on the “molecular structure similarity criteria” to identify molecules (mono-functional, bi-functional, etc.) containing specified set of structural parameters (that is, groups) is employed. The method has been applied...

  5. Principles and practice of structural equation modeling

    CERN Document Server

    Kline, Rex B

    2015-01-01

    Emphasizing concepts and rationale over mathematical minutiae, this is the most widely used, complete, and accessible structural equation modeling (SEM) text. Continuing the tradition of using real data examples from a variety of disciplines, the significantly revised fourth edition incorporates recent developments such as Pearl's graphing theory and the structural causal model (SCM), measurement invariance, and more. Readers gain a comprehensive understanding of all phases of SEM, from data collection and screening to the interpretation and reporting of the results. Learning is enhanced by ex

  6. Modelling of Radiolytical Proceses in Polystyrenic Structures

    International Nuclear Information System (INIS)

    Postolache, C.

    2006-01-01

    The behavior of polystyrene, poly α-methylstyrene and poly β-methylstyrene structures in ionizing fields was analyzed using computational methods. In this study, the primary radiolytic effect was evaluated using a free-radical mechanism. Molecular structures were built and geometrically optimized using quantum-chemical methods. Binding energies for different quantum states and the distribution of peripheral orbitals were determined. Based on the results obtained, an evaluation model for radiolytic processes in solid-phase polymers was proposed. The suggested model distinguishes the dominant processes through analysis of binding-energy values and the LUMO peripheral orbital distribution. Analysis of the computed binding energies of energetically optimized molecular structures in the ionized state (charge +1, multiplicity 2) reveals a high similarity among the binding energies of the ionized states. The same similarity was also observed for the total binding energies of the neutral state (charge 0, multiplicity 1). The analyzed molecular structures can be associated with the state of the ionized molecule immediately after the capture of one electron. This suggests that the determining stage of the radiolytic fragmentation act is the intermediate state of the ionized molecule: the molecule has captured an electron but has not had the time needed for its atoms to rearrange into the new quantum state. This supposition is in accordance with the literature, the time between the excitation act and the fragmentation act being less than 10⁻¹⁵ seconds. The proposed model can explain the differences in behavior of polymeric structures in an ionizing radiation field. The preferential fracture of the main chain in the fragmentation of poly α-methylstyrene can be explained, in accordance with the proposed model, by the decrease of the C-C bond energies of the main chain in the neighborhood of the quaternary carbon.

  7. Structural identifiability analysis of a cardiovascular system model.

    Science.gov (United States)

    Pironet, Antoine; Dauby, Pierre C; Chase, J Geoffrey; Docherty, Paul D; Revie, James A; Desaive, Thomas

    2016-05-01

    The six-chamber cardiovascular system model of Burkhoff and Tyberg has been used in several theoretical and experimental studies. However, this cardiovascular system model (and others derived from it) is not identifiable from every output set. In this work, two such cases of structural non-identifiability are first presented. These cases occur when the model output set only contains a single type of information (pressure or volume). A specific output set is thus chosen, mixing pressure and volume information and containing only a limited number of clinically available measurements. Then, by manipulating the model equations involving these outputs, it is demonstrated that the six-chamber cardiovascular system model is structurally globally identifiable. A further simplification is made by assuming known cardiac valve resistances. Because of the poor practical identifiability of these four parameters, this assumption is usual. Under this hypothesis, the six-chamber cardiovascular system model is structurally identifiable from an even smaller dataset. As a consequence, parameter values computed from limited but well-chosen datasets are theoretically unique. This means that the parameter identification procedure can safely be performed on the model from such a well-chosen dataset. Thus, the model may be considered suitable for use in diagnosis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
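
    The identifiability argument above works by solving the model equations for the parameters given a chosen output set. The sympy sketch below illustrates the idea on a hypothetical two-chamber toy (not the Burkhoff-Tyberg model): a single solution branch for the parameters signals global structural identifiability for that output set.

    ```python
    import sympy as sp

    E1, E2, R, V1, V2 = sp.symbols("E1 E2 R V1 V2", positive=True)

    # Toy "outputs" mixing pressure and volume information
    y1 = E1 * V1          # pressure in chamber 1 (V1 measured)
    y2 = E2 * V2          # pressure in chamber 2 (V2 measured)
    y3 = (y1 - y2) / R    # flow through the connecting resistance

    # Treat the measured outputs as known constants and solve for parameters
    m1, m2, m3 = sp.symbols("m1 m2 m3", positive=True)
    solutions = sp.solve([sp.Eq(y1, m1), sp.Eq(y2, m2), sp.Eq(y3, m3)],
                         [E1, E2, R], dict=True)
    print(solutions)  # one solution branch => globally identifiable toy model
    ```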

  8. Self-consistent mean-field models for nuclear structure

    International Nuclear Information System (INIS)

    Bender, Michael; Heenen, Paul-Henri; Reinhard, Paul-Gerhard

    2003-01-01

    The authors review the present status of self-consistent mean-field (SCMF) models for describing nuclear structure and low-energy dynamics. These models are presented as effective energy-density functionals. The three most widely used variants of SCMFs, based on a Skyrme energy functional, a Gogny force, and a relativistic mean-field Lagrangian, are considered side by side. The crucial role of the treatment of pairing correlations is pointed out in each case. The authors discuss other related nuclear structure models and present several extensions beyond the mean-field model which are currently used. Phenomenological adjustment of the model parameters is discussed in detail. The performance quality of the SCMF model is demonstrated for a broad range of typical applications

  9. Deep inelastic structure functions in the chiral bag model

    International Nuclear Information System (INIS)

    Sanjose, V.; Vento, V.; Centro Mixto CSIC/Valencia Univ., Valencia

    1989-01-01

    We calculate the structure functions for deep inelastic scattering on baryons in the cavity approximation to the chiral bag model. The behavior of these structure functions is analyzed in the Bjorken limit. We conclude that scaling is satisfied, but not Regge behavior. A trivial extension as a parton model can be achieved by introducing the structure function for the pion in a convolution picture. In this extended version of the model not only scaling but also Regge behavior is satisfied. Conclusions are drawn from the comparison of our results with experimental data. (orig.)

  10. Deep inelastic structure functions in the chiral bag model

    Energy Technology Data Exchange (ETDEWEB)

    Sanjose, V. (Valencia Univ. (Spain). Dept. de Didactica de las Ciencias Experimentales); Vento, V. (Valencia Univ. (Spain). Dept. de Fisica Teorica; Centro Mixto CSIC/Valencia Univ., Valencia (Spain). Inst. de Fisica Corpuscular)

    1989-10-02

    We calculate the structure functions for deep inelastic scattering on baryons in the cavity approximation to the chiral bag model. The behavior of these structure functions is analyzed in the Bjorken limit. We conclude that scaling is satisfied, but not Regge behavior. A trivial extension as a parton model can be achieved by introducing the structure function for the pion in a convolution picture. In this extended version of the model not only scaling but also Regge behavior is satisfied. Conclusions are drawn from the comparison of our results with experimental data. (orig.).

  11. Tectonic forward modelling of positive inversion structures

    Energy Technology Data Exchange (ETDEWEB)

    Brandes, C. [Leibniz Univ. Hannover (Germany). Inst. fuer Geologie; Schmidt, C. [Landesamt fuer Bergbau, Energie und Geologie (LBEG), Hannover (Germany)

    2013-08-01

    Positive tectonic inversion structures are common features that have been recognized in many deformed sedimentary basins (Lowell, 1995). They are characterized by a two-phase fault evolution, where initial normal faulting was followed by reverse faulting along the same fault, accompanied by the development of hanging wall deformation. Analysing the evolution of such inversion structures is important for understanding the tectonics of sedimentary basins and the formation of hydrocarbon traps. We used a 2D tectonic forward modelling approach to simulate the stepwise structural evolution of inversion structures in cross-section. The modelling was performed with the software FaultFold Forward v. 6, which is based on trishear kinematics (Zehnder and Allmendinger, 2000). A key aspect of the study was to derive the controlling factors for the geometry of inversion structures. The simulation results show that the trishear approach is able to reproduce the geometry of tectonic inversion structures in a realistic way. This implies that inversion structures are simply fault-related folds that initiated as extensional fault-propagation folds, which were subsequently transformed into compressional fault-propagation folds when the stress field changed. The hanging wall deformation is a consequence of the decrease in slip towards the tip line of the fault. Trishear angle and propagation-to-slip ratio are the key controlling factors for the geometry of the fault-related deformation. We tested trishear angles in the range of 30-60° and propagation-to-slip ratios between 1 and 2 in increments of 0.1. Small trishear angles and low propagation-to-slip ratios produced tight folds, whereas large trishear angles and high propagation-to-slip ratios led to more open folds with concentric shapes. This has a direct effect on the size and geometry of potential hydrocarbon traps. The 2D simulations can be extended to a pseudo-3D approach, where a set of parallel cross-sections is used to describe

  12. Plant lessons: exploring ABCB functionality through structural modeling

    Directory of Open Access Journals (Sweden)

    Aurélien eBailly

    2012-01-01

    In contrast to mammalian ABCB1 proteins, narrow substrate specificity has been extensively documented for plant orthologs shown to catalyze the transport of the plant hormone, auxin. Using the crystal structures of the multidrug exporters Sav1866 and MmABCB1 as templates, we have developed structural models of plant ABCB proteins with a common architecture. Comparisons of these structures identified kingdom-specific candidate substrate-binding regions within the translocation chamber formed by the transmembrane domains of ABCBs from the model plant Arabidopsis. These results suggest an early evolutionary divergence of plant and mammalian ABCBs. Validation of these models becomes a priority for efforts to elucidate ABCB function and manipulate this class of transporters to enhance plant productivity and quality.

  13. Dynamical minimalism: why less is more in psychology.

    Science.gov (United States)

    Nowak, Andrzej

    2004-01-01

    The principle of parsimony, embraced in all areas of science, states that simple explanations are preferable to complex explanations in theory construction. Parsimony, however, can necessitate a trade-off with depth and richness in understanding. The approach of dynamical minimalism avoids this trade-off. The goal of this approach is to identify the simplest mechanisms and fewest variables capable of producing the phenomenon in question. A dynamical model in which change is produced by simple rules repetitively interacting with each other can exhibit unexpected and complex properties. It is thus possible to explain complex psychological and social phenomena with very simple models if these models are dynamic. In dynamical minimalist theories, then, the principle of parsimony can be followed without sacrificing depth in understanding. Computer simulations have proven especially useful for investigating the emergent properties of simple models.
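
    A standard minimal example of the point made above: one simple rule, iterated repeatedly, spans behaviour from a stable fixed point through oscillation to chaos. The logistic map below is a generic illustration, not a model taken from the article.

    ```python
    def iterate(r, x0=0.2, n=60, keep=4):
        # Repeatedly apply the single rule x -> r * x * (1 - x)
        xs = [x0]
        for _ in range(n):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs[-keep:]

    for r in (2.8, 3.2, 3.9):  # fixed point, period-2 cycle, chaotic regime
        print(f"r={r}: {[round(x, 3) for x in iterate(r)]}")
    ```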

  14. Modelling of internal structure in seismic analysis of a PHWR building

    International Nuclear Information System (INIS)

    Reddy, G.R.; Vaze, K.K.; Kushawaha, H.S.; Ingle, R.K.; Subramanian, K.V.

    1991-01-01

    Seismic analysis of large and complex structures consisting of thick shear walls, such as a reactor building, is very involved and time consuming. It is standard practice to model the structure as a stick model to predict its dynamic behaviour reasonably well. This requires determining approximate equivalent sectional properties of the internal structure for representation in the stick model. The restraint to warping can change the stress distribution, thus affecting the centre of rigidity and the torsional inertia; hence, standard formulae do not hold for the determination of sectional properties of the internal structure. In this case the equivalent sectional properties for the internal structure are calculated using a finite element model (FEM) of the internal structure and applying unit horizontal forces in each direction. A 3-D stick model is developed using the guidelines. Using the properties calculated by FEM and also by standard formulae, the responses of the 3-D stick model are compared. (J.P.N.)
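
    A sketch of the unit-force idea described above: apply a unit horizontal force to the detailed FE model, read off the computed deflection, and back out an equivalent stiffness for the stick model. The cantilever formula delta = F L^3 / (3 EI) used here neglects shear and warping effects and is only a simplified stand-in; the numbers are illustrative.

    ```python
    def equivalent_EI(F, L, delta):
        """Equivalent flexural rigidity from a unit-load FE deflection
        (cantilever bending only: delta = F * L**3 / (3 * EI))."""
        return F * L**3 / (3.0 * delta)

    # Illustrative numbers: 1 kN unit force, 30 m tall internal structure,
    # 0.8 mm tip deflection computed by the detailed FE model.
    EI = equivalent_EI(F=1e3, L=30.0, delta=0.8e-3)
    print(f"equivalent EI ~ {EI:.3e} N m^2")
    ```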

  15. Numerical Modelling of the Dynamic Response of High-Speed Railway Bridges Considering Vehicle-Structure and Structure-Soil-Structure Interaction

    DEFF Research Database (Denmark)

    Bucinskas, Paulius; Agapii, L.; Sneideris, J.

    2015-01-01

    The aim of this paper is the dynamic analysis of a multi-support bridge structure exposed to high-speed railway traffic. The proposed computational model has a unified approach for simultaneously accounting for the bridge structure response, soil response and forces induced by the vehicle. The bridge structure is modelled in three dimensions based on the finite element method using two-noded three-dimensional beam elements. The track structure is composed of three layers: rail, sleepers and deck, which are connected through spring-dashpot systems. The vehicle travelling along the bridge is idealized as a multi-degree-of-freedom system, modelled with two layers of spring-dashpot suspension systems. Coupling the vehicle system and railway track is realized through interaction forces between the wheels and the rail, where the irregularities of the track are implemented as a random stationary process.

  16. Modeling of the atomic and electronic structures of interfaces

    International Nuclear Information System (INIS)

    Sutton, A.P.

    1988-01-01

    Recent tight-binding and Car-Parrinello simulations of grain boundaries in semiconductors are reviewed. A critique is given of some models of embrittlement that are based on electronic structure considerations. The structural unit model of grain boundary structure is critically assessed using some results for mixed tilt and twist grain boundaries. A new method of characterizing interfacial structure in terms of bond angle distribution functions is described. A new formulation of the thermodynamic properties of interfaces is presented which focuses on the local atomic environment. Effective, temperature-dependent N-body atomic interactions are derived for studying grain boundary structure at elevated temperature

  17. Detection and analysis of unusual features in the structural model and structure-factor data of a birch pollen allergen

    International Nuclear Information System (INIS)

    Rupp, Bernhard

    2012-01-01

    The structure factors deposited with PDB entry 3k78 show properties inconsistent with experimentally observed diffraction data and undoubtedly represent calculated structure factors. Physically improbable features in the model of the birch pollen structure Bet v 1d are faithfully reproduced in electron density generated with the deposited structure factors, but these structure factors themselves exhibit properties that are characteristic of data calculated from a simple model and are inconsistent with the data and error model obtained through experimental measurements. The refinement of the model against these structure factors leads to an isomorphous structure different from the deposited model with an implausibly small R value (0.019). The abnormal refinement is compared with the normal refinement of an isomorphous variant structure of Bet v 1l. A variety of analytical tools, including the application of Diederichs plots, Rσ plots and bulk-solvent analysis, are discussed as promising aids in validation. The examination of the Bet v 1d structure also cautions against the practice of indicating poorly defined protein chain residues through zero occupancies. The recommendation to preserve diffraction images is amplified.

  18. Physical Modelling of Geotechnical Structures in Ports and Offshore

    Directory of Open Access Journals (Sweden)

    Bałachowski Lech

    2017-04-01

    The physical modelling of subsoil behaviour and soil-structure interaction is essential for the proper design of offshore structures and port infrastructure. A brief introduction to such modelling of geoengineering problems is presented and some methods and experimental devices are described. The relationships between modelling scales are given. Some examples of penetration testing results in centrifuge and calibration chamber are presented. Prospects for physical modelling in geotechnics are also described.

  19. Measuring chronic condition self-management in an Australian community: factor structure of the revised Partners in Health (PIH) scale.

    Science.gov (United States)

    Smith, David; Harvey, Peter; Lawn, Sharon; Harris, Melanie; Battersby, Malcolm

    2017-01-01

    To evaluate the factor structure of the revised Partners in Health (PIH) scale for measuring chronic condition self-management in a representative sample from the Australian community. A series of consultations between clinical groups underpinned the revision of the PIH. The factors in the revised instrument were proposed to be: knowledge of illness and treatment, patient-health professional partnership, recognition and management of symptoms, and coping with chronic illness. Participants (N = 904) reporting a chronic illness completed the revised 12-item scale. Two a priori models, a 4-factor and a bi-factor model, were then evaluated using Bayesian confirmatory factor analysis (BCFA). Final model selection was based on model complexity, posterior predictive p-values and the deviance information criterion. Both the 4-factor and bi-factor BCFA models with small informative priors for cross-loadings provided an acceptable fit with the data. The 4-factor model was shown to provide a better and more parsimonious fit with the observed data in terms of substantive theory. McDonald's omega coefficients indicated that the reliability of subscale raw scores was mostly in the acceptable range. The findings showed that the PIH scale is a relevant and structurally valid instrument for measuring chronic condition self-management in an Australian community. The PIH scale may help health professionals to introduce the concept of self-management to their patients and provide assessment of areas of self-management. A limitation is the narrow range of validated PIH measurement properties to date. Further research is needed to evaluate other important properties such as test-retest reliability, responsiveness over time and content validity.
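
    McDonald's omega, used above to judge subscale reliability, follows directly from standardized factor loadings; a small sketch with made-up loadings for one subscale:

    ```python
    import numpy as np

    def mcdonalds_omega(loadings, uniquenesses):
        # omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)
        s = np.sum(loadings) ** 2
        return s / (s + np.sum(uniquenesses))

    lam = np.array([0.71, 0.65, 0.58])   # standardized loadings (illustrative)
    theta = 1.0 - lam ** 2               # uniquenesses for standardized items
    print(f"omega = {mcdonalds_omega(lam, theta):.2f}")
    ```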

  20. Nonlocal continuum-based modeling of mechanical characteristics of nanoscopic structures

    Energy Technology Data Exchange (ETDEWEB)

    Rafii-Tabar, Hashem, E-mail: rafii-tabar@nano.ipm.ac.ir [Department of Medical Physics and Biomedical Engineering, Faculty of Medicine, Shahid Beheshti University of Medical Sciences, Tehran (Iran, Islamic Republic of); Ghavanloo, Esmaeal, E-mail: ghavanloo@shirazu.ac.ir [School of Mechanical Engineering, Shiraz University, Shiraz 71963-16548 (Iran, Islamic Republic of); Fazelzadeh, S. Ahmad [School of Mechanical Engineering, Shiraz University, Shiraz 71963-16548 (Iran, Islamic Republic of)

    2016-06-06

    Insight into the mechanical characteristics of nanoscopic structures is of fundamental interest and indeed poses a great challenge to the research communities around the world. These structures are ultra fine in size and consequently performing standard experiments to measure their various properties is an extremely difficult and expensive endeavor. Hence, to predict the mechanical characteristics of the nanoscopic structures, different theoretical models, numerical modeling techniques, and computer-based simulation methods have been developed. Among several proposed approaches, the nonlocal continuum-based modeling is of particular significance because the results obtained from this modeling for different nanoscopic structures are in very good agreement with the data obtained from both experimental and atomistic-based studies. A review of the essentials of this model together with its applications is presented here. Our paper is a self contained presentation of the nonlocal elasticity theory and contains the analysis of the recent works employing this model within the field of nanoscopic structures. In this review, the concepts from both the classical (local) and the nonlocal elasticity theories are presented and their applications to static and dynamic behavior of nanoscopic structures with various morphologies are discussed. We first introduce the various nanoscopic structures, both carbon-based and non carbon-based types, and then after a brief review of the definitions and concepts from classical elasticity theory, and the basic assumptions underlying size-dependent continuum theories, the mathematical details of the nonlocal elasticity theory are presented. A comprehensive discussion on the nonlocal version of the beam, the plate and the shell theories that are employed in modeling of the mechanical properties and behavior of nanoscopic structures is then provided. Next, an overview of the current literature discussing the application of the nonlocal models

  1. Nonlocal continuum-based modeling of mechanical characteristics of nanoscopic structures

    International Nuclear Information System (INIS)

    Rafii-Tabar, Hashem; Ghavanloo, Esmaeal; Fazelzadeh, S. Ahmad

    2016-01-01

    Insight into the mechanical characteristics of nanoscopic structures is of fundamental interest and indeed poses a great challenge to the research communities around the world. These structures are ultra fine in size and consequently performing standard experiments to measure their various properties is an extremely difficult and expensive endeavor. Hence, to predict the mechanical characteristics of the nanoscopic structures, different theoretical models, numerical modeling techniques, and computer-based simulation methods have been developed. Among several proposed approaches, the nonlocal continuum-based modeling is of particular significance because the results obtained from this modeling for different nanoscopic structures are in very good agreement with the data obtained from both experimental and atomistic-based studies. A review of the essentials of this model together with its applications is presented here. Our paper is a self contained presentation of the nonlocal elasticity theory and contains the analysis of the recent works employing this model within the field of nanoscopic structures. In this review, the concepts from both the classical (local) and the nonlocal elasticity theories are presented and their applications to static and dynamic behavior of nanoscopic structures with various morphologies are discussed. We first introduce the various nanoscopic structures, both carbon-based and non carbon-based types, and then after a brief review of the definitions and concepts from classical elasticity theory, and the basic assumptions underlying size-dependent continuum theories, the mathematical details of the nonlocal elasticity theory are presented. A comprehensive discussion on the nonlocal version of the beam, the plate and the shell theories that are employed in modeling of the mechanical properties and behavior of nanoscopic structures is then provided. Next, an overview of the current literature discussing the application of the nonlocal models
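
    As a concrete instance of the size dependence this review is concerned with, the sketch below evaluates a widely quoted closed form for the natural frequencies of a simply supported Euler-Bernoulli nanobeam with Eringen nonlocal parameter e0·a. The formula and the placeholder beam properties are assumptions for illustration, not values taken from the paper.

    ```python
    import math

    def nonlocal_frequency(n, L, EI, rhoA, e0a):
        """Mode-n frequency (rad/s) of a simply supported nonlocal beam:
        omega_n = k^2 sqrt(EI/rhoA) / sqrt(1 + (e0a*k)^2), with k = n*pi/L."""
        k = n * math.pi / L
        return k**2 * math.sqrt(EI / rhoA) / math.sqrt(1.0 + (e0a * k) ** 2)

    L, EI, rhoA = 20e-9, 2e-25, 2e-15   # placeholder nanobeam properties
    for n in (1, 2, 3):
        w_loc = nonlocal_frequency(n, L, EI, rhoA, 0.0)    # classical limit
        w_nl = nonlocal_frequency(n, L, EI, rhoA, 2e-9)    # nonlocal softening
        print(f"mode {n}: local {w_loc:.3e} rad/s, nonlocal {w_nl:.3e} rad/s")
    ```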

  2. Evolving the structure of hidden Markov Models

    DEFF Research Database (Denmark)

    won, K. J.; Prugel-Bennett, A.; Krogh, A.

    2006-01-01

    A genetic algorithm (GA) is proposed for finding the structure of hidden Markov Models (HMMs) used for biological sequence analysis. The GA is designed to preserve biologically meaningful building blocks. The search through the space of HMM structures is combined with optimization of the emission...... and transition probabilities using the classic Baum-Welch algorithm. The system is tested on the problem of finding the promoter and coding region of C. jejuni. The resulting HMM has a superior discrimination ability to a handcrafted model that has been published in the literature....
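
    A minimal skeleton of the kind of search described above, with hypothetical encoding and fitness: candidate HMM structures are encoded as allowed-transition matrices, mutated, and selected. In the actual system the fitness evaluation would run Baum-Welch training and score discrimination on C. jejuni sequences; the stub below merely rewards a sparse left-to-right backbone.

    ```python
    import random

    random.seed(0)

    def random_structure(n_states=6):
        # adjacency matrix of allowed transitions (1 = edge present)
        return [[random.randint(0, 1) for _ in range(n_states)]
                for _ in range(n_states)]

    def mutate(s):
        s = [row[:] for row in s]
        i, j = random.randrange(len(s)), random.randrange(len(s))
        s[i][j] ^= 1  # flip one allowed-transition bit
        return s

    def fitness(s):
        # Placeholder: in the real system, train with Baum-Welch and score.
        backbone = sum(s[i][i + 1] for i in range(len(s) - 1))
        return backbone - 0.1 * sum(map(sum, s))   # reward sparse backbones

    pop = [random_structure() for _ in range(20)]
    for generation in range(50):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:10]
        pop = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

    print("best fitness:", fitness(max(pop, key=fitness)))
    ```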

  3. Structural equation modeling and natural systems

    Science.gov (United States)

    Grace, James B.

    2006-01-01

    This book, first published in 2006, presents an introduction to the methodology of structural equation modeling, illustrates its use, and goes on to argue that it has revolutionary implications for the study of natural systems. A major theme of this book is that we have, up to this point, attempted to study systems primarily using methods (such as the univariate model) that were designed only for considering individual processes. Understanding systems requires the capacity to examine simultaneous influences and responses. Structural equation modeling (SEM) has such capabilities. It also possesses many other traits that add strength to its utility as a means of making scientific progress. In light of the capabilities of SEM, it can be argued that much of ecological theory is currently locked in an immature state that impairs its relevance. It is further argued that the principles of SEM are capable of leading to the development and evaluation of multivariate theories of the sort vitally needed for the conservation of natural systems.

  4. Localized structures and front propagation in the Lengyel-Epstein model

    DEFF Research Database (Denmark)

    Jensen, O.; Pannbacker, Viggo Ole; Mosekilde, Erik

    1994-01-01

    Pattern selection, localized structure formation, and front propagation are analyzed within the framework of a model for the chlorine dioxide-iodine-malonic acid reaction that represents a key to understanding recently obtained Turing structures. This model is distinguished from previously studied......, simple reaction-diffusion models by producing a strongly subcritical transition to stripes. The wave number for the modes of maximum linear gain is calculated and compared with the dominant wave number for the finally selected, stationary structures grown from the homogeneous steady state or developed...... bifurcation. In the subcritical regime there is an interval where the front velocity vanishes as a result of a pinning of the front to the underlying structure. In 2D, two different nucleation mechanisms for hexagonal structures are illustrated on the Lengyel-Epstein and the Brusselator model. Finally...

  5. Testing the Structure of Hydrological Models using Genetic Programming

    Science.gov (United States)

    Selle, B.; Muttil, N.

    2009-04-01

    Genetic programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that genetic programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, genetic programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface-irrigated pasture to different soil types, water table depths and water ponding times during surface irrigation. Using genetic programming, a simple model of deep percolation was consistently evolved in multiple model runs. This simple and interpretable model confirmed the dominant process contributing to deep percolation represented in a conceptual model that was published earlier. Thus, this study shows that genetic programming can be used to evaluate the structure of hydrological models and to gain insight into the dominant processes in hydrological systems.
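
    A toy illustration of structure identification in the spirit of the study: candidate symbolic model structures for deep percolation are scored against synthetic lysimeter-like data and the best-supported structure is kept. Real genetic programming would evolve expression trees with crossover and mutation; this enumeration over a tiny hypothesis set is only meant to convey the idea, and all variables are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    ponding = rng.uniform(0.0, 10.0, 200)    # ponding time (h), synthetic
    wt_depth = rng.uniform(0.5, 3.0, 200)    # water table depth (m), synthetic
    percolation = 0.8 * ponding / wt_depth + rng.normal(0.0, 0.05, 200)

    candidates = {
        "a * ponding": lambda a: a * ponding,
        "a * wt_depth": lambda a: a * wt_depth,
        "a * ponding / wt_depth": lambda a: a * ponding / wt_depth,
        "a * ponding * wt_depth": lambda a: a * ponding * wt_depth,
    }

    def rmse(pred):
        return float(np.sqrt(np.mean((pred - percolation) ** 2)))

    best = min(
        ((name, a) for name in candidates for a in np.linspace(0.1, 2.0, 40)),
        key=lambda pair: rmse(candidates[pair[0]](pair[1])),
    )
    print("selected structure:", best[0], "with coefficient ~", round(best[1], 2))
    ```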

  6. Bi-directional evolutionary structural optimization for strut-and-tie modelling of three-dimensional structural concrete

    Science.gov (United States)

    Shobeiri, Vahid; Ahmadi-Nedushan, Behrouz

    2017-12-01

    This article presents a method for the automatic generation of optimal strut-and-tie models in reinforced concrete structures using a bi-directional evolutionary structural optimization method. The methodology presented is developed for compliance minimization relying on the Abaqus finite element software package. The proposed approach deals with the generation of truss-like designs in a three-dimensional environment, addressing the design of corbels and joints as well as bridge piers and pile caps. Several three-dimensional examples are provided to show the capabilities of the proposed framework in finding optimal strut-and-tie models in reinforced concrete structures and verifying its efficiency to cope with torsional actions. Several issues relating to the use of the topology optimization for strut-and-tie modelling of structural concrete, such as chequerboard patterns, mesh-dependency and multiple load cases, are studied. In the last example, a design procedure for detailing and dimensioning of the strut-and-tie models is given according to the American Concrete Institute (ACI) 318-08 provisions.
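
    A toy sketch of the BESO loop underlying the method above: elements are ranked by a sensitivity measure, the least efficient ones are removed until a target volume fraction is reached, and a re-admission test makes the scheme bi-directional. The "analysis" here is a fixed random stub, so the re-admission branch stays idle; in a real implementation the FE re-analysis after each step is what lets removed elements return.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_elem, target_frac, evo_rate = 200, 0.4, 0.02
    energy = rng.gamma(2.0, 1.0, n_elem)   # stand-in element strain energies
    active = np.ones(n_elem, dtype=bool)

    while active.mean() > target_frac:
        n_remove = max(1, int(evo_rate * n_elem))
        sens = np.where(active, energy, np.inf)  # only active elements removable
        active[np.argsort(sens)[:n_remove]] = False
        # Bi-directional step: re-admit removed elements whose sensitivity
        # exceeds twice the weakest active one. With fixed stub energies this
        # never fires; with FE re-analysis the sensitivities would change.
        threshold = energy[active].min()
        active |= (~active) & (energy > 2.0 * threshold)

    print(f"final volume fraction: {active.mean():.2f}")
    ```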

  7. Fast flexible modeling of RNA structure using internal coordinates.

    Science.gov (United States)

    Flores, Samuel Coulbourn; Sherman, Michael A; Bruns, Christopher M; Eastman, Peter; Altman, Russ Biagio

    2011-01-01

    Modeling the structure and dynamics of large macromolecules remains a critical challenge. Molecular dynamics (MD) simulations are expensive because they model every atom independently, and are difficult to combine with experimentally derived knowledge. Assembly of molecules using fragments from libraries relies on the database of known structures and thus may not work for novel motifs. Coarse-grained modeling methods have yielded good results on large molecules but can suffer from difficulties in creating more detailed full atomic realizations. There is therefore a need for molecular modeling algorithms that remain chemically accurate and economical for large molecules, do not rely on fragment libraries, and can incorporate experimental information. RNABuilder works in the internal coordinate space of dihedral angles and thus has time requirements proportional to the number of moving parts rather than the number of atoms. It provides accurate physics-based response to applied forces, but also allows user-specified forces for incorporating experimental information. A particular strength of RNABuilder is that all Leontis-Westhof basepairs can be specified as primitives by the user to be satisfied during model construction. We apply RNABuilder to predict the structure of an RNA molecule with 160 bases from its secondary structure, as well as experimental information. Our model matches the known structure to 10.2 Angstroms RMSD and has low computational expense.

  8. A hidden markov model derived structural alphabet for proteins.

    Science.gov (United States)

    Camproux, A C; Gautier, R; Tufféry, P

    2004-06-04

    Understanding and predicting protein structures depends on the complexity and the accuracy of the models used to represent them. We have set up a hidden Markov model that discretizes protein backbone conformation as series of overlapping fragments (states) of four residues length. This approach learns simultaneously the geometry of the states and their connections. We obtain, using a statistical criterion, an optimal systematic decomposition of the conformational variability of the protein peptidic chain in 27 states with strong connection logic. This result is stable over different protein sets. Our model fits well the previous knowledge related to protein architecture organisation and seems able to grab some subtle details of protein organisation, such as helix sub-level organisation schemes. Taking into account the dependence between the states results in a description of local protein structure of low complexity. On an average, the model makes use of only 8.3 states among 27 to describe each position of a protein structure. Although we use short fragments, the learning process on entire protein conformations captures the logic of the assembly on a larger scale. Using such a model, the structure of proteins can be reconstructed with an average accuracy close to 1.1 Å root-mean-square deviation and for a complexity of only 3. Finally, we also observe that sequence specificity increases with the number of states of the structural alphabet. Such models can constitute a very relevant approach to the analysis of protein architecture in particular for protein structure prediction.

  9. Measuring and modelling the structure of chocolate

    Science.gov (United States)

    Le Révérend, Benjamin J. D.; Fryer, Peter J.; Smart, Ian; Bakalis, Serafim

    2015-01-01

    The cocoa butter present in chocolate exists as six different polymorphs. To achieve the desired crystal form (βV), traditional chocolate manufacturers use relatively slow cooling. A model was developed to predict the temperature of chocolate products during processing as well as the crystal structure of cocoa butter throughout the process. A set of ordinary differential equations describes the kinetics of fat crystallisation. The parameters were obtained by fitting the model to a set of DSC curves. The heat transfer equations were coupled to the kinetic model and solved using commercially available CFD software. A method using single crystal XRD was developed using a novel subtraction method to quantify the cocoa butter structure in chocolate directly, and the results were compared to the ones predicted from the model. The model was proven to predict phase change temperature during processing accurately (±1°C). Furthermore, it was possible to correctly predict phase changes and polymorphous transitions. The good agreement between the model and experimental data on the model geometry allows a better design and control of industrial processes.

  10. Three novel approaches to structural identifiability analysis in mixed-effects models.

    Science.gov (United States)

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the systems transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provides a way of handling structural identifiability in mixed-effects models previously not

  11. Icosahedral symmetry described by an incommensurately modulated crystal structure model

    DEFF Research Database (Denmark)

    Wolny, Janusz; Lebech, Bente

    1986-01-01

    A crystal structure model of an incommensurately modulated structure is presented. Although six different reciprocal vectors are used to describe the model, all calculations are done in three dimensions, making calculation of the real-space structure trivial. Using this model, it is shown that both the positions of the Bragg reflections and information about the relative intensities of these reflections are in full accordance with the diffraction patterns reported for microcrystals of the rapidly quenched Al86Mn14 alloy. It is also shown that at least the local structure possesses full icosahedral

  12. Geological-structural models used in SR 97. Uncertainty analysis

    Energy Technology Data Exchange (ETDEWEB)

    Saksa, P.; Nummela, J. [FINTACT Oy (Finland)

    1998-10-01

    The uncertainty of geological-structural models was studied for the three sites in SR 97, called Aberg, Beberg and Ceberg. The evaluation covered both regional and site scale models, the emphasis being placed on fracture zones in the site scale. Uncertainty is a natural feature of all geoscientific investigations. It originates from measurements (errors in data, sampling limitations, scale variation) and conceptualisation (structural geometries and properties, ambiguous geometric or parametric solutions) to name the major ones. The structures of A-, B- and Ceberg are fracture zones of varying types. No major differences in the conceptualisation between the sites were noted. One source of uncertainty in the site models is the non-existence of fracture and zone information at the scale from 10 to 300 - 1000 m. At Aberg the development of the regional model has been performed very thoroughly. At the site scale one major source of uncertainty is that a clear definition of the target area is missing. Structures encountered in the boreholes are well explained and an interdisciplinary approach to interpretation has taken place. The Beberg and Ceberg regional models contain relatively large uncertainties due to the investigation methodology and experience available at that time. At the site scale six additional structures were proposed for both Beberg and Ceberg for variant analysis of these sites. Both sites include uncertainty in the form of many non-interpreted fractured sections along the boreholes. Statistical analysis gives high occurrences of structures for all three sites: typically 20 - 30 structures/km³. Aberg has the highest structural frequency, Beberg comes next and Ceberg has the lowest. The borehole configuration, orientations and surveying goals were inspected to find whether preferences or factors causing bias were present. Data from Aberg supports the conclusion that the Aespoe sub-volume would be an anomalously fractured, tectonised unit of its own. This means that

  13. Geological-structural models used in SR 97. Uncertainty analysis

    International Nuclear Information System (INIS)

    Saksa, P.; Nummela, J.

    1998-10-01

    The uncertainty of geological-structural models was studied for the three sites in SR 97, called Aberg, Beberg and Ceberg. The evaluation covered both regional and site scale models, the emphasis being placed on fracture zones in the site scale. Uncertainty is a natural feature of all geoscientific investigations. It originates from measurements (errors in data, sampling limitations, scale variation) and conceptualisation (structural geometries and properties, ambiguous geometric or parametric solutions) to name the major ones. The structures of A-, B- and Ceberg are fracture zones of varying types. No major differences in the conceptualisation between the sites were noted. One source of uncertainty in the site models is the non-existence of fracture and zone information at the scale from 10 to 300 - 1000 m. At Aberg the development of the regional model has been performed very thoroughly. At the site scale one major source of uncertainty is that a clear definition of the target area is missing. Structures encountered in the boreholes are well explained and an interdisciplinary approach to interpretation has taken place. The Beberg and Ceberg regional models contain relatively large uncertainties due to the investigation methodology and experience available at that time. At the site scale six additional structures were proposed for both Beberg and Ceberg for variant analysis of these sites. Both sites include uncertainty in the form of many non-interpreted fractured sections along the boreholes. Statistical analysis gives high occurrences of structures for all three sites: typically 20 - 30 structures/km³. Aberg has the highest structural frequency, Beberg comes next and Ceberg has the lowest. The borehole configuration, orientations and surveying goals were inspected to find whether preferences or factors causing bias were present. Data from Aberg supports the conclusion that the Aespoe sub-volume would be an anomalously fractured, tectonised unit of its own. This means that the

  14. Assessing the accuracy of ancestral protein reconstruction methods.

    Directory of Open Access Journals (Sweden)

    Paul D Williams

    2006-06-01

    The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of "ancestral sequences" inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a "best guess" amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated.

  15. Assessing the accuracy of ancestral protein reconstruction methods.

    Science.gov (United States)

    Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A

    2006-06-23

    The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of "ancestral sequences" inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a "best guess" amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated.
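
    The maximum-parsimony reconstructions examined above rest on small-parsimony computations of the kind Fitch's algorithm performs; below is a minimal sketch for a single sequence position on a fixed binary tree (nested tuples, leaves holding observed residues).

    ```python
    def fitch(node):
        """Return (state set, substitution count) for the subtree at `node`."""
        if isinstance(node, str):          # leaf: observed amino acid
            return {node}, 0
        left, right = node
        s1, c1 = fitch(left)
        s2, c2 = fitch(right)
        inter = s1 & s2
        if inter:                          # agreement: no extra substitution
            return inter, c1 + c2
        return s1 | s2, c1 + c2 + 1        # disagreement: one substitution

    tree = ((("A", "A"), ("S", "A")), ("S", "S"))
    states, cost = fitch(tree)
    print(f"root state set {states}, minimum substitutions {cost}")
    ```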

  16. Track structure model of cell damage in space flight

    Science.gov (United States)

    Katz, Robert; Cucinotta, Francis A.; Wilson, John W.; Shinn, Judy L.; Ngo, Duc M.

    1992-01-01

    The phenomenological track-structure model of cell damage is discussed. A description of the application of the track-structure model with the NASA Langley transport code for laboratory and space radiation is given. Comparisons to experimental results for cell survival during exposure to monoenergetic, heavy-ion beams are made. The model is also applied to predict cell damage rates and relative biological effectiveness for deep-space exposures.

  17. Structural Acoustic Physics Based Modeling of Curved Composite Shells

    Science.gov (United States)

    2017-09-19

    NUWC-NPT Technical Report 12,236, 19 September 2017: Structural Acoustic Physics-Based Modeling of Curved Composite Shells, by Rachel E. Hesse. The objective of the study was to use physics-based modeling (PBM) to investigate wave propagations through curved shells that are subjected to acoustic excitation. An

  18. Consequences of Collective-Focused Leadership and Differentiated Individual-Focused Leadership : Development and Testing of an Organizational-Level Model

    OpenAIRE

    Kunze, Florian; de Jong, Simon Barend; Bruch, Heike

    2016-01-01

    Recent advances in leadership research suggest that collective-focused leadership climate and differentiated individual-focused leadership might simultaneously, yet oppositely, affect collective outcomes. The present study extends this literature by addressing open questions regarding theory, methods, statistics, and level of analysis. Therefore, a new and more parsimonious theoretical model is developed at the organizational level of analysis. Drawing on the commitment literature, we argue f...

  19. Conservation of concrete structures according to fib Model Code 2010

    NARCIS (Netherlands)

    Matthews, S.; Bigaj-Van Vliet, A.; Ueda, T.

    2013-01-01

    Conservation of concrete structures forms an essential part of the fib Model Code for Concrete Structures 2010 (fib Model Code 2010). In particular, Chapter 9 of fib Model Code 2010 addresses issues concerning conservation strategies and tactics, conservation management, condition surveys, condition

  20. CONFOLD2: improved contact-driven ab initio protein structure modeling.

    Science.gov (United States)

    Adhikari, Badri; Cheng, Jianlin

    2018-01-25

    Contact-guided protein structure prediction methods are becoming more and more successful because of the latest advances in residue-residue contact prediction. To support contact-driven structure prediction, effective tools that can quickly build tertiary structural models of good quality from predicted contacts need to be developed. We develop an improved contact-driven protein modelling method, CONFOLD2, and study how it may be effectively used for ab initio protein structure prediction with predicted contacts as input. It builds models using various subsets of input contacts to explore the fold space under the guidance of a soft square energy function, and then clusters the models to obtain the top five models. CONFOLD2 obtains an average reconstruction accuracy of 0.57 TM-score for the 150 proteins in the PSICOV contact prediction dataset. When benchmarked on the CASP11 contacts predicted using CONSIP2 and the CASP12 contacts predicted using Raptor-X, CONFOLD2 achieves a mean TM-score of 0.41 on both datasets. CONFOLD2 allows one to quickly generate the top five structural models for a protein sequence when its secondary structure and contact predictions are at hand. The source code of CONFOLD2 is publicly available at https://github.com/multicom-toolbox/CONFOLD2/ .

  1. Structural model of dodecameric heat-shock protein Hsp21

    DEFF Research Database (Denmark)

    Rutsdottir, Gudrun; Härmark, Johan; Weide, Yoran

    2017-01-01

    for investigating structure-function relationships of Hsp21 and understanding these sequence variations, we developed a structural model of Hsp21 based on homology modeling, cryo-EM, cross-linking mass spectrometry, NMR, and small-angle X-ray scattering. Our data suggest a dodecameric arrangement of two trimer...

  2. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    Science.gov (United States)

    Mori, Shintaro; Kitsukawa, Kenji; Hisakado, Masato

    2008-11-01

    We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution has singular correlation structures, reflecting the credit market implications. We point out two possible origins of the singular behavior.
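
    Of the models compared above, the Beta-binomial case is easy to reproduce: mixing the default probability with a Beta distribution induces the default correlation, and the portfolio loss distribution follows directly. The parameter choices below are illustrative; the identity corr = 1/(a+b+1) is the standard one for Beta-mixed exchangeable Bernoulli variables.

    ```python
    from scipy.stats import betabinom

    n = 50                    # names in a homogeneous portfolio
    p, rho = 0.05, 0.10       # mean default probability and default correlation

    # Choose Beta(a, b) so that E[p] = p and corr = 1 / (a + b + 1) = rho
    a = p * (1.0 / rho - 1.0)
    b = (1.0 - p) * (1.0 / rho - 1.0)

    for k in (0, 1, 2, 5, 10):
        print(f"P[{k} defaults] = {betabinom.pmf(k, n, a, b):.4f}")
    ```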

  3. Bourbaki's structure theory in the problem of complex systems simulation models synthesis and model-oriented programming

    Science.gov (United States)

    Brodsky, Yu. I.

    2015-01-01

    The work is devoted to the application of Bourbaki's structure theory to substantiate the synthesis of simulation models of complex multicomponent systems, where every component may be a complex system itself. An application of Bourbaki's structure theory offers a new approach to the design and computer implementation of simulation models of complex multicomponent systems—model synthesis and model-oriented programming. It differs from the traditional object-oriented approach. The central concept of this new approach, and at the same time the basic building block for the construction of more complex structures, is the model-component. A model-component is endowed with a more complicated structure than, for example, an object in object-oriented analysis. This structure gives the model-component independent behavior: the ability to respond in a standard way to standard requests from its internal and external environment. At the same time, the computer implementation of a model-component's behavior is invariant under the integration of model-components into complexes. This allows one, firstly, to construct fractal models of any complexity and, secondly, to implement the computational process for such constructions uniformly, by a single universal program. In addition, the proposed paradigm allows one to exclude imperative programming and to generate computer code with a high degree of parallelism.

  4. Accurate SHAPE-directed RNA secondary structure modeling, including pseudoknots.

    Science.gov (United States)

    Hajdin, Christine E; Bellaousov, Stanislav; Huggins, Wayne; Leonard, Christopher W; Mathews, David H; Weeks, Kevin M

    2013-04-02

    A pseudoknot forms in an RNA when nucleotides in a loop pair with a region outside the helices that close the loop. Pseudoknots occur relatively rarely in RNA but are highly overrepresented in functionally critical motifs in large catalytic RNAs, in riboswitches, and in regulatory elements of viruses. Pseudoknots are usually excluded from RNA structure prediction algorithms. When included, these pairings are difficult to model accurately, especially in large RNAs, because allowing this structure dramatically increases the number of possible incorrect folds and because it is difficult to search the fold space for an optimal structure. We have developed a concise secondary structure modeling approach that combines SHAPE (selective 2'-hydroxyl acylation analyzed by primer extension) experimental chemical probing information and a simple, but robust, energy model for the entropic cost of single pseudoknot formation. Structures are predicted with iterative refinement, using a dynamic programming algorithm. This melded experimental and thermodynamic energy function predicted the secondary structures and the pseudoknots for a set of 21 challenging RNAs of known structure ranging in size from 34 to 530 nt. On average, 93% of known base pairs were predicted, and all pseudoknots in well-folded RNAs were identified.
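
    The melded experimental/thermodynamic energy function described above adds a SHAPE pseudo-free-energy term to the nearest-neighbour model for every paired nucleotide. The sketch below uses the commonly quoted log-linear form with slope and intercept of roughly m = 2.6 and b = -0.8 kcal/mol; the reactivity values are illustrative.

    ```python
    import math

    def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
        """Pseudo-free-energy change (kcal/mol) for pairing one nucleotide:
        dG_SHAPE = m * ln(reactivity + 1) + b."""
        return m * math.log(reactivity + 1.0) + b

    for r in (0.0, 0.3, 1.5):  # low reactivity favours pairing, high penalizes
        print(f"SHAPE={r:.1f} -> dG = {shape_pseudo_energy(r):+.2f} kcal/mol")
    ```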

  5. Enhancement of a parsimonious water balance model to simulate surface hydrology in a glacierized watershed

    Science.gov (United States)

    Valentin, Melissa M.; Viger, Roland J.; Van Beusekom, Ashley E.; Hay, Lauren E.; Hogue, Terri S.; Foks, Nathan Leon

    2018-01-01

    The U.S. Geological Survey monthly water balance model (MWBM) was enhanced with the capability to simulate glaciers in order to make it more suitable for simulating cold region hydrology. The new model, MWBMglacier, is demonstrated in the heavily glacierized and ecologically important Copper River watershed in Southcentral Alaska. Simulated water budget components compared well to satellite‐based observations and ground measurements of streamflow, evapotranspiration, snow extent, and total water storage, with differences ranging from 0.2% to 7% of the precipitation flux. Nash-Sutcliffe efficiency for simulated and observed streamflow was greater than 0.8 for six of eight stream gages. Snow extent matched satellite‐based observations with Nash-Sutcliffe efficiency values of greater than 0.89 in the four Copper River ecoregions represented. During the simulation period 1949 to 2009, glacier ice melt contributed 25% of total runoff, ranging from 12% to 45% in different tributaries, and glacierized area was reduced by 6%. Statistically significant (p < 0.05) decreasing and increasing trends in annual glacier mass balance occurred during the multidecade cool and warm phases of the Pacific Decadal Oscillation, respectively, reinforcing the link between climate perturbations and glacier mass balance change. The simulations of glaciers and total runoff for a large, remote region of Alaska provide useful data to evaluate hydrologic, cryospheric, ecologic, and climatic trends. MWBMglacier is a valuable tool to understand when, and to what extent, streamflow may increase or decrease as glaciers respond to a changing climate.
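
    As a small aside, the goodness-of-fit score quoted above is straightforward to compute. A minimal sketch (with invented streamflow values) follows.

      # Nash-Sutcliffe efficiency: 1.0 is a perfect match; values <= 0 mean the
      # simulation is no better than predicting the observed mean.
      import numpy as np

      def nash_sutcliffe(observed, simulated):
          obs, sim = np.asarray(observed, float), np.asarray(simulated, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      # Hypothetical monthly runoff series, for illustration only:
      print(nash_sutcliffe([10.0, 12.0, 30.0, 25.0], [11.0, 13.0, 28.0, 24.0]))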

  6. A resource for benchmarking the usefulness of protein structure models.

    Science.gov (United States)

    Carbajo, Daniel; Tramontano, Anna

    2012-08-02

    Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on the knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess which is the specific threshold of accuracy required to perform the task effectively. The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. IMPLEMENTATION, AVAILABILITY AND REQUIREMENTS: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any restrictions to use by non-academics: No.

  7. A resource for benchmarking the usefulness of protein structure models

    Directory of Open Access Journals (Sweden)

    Carbajo Daniel

    2012-08-01

    Full Text Available Abstract Background Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on the knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. Results This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess which is the specific threshold of accuracy required to perform the task effectively. Conclusions The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. Implementation, availability and requirements Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any restrictions to use by non-academics: No.

  8. A resource for benchmarking the usefulness of protein structure models.

    KAUST Repository

    Carbajo, Daniel

    2012-08-02

    BACKGROUND: Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on the knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. RESULTS: This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess which is the specific threshold of accuracy required to perform the task effectively. CONCLUSIONS: The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. IMPLEMENTATION, AVAILABILITY AND REQUIREMENTS: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any restrictions to use by non-academics: No.

  9. A resource for benchmarking the usefulness of protein structure models.

    KAUST Repository

    Carbajo, Daniel; Tramontano, Anna

    2012-01-01

    BACKGROUND: Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on the knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. RESULTS: This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess which is the specific threshold of accuracy required to perform the task effectively. CONCLUSIONS: The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. IMPLEMENTATION, AVAILABILITY AND REQUIREMENTS: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any restrictions to use by non-academics: No.

  10. A theoretical model to predict customer satisfaction in relation to service quality in selected university libraries in Sri Lanka

    Directory of Open Access Journals (Sweden)

    Chaminda Jayasundara

    2009-01-01

    Full Text Available University library administrators in Sri Lanka have begun to search for alternative ways to satisfy their clientele on the basis of service quality. This article aims at providing a theoretical model to facilitate the identification of service quality attributes and domains that may be used to predict customer satisfaction from a service quality perspective. The effectiveness of existing service quality models such as LibQUAL, SERVQUAL and SERVPERF has been questioned. In that regard, this study developed a theoretical model for academic libraries in Sri Lanka based on the disconfirmation and performance-only paradigms, which researchers consider the core mechanisms for developing service quality/customer satisfaction models. The identification of service quality attributes and domains was carried out with a stratified sample of 263 participants selected from postgraduate and undergraduate students and academic staff members from the faculties of Arts in four universities in Sri Lanka. The study established that responsiveness, supportiveness, building environment, collection and access, furniture and facilities, technology, Web services and service delivery were quality domains which can be used to predict customer satisfaction. The theoretical model is unique in its domain structure compared to the existing models. The model needs to be statistically tested to make it valid and parsimonious.

  11. PSpice Model of Lightning Strike to a Steel Reinforced Structure

    International Nuclear Information System (INIS)

    Koone, Neil; Condren, Brian

    2003-01-01

    Surges and arcs from lightning can pose hazards to personnel, sensitive equipment, and processes. Steel reinforcement in structures can act as a Faraday cage, mitigating lightning effects. Knowing a structure's response to a lightning strike allows the associated hazards to be analyzed. A model of a steel-reinforced structure's response to lightning has been developed using PSpice (a commercial circuit simulator). Segments of rebar are modeled as inductors and resistors in series. A program has been written to take architectural information for a steel-reinforced structure and 'build' a circuit network that is analogous to the network of reinforcement in a facility. A severe current waveform (simulating a 99th-percentile lightning strike), modeled as a current source, is introduced into the circuit network, and potential differences within the structure are determined using PSpice. A visual three-dimensional model of the facility displays the voltage distribution across the structure, using color to indicate the potential difference relative to the floor. Clear-air arcing distances can be calculated from the voltage distribution using a conservative value for the dielectric breakdown strength of air. Potential validation tests for the model will be presented
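
    To make the series R-L idealization concrete, here is a minimal sketch that emits a SPICE netlist for a single run of rebar as a chain of R-L segments driven by a surge current source; the segment values, surge waveform and grounding resistor are placeholders, not the study's data.

      # Emit a SPICE netlist for one rebar run modeled as series R-L segments.
      # All element values and the PULSE surge waveform are placeholders.
      N_SEG = 5
      R_SEG = 0.001       # ohms per segment (placeholder)
      L_SEG = 1.0e-6      # henries per segment (placeholder)

      lines = ["* rebar run as a series R-L chain",
               "I1 0 1 PULSE(0 100k 0 2u 50u 1u 1)"]   # surge source (placeholder)
      for i in range(1, N_SEG + 1):
          lines.append(f"R{i} {2 * i - 1} {2 * i} {R_SEG}")
          lines.append(f"L{i} {2 * i} {2 * i + 1} {L_SEG}")
      lines += [f"Rg {2 * N_SEG + 1} 0 0.01",           # path to ground
                ".tran 0.1u 100u", ".end"]
      print("\n".join(lines))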

  12. Structural characterization and viscoelastic constitutive modeling of skin.

    Science.gov (United States)

    Sherman, Vincent R; Tang, Yizhe; Zhao, Shiteng; Yang, Wen; Meyers, Marc A

    2017-04-15

    A fascinating material, skin has a tensile response which exhibits an extended toe region of minimal stress up to nominal strains that, in some species, exceed 1, followed by significant stiffening until a roughly linear region. The large toe region has been attributed to its unique structure, consisting of a network of curved collagen fibers. Investigation of the structure of rabbit skin reveals that it consists of layers of wavy fibers, each one with a characteristic orientation. Additionally, the existence of two preferred layer orientations is suggested based on the results of small angle X-ray scattering. These observations are used to construct a viscoelastic model consisting of collagen in two orientations, which leads to an in-plane anisotropic response. The structure-based model presented incorporates the elastic straightening and stretching of fibrils, their rotation towards the tensile axis, and the viscous effects which occur in the matrix of the skin due to interfibrillar and interlamellar sliding. The model is shown to effectively capture key features which dictate the mechanical response of skin. Examination by transmission and scanning electron microscopy of rabbit dermis enabled the identification of the key elements in its structure. The organization of collagen fibrils into flat fibers was identified and incorporated into a constitutive model that reproduces the mechanical response of skin. This enhanced quantitative predictive capability can be used in the design of synthetic skin and skin-like structures. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  13. Use of different residual variance structures in random regression models to describe the growth curve of partridges (Rhynchotus rufescens) raised in captivity

    Directory of Open Access Journals (Sweden)

    Patrícia Tholon

    2008-01-01

    Full Text Available Random regression models (RRM) allow heterogeneous residual variances to be considered when describing growth at each age. However, this feature increases the number of parameters to be estimated in the likelihood maximization process. In the search for more parsimonious RRM, several approaches have been suggested; one of them is the use of different residual variance structures, modelled either through a step function over classes of similar variance or through variance functions. A total of 7,369 body weight records of partridges, measured from birth to 210 days of age on birds born from 2000 to 2004, were used in this research. The random regression models applied to the data set considered different residual variance structures and were fitted by the restricted maximum likelihood method. The residual variances were modeled using classes of 210 (R210) and 30 (R30) ages and variance functions with orders ranging from quadratic (VF2) to ninth (VF9). The R30 classes grouped birds weighed in the same week. The random effects included were the direct additive genetic and the permanent environment effects of the animal. It was not possible to include maternal effects in the models. All random effects were modelled by sixth-order regression on Legendre polynomials. The models were compared by the likelihood ratio test, Akaike's information criterion and Schwarz's Bayesian information criterion. The best results were shown by the models R210 and VF5. In conclusion, the most parsimonious model was VF5, and it should be applied to fit growth records of partridges.
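
    As a side note on the covariable used above, a minimal sketch of building the Legendre-polynomial design matrix over standardized ages follows (the ages and the polynomial order are illustrative).

      # Design matrix of Legendre polynomials over ages standardized to [-1, 1],
      # the covariable used in random regression models such as the ones above.
      import numpy as np

      def legendre_design(ages, order):
          t = np.asarray(ages, dtype=float)
          x = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0
          return np.polynomial.legendre.legvander(x, order)

      Z = legendre_design(np.arange(0, 211, 30), order=6)   # ages 0..210 days
      print(Z.shape)   # (8, 7): one row per age, one column per coefficient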

  14. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique for calibrating numerical models of structures in civil, mechanical, automotive, marine, aerospace and related engineering fields. The basic concept behind this technique is to update the numerical model so that it closely matches experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process, a response parameter of the structure has to be chosen that correlates the numerical model with the experimental results. The variables for the updating can be material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show a close match between the experimental and numerical models.
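
    A bare-bones sketch of the optimizer named above follows; the quadratic objective is a stand-in for the real discrepancy between measured and simulated responses (which would wrap a structural solver), and all algorithm constants are generic textbook choices.

      # Minimal firefly algorithm minimizing a stand-in discrepancy function.
      import numpy as np

      rng = np.random.default_rng(0)
      target = np.array([2.0, 0.5])          # hypothetical "true" parameters

      def objective(x):                      # discrepancy to be minimized
          return np.sum((x - target) ** 2)

      n_ff, n_iter, dim = 15, 100, 2
      alpha, beta0, gamma = 0.2, 1.0, 1.0
      X = rng.uniform(0.0, 3.0, size=(n_ff, dim))

      for _ in range(n_iter):
          f = np.array([objective(x) for x in X])
          for i in range(n_ff):
              for j in range(n_ff):
                  if f[j] < f[i]:            # move firefly i toward brighter j
                      r2 = np.sum((X[i] - X[j]) ** 2)
                      X[i] += (beta0 * np.exp(-gamma * r2) * (X[j] - X[i])
                               + alpha * (rng.random(dim) - 0.5))
          alpha *= 0.97                      # shrink the random walk over time

      print(X[np.argmin([objective(x) for x in X])])   # approaches `target`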

  15. Heterogeneous traffic flow modelling using second-order macroscopic continuum model

    Science.gov (United States)

    Mohan, Ranju; Ramadurai, Gitakrishnan

    2017-01-01

    Modelling heterogeneous traffic flow lacking lane discipline has been an emerging research area in the past few years. The two main challenges in modelling are capturing the effect of the varying size of vehicles and the lack of lane discipline, which together lead to the 'gap filling' behaviour of vehicles. The same section of road can be occupied by different types of vehicles at the same time, so the conventional measure of traffic concentration, density (vehicles per lane per unit length), is not a good measure for heterogeneous traffic modelling. The first aim of this paper is to build a parsimonious model of heterogeneous traffic that can capture the unique phenomenon of gap filling. The second aim is to emphasize the suitability of higher-order models for modelling heterogeneous traffic. Third, the paper aims to suggest area occupancy as the concentration measure for heterogeneous traffic lacking lane discipline. The two challenges above are addressed by extending an existing second-order continuum model of traffic flow, using area occupancy for traffic concentration instead of density. The extended model is calibrated and validated with field data from an arterial road in Chennai city, and the results are compared with those from a few existing generalized multi-class models.

  16. Conformational sampling in template-free protein loop structure modeling: an overview.

    Science.gov (United States)

    Li, Yaohang

    2013-01-01

    Accurately modeling protein loops is an important step to predict three-dimensional structures as well as to understand functions of many proteins. Because of their high flexibility, modeling the three-dimensional structures of loops is difficult and is usually treated as a "mini protein folding problem" under geometric constraints. In the past decade, there has been remarkable progress in template-free loop structure modeling due to advances in computational methods as well as the steadily increasing number of known structures available in the PDB. This mini review provides an overview of the recent computational approaches for loop structure modeling. In particular, we focus on the approaches of sampling loop conformation space, which is a critical step to obtain high-resolution models in template-free methods. We review the potential energy functions for loop modeling, loop buildup mechanisms to satisfy geometric constraints, and loop conformation sampling algorithms. The recent loop modeling results are also summarized.

  17. CONFORMATIONAL SAMPLING IN TEMPLATE-FREE PROTEIN LOOP STRUCTURE MODELING: AN OVERVIEW

    Directory of Open Access Journals (Sweden)

    Yaohang Li

    2013-02-01

    Full Text Available Accurately modeling protein loops is an important step to predict three-dimensional structures as well as to understand functions of many proteins. Because of their high flexibility, modeling the three-dimensional structures of loops is difficult and is usually treated as a “mini protein folding problem” under geometric constraints. In the past decade, there has been remarkable progress in template-free loop structure modeling due to advances in computational methods as well as the steadily increasing number of known structures available in the PDB. This mini review provides an overview of the recent computational approaches for loop structure modeling. In particular, we focus on the approaches of sampling loop conformation space, which is a critical step to obtain high-resolution models in template-free methods. We review the potential energy functions for loop modeling, loop buildup mechanisms to satisfy geometric constraints, and loop conformation sampling algorithms. The recent loop modeling results are also summarized.

  18. Galactic models with variable spiral structure

    International Nuclear Information System (INIS)

    James, R.A.; Sellwood, J.A.

    1978-01-01

    A series of three-dimensional computer simulations of disc galaxies has been run in which the self-consistent potential of the disc stars is supplemented by that arising from a small uniform Population II sphere. The models show variable spiral structure, which is more pronounced for thin discs. In addition, the thin discs form weak bars. In one case variable spiral structure associated with this bar has been seen. The relaxed discs are cool outside resonance regions. (author)

  19. PROBLEMS OF MATHEMATICAL MODELING OF THE ENTERPRISES ORGANIZATIONAL STRUCTURE

    Directory of Open Access Journals (Sweden)

    N. V. Andrianov

    2006-01-01

    Full Text Available An analysis of the mathematical models that can be used to optimize the control system of an enterprise's organizational structure is presented. A new approach to the mathematical modeling of enterprise organizational structure, based on the temporal characteristics of the operation of the control blocks, is formulated.

  20. A micromagnetic study of domain structure modeling

    International Nuclear Information System (INIS)

    Matsuo, Tetsuji; Mimuro, Naoki; Shimasaki, Masaaki

    2008-01-01

    To develop a mesoscopic model of magnetic-domain behavior, a domain structure model (DSM) was examined and compared with a micromagnetic simulation. The domain structure of this model is given by several domains with uniform magnetization vectors and by domain walls. The directions of the magnetization vectors and the locations of the domain walls are determined so as to minimize the total magnetic energy of the material. The DSM was modified to improve its capability to represent domain behavior. The domain wall energy is multiplied by a vanishing factor to represent the disappearance of a magnetic domain, and the sequential quadratic programming procedure is divided into two steps to improve the energy minimization process. A comparison with micromagnetic simulation shows that the modified DSM improves the representation accuracy of the magnetization process.

  1. Modeling the Structure and Complexity of Engineering Routine Design Problems

    NARCIS (Netherlands)

    Jauregui Becker, Juan Manuel; Wits, Wessel Willems; van Houten, Frederikus J.A.M.

    2011-01-01

    This paper proposes a model to structure routine design problems as well as a model of its design complexity. The idea is that having a proper model of the structure of such problems enables understanding its complexity, and likewise, a proper understanding of its complexity enables the development

  2. Combining a popularity-productivity stochastic block model with a discriminative-content model for general structure detection.

    Science.gov (United States)

    Chai, Bian-fang; Yu, Jian; Jia, Cai-Yan; Yang, Tian-bao; Jiang, Ya-wen

    2013-07-01

    Latent community discovery that combines links and contents of a text-associated network has drawn more attention with the advance of social media. Most of the previous studies aim at detecting densely connected communities and are not able to identify general structures, e.g., bipartite structure. Several variants based on the stochastic block model are more flexible for exploring general structures by introducing link probabilities between communities. However, these variants cannot identify the degree distributions of real networks due to a lack of modeling of the differences among nodes, and they are not suitable for discovering communities in text-associated networks because they ignore the contents of nodes. In this paper, we propose a popularity-productivity stochastic block (PPSB) model by introducing two random variables, popularity and productivity, to model the differences among nodes in receiving links and producing links, respectively. This model has the flexibility of existing stochastic block models in discovering general community structures and inherits the richness of previous models that also exploit popularity and productivity in modeling the real scale-free networks with power law degree distributions. To incorporate the contents in text-associated networks, we propose a combined model which combines the PPSB model with a discriminative model that models the community memberships of nodes by their contents. We then develop expectation-maximization (EM) algorithms to infer the parameters in the two models. Experiments on synthetic and real networks have demonstrated that the proposed models can yield better performances than previous models, especially on networks with general structures.
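
    For a feel of the modelling idea (not the paper's exact specification), the sketch below evaluates the log-likelihood of a directed Poisson block model in which each node carries a productivity (out-link) factor and a popularity (in-link) factor; the content model and the EM algorithm of the paper are not shown, and all parameters are invented.

      # Directed Poisson block model with per-node productivity/popularity:
      # rate(u -> v) = prod[u] * pop[v] * B[g[u], g[v]]. Illustrative only.
      import numpy as np

      def log_likelihood(A, g, prod, pop, B):
          lam = np.outer(prod, pop) * B[np.ix_(g, g)]
          off = ~np.eye(len(g), dtype=bool)        # ignore self-loops
          return np.sum((A * np.log(lam) - lam)[off])

      rng = np.random.default_rng(1)
      g = np.array([0, 0, 0, 1, 1, 1])             # two communities
      B = np.array([[1.5, 0.2], [0.2, 1.5]])       # block rate matrix
      prod = rng.gamma(2.0, 0.5, size=6)           # productivity factors
      pop = rng.gamma(2.0, 0.5, size=6)            # popularity factors
      A = rng.poisson(np.outer(prod, pop) * B[np.ix_(g, g)])
      print(log_likelihood(A, g, prod, pop, B))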

  3. Travelling Wave Solutions in Multigroup Age-Structured Epidemic Models

    Science.gov (United States)

    Ducrot, Arnaut; Magal, Pierre; Ruan, Shigui

    2010-01-01

    Age-structured epidemic models have been used to describe either the age of individuals or the age of infection of certain diseases and to determine how these characteristics affect the outcomes and consequences of epidemiological processes. Most results on age-structured epidemic models focus on the existence, uniqueness, and convergence to disease equilibria of solutions. In this paper we investigate the existence of travelling wave solutions in a deterministic age-structured model describing the circulation of a disease within a population of multigroups. Individuals of each group are able to move with a random walk which is modelled by the classical Fickian diffusion and are classified into two subclasses, susceptible and infective. A susceptible individual in a given group can be crisscross infected by direct contact with infective individuals of possibly any group. This process of transmission can depend upon the age of the disease of infected individuals. The goal of this paper is to provide sufficient conditions that ensure the existence of travelling wave solutions for the age-structured epidemic model. The case of two population groups is numerically investigated which applies to the crisscross transmission of feline immunodeficiency virus (FIV) and some sexual transmission diseases.

  4. Empirical Analysis of Farm Credit Risk under the Structure Model

    Science.gov (United States)

    Yan, Yan

    2009-01-01

    The study measures farm credit risk by using farm records collected by Farm Business Farm Management (FBFM) during the period 1995-2004. The study addresses the following questions: (1) whether a farm's financial position is fully described by the structure model, (2) what the determinants of farm capital structure are under the structure model, (3)…

  5. Modelling the structure of complex networks

    DEFF Research Database (Denmark)

    Herlau, Tue

    networks has been independently studied as mathematical objects in their own right. As such, there has been both an increased demand for statistical methods for complex networks as well as a quickly growing mathematical literature on the subject. In this dissertation we explore aspects of modelling complex....... The next chapters will treat some of the various symmetries, representer theorems and probabilistic structures often deployed in the modelling of complex networks, the construction of sampling methods and various network models. The introductory chapters will serve to provide context for the included written...

  6. Modeling accelerator structures and RF components

    International Nuclear Information System (INIS)

    Ko, K., Ng, C.K.; Herrmannsfeldt, W.B.

    1993-03-01

    Computer modeling has become an integral part of the design and analysis of accelerator structures and RF components. Sophisticated 3D codes, powerful workstations and timely theory support have all contributed to this development. We will describe our modeling experience with these resources and discuss their impact on ongoing work at SLAC. Specific examples from R&D on a future linear collider and a proposed e+e- storage ring will be included.

  7. Fast loop modeling for protein structures

    Science.gov (United States)

    Zhang, Jiong; Nguyen, Son; Shang, Yi; Xu, Dong; Kosztin, Ioan

    2015-03-01

    X-ray crystallography is the main method for determining 3D protein structures. In many cases, however, flexible loop regions of proteins cannot be resolved by this approach. This leads to incomplete structures in the protein data bank, preventing further computational study and analysis of these proteins. For instance, all-atom molecular dynamics (MD) simulation studies of structure-function relationship require complete protein structures. To address this shortcoming, we have developed and implemented an efficient computational method for building missing protein loops. The method is database driven and uses deep learning and multi-dimensional scaling algorithms. We have implemented the method as a simple stand-alone program, which can also be used as a plugin in existing molecular modeling software, e.g., VMD. The quality and stability of the generated structures are assessed and tested via energy scoring functions and by equilibrium MD simulations. The proposed method can also be used in template-based protein structure prediction. Work supported by the National Institutes of Health [R01 GM100701]. Computer time was provided by the University of Missouri Bioinformatics Consortium.

  8. Uncertain and multi-objective programming models for crop planting structure optimization

    Directory of Open Access Journals (Sweden)

    Mo LI,Ping GUO,Liudong ZHANG,Chenglong ZHANG

    2016-03-01

    Full Text Available Crop planting structure optimization is a significant way to increase agricultural economic benefits and improve agricultural water management. The complexities of fluctuating stream conditions, varying economic profits, uncertainties and errors in estimated modeling parameters, as well as the interplay among economic, social, natural resource and environmental aspects, have led to the necessity of developing optimization models for crop planting structure which consider uncertainty and multiple objectives. In this study, three single-objective programming models under uncertainty for crop planting structure optimization were developed: an interval linear programming model, an inexact fuzzy chance-constrained programming (IFCCP) model and an inexact fuzzy linear programming (IFLP) model. Each of the three models takes grayness into account. Moreover, the IFCCP model considers fuzzy uncertainty of parameters/variables and stochastic characteristics of constraints, while the IFLP model takes into account the fuzzy uncertainty of both constraints and objective functions. To satisfy the sustainable development of crop planting structure planning, a fuzzy linear multi-objective programming model based on fuzzy optimization theory was developed, capable of reflecting both uncertainties and multiple objectives. In addition, a multi-objective fractional programming model for crop structure optimization was developed to express the multiple objectives quantitatively in one optimization model, with the numerator representing maximum economic benefits and the denominator representing minimum crop planting area allocation. These models better reflect actual situations, considering the uncertainties and multiple objectives of crop planting structure optimization systems. The five models developed were then applied to a real case study in Minqin County, north-west China. The advantages, the applicable conditions and the solution methods
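
    A deterministic core of such a model is a linear program over crop areas. The sketch below maximizes net benefit subject to land and irrigation-water limits; every coefficient is invented for illustration, and the interval/fuzzy extensions described above would replace these crisp numbers with uncertain quantities.

      # Crisp linear program for crop-area allocation (illustrative numbers).
      from scipy.optimize import linprog

      benefit = [900.0, 1400.0, 600.0]   # net benefit per hectare, 3 crops
      water = [4000.0, 5500.0, 2500.0]   # irrigation water per hectare (m^3)

      res = linprog(
          c=[-b for b in benefit],               # linprog minimizes, so negate
          A_ub=[[1.0, 1.0, 1.0], water],         # total land, total water
          b_ub=[1000.0, 4.2e6],                  # 1000 ha, 4.2e6 m^3 available
          bounds=[(100.0, None)] * 3,            # minimum area per crop
      )
      print(res.x, -res.fun)                     # optimal areas, net benefit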

  9. Structural modelling of economic growth: Technological changes

    Directory of Open Access Journals (Sweden)

    Sukharev Oleg

    2016-01-01

    Full Text Available Neoclassical and Keynesian theories of economic growth assume the use of modified Cobb-Douglas functions and other aggregate econometric approaches to modelling growth dynamics. In that case, explanations of economic growth are based on the logic of the mathematical ratios used, often including a priori ideas about changes in aggregate values and factors. The assessment of factor productivity is fundamental to modern theories of economic growth. Nevertheless, the structural parameters of the economic system, institutions and technological changes are practically not considered within the known approaches, even though the latter are reflected in the changing parameters of the production function. At the same time, the ratio of structural elements determines the future value of total factor productivity on the one hand and, on the other, strongly influences the rate of economic growth and its mode of innovative dynamics. Putting the structural parameters of the economic system into growth models, with the possibility of assessing such modes under the interaction of new and old combinations, is an essential step in the development of the theory of economic growth/development. It allows a policy for stimulating economic growth to be formed from the structural ratios and relations recognized for the economic system. It is most convenient in such models to use logistic functions describing the resource change for the old and new combinations within the economic system. The result of economic development depends on the starting conditions, and on the institutional parameters governing the speed at which resources are borrowed in favour of the new combination and at which it creates its own resource. The resource is registered in the model through investments into the new and old combinations.
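
    A toy version of the logistic resource dynamics mentioned above can be written down directly. In the sketch below, an "old" and a "new" combination each grow logistically while a borrowing term transfers resource from old to new; the functional form and all rates are assumptions chosen for illustration, not the paper's calibration.

      # Logistic resource trajectories for old and new combinations, with a
      # simple bilinear borrowing term; all parameters are illustrative.
      import numpy as np

      T, dt = 100.0, 0.1
      steps = int(T / dt)
      r_old = np.empty(steps); r_new = np.empty(steps)
      r_old[0], r_new[0] = 0.9, 0.05           # initial resource shares
      a_old, a_new, borrow = 0.02, 0.08, 0.01  # growth and borrowing rates

      for k in range(steps - 1):
          flow = borrow * r_old[k] * r_new[k]  # resource borrowed by the new
          r_old[k + 1] = r_old[k] + dt * (a_old * r_old[k] * (1 - r_old[k]) - flow)
          r_new[k + 1] = r_new[k] + dt * (a_new * r_new[k] * (1 - r_new[k]) + flow)

      print(r_old[-1], r_new[-1])              # the new combination catches up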

  10. An integrative computational modelling of music structure apprehension

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2014-01-01

    An objectivization of music analysis requires a detailed formalization of the underlying principles and methods. The formalization of the most elementary structural processes is hindered by the complexity of music, both in terms of profusions of entities (such as notes) and of tight interactions between a large number of dimensions. Computational modeling would enable systematic and exhaustive tests on sizeable pieces of music, yet current researches cover particular musical dimensions with limited success. The aim of this research is to conceive a computational modeling of music analysis; the computational model, by virtue of its generality, extensiveness and operationality, is suggested as a blueprint for the establishment of a cognitively validated model of music structure apprehension. Available as a Matlab module, it can be used for practical musicological uses.

  11. Uncertainty in parameterisation and model structure affect simulation results in coupled ecohydrological models

    Directory of Open Access Journals (Sweden)

    S. Arnold

    2009-10-01

    Full Text Available In this paper we develop and apply a conceptual ecohydrological model to investigate the effects of model structure and parameter uncertainty on the simulation of vegetation structure and hydrological dynamics. The model is applied to a typical water-limited riparian ecosystem along an ephemeral river: the middle section of the Kuiseb River in Namibia. We modelled this system by coupling an ecological model with a conceptual hydrological model. The hydrological model is storage based, with stochastic forcing from the flood. The ecosystem is modelled with a population model and represents three dominant riparian plant populations. In appreciation of uncertainty about population dynamics, we applied three model versions with increasing complexity. Population parameters were found by Latin hypercube sampling of the parameter space, with the constraint that the three species should coexist as observed. Two of the three models were able to reproduce the observed coexistence. However, the two models relied on different coexistence mechanisms and reacted differently to changes in the long-term memory of the flood forcing. The coexistence requirement strongly constrained the parameter space for both successful models. Only very few parameter sets (0.5% of 150,000 samples) allowed for coexistence in a representative number of repeated simulations (at least 10 out of 100), and the success of the coexistence mechanism was controlled by the combination of population parameters. The ensemble statistics of average values of hydrologic variables like transpiration and depth to groundwater were similar for both models, suggesting that they were mainly controlled by the applied hydrological model. The ensemble statistics of the fluctuations of depth to groundwater and transpiration, however, differed significantly, suggesting that they were controlled by the applied ecological model and coexistence mechanisms. Our study emphasizes that uncertainty about ecosystem
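
    A minimal sketch of the screening step described above follows: Latin hypercube samples of a three-parameter space are kept only if they satisfy a coexistence test. The acceptance rule is a stand-in for the actual population model, and the parameter names and bounds are invented.

      # Latin hypercube screening with a placeholder coexistence criterion.
      import numpy as np
      from scipy.stats import qmc

      sampler = qmc.LatinHypercube(d=3, seed=42)
      lo = np.array([0.01, 0.10, 0.0])       # lower parameter bounds (invented)
      hi = np.array([1.00, 5.00, 0.5])       # upper parameter bounds (invented)
      params = qmc.scale(sampler.random(n=10_000), lo, hi)

      def coexists(p):                       # stand-in for the population model
          growth, mortality, competition = p
          return growth > mortality * competition

      keep = np.array([coexists(p) for p in params])
      print(keep.sum(), "of", len(params), "parameter sets pass the test")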

  12. Robust simulation of buckled structures using reduced order modeling

    International Nuclear Information System (INIS)

    Wiebe, R.; Perez, R.A.; Spottswood, S.M.

    2016-01-01

    Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft, it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use but require no time stepping of a (computationally expensive) truth model are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties. (paper)

  13. Robust simulation of buckled structures using reduced order modeling

    Science.gov (United States)

    Wiebe, R.; Perez, R. A.; Spottswood, S. M.

    2016-09-01

    Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft, it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use but require no time stepping of a (computationally expensive) truth model are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties.

  14. NUMERICAL MODELLING AS NON-DESTRUCTIVE METHOD FOR THE ANALYSES AND DIAGNOSIS OF STONE STRUCTURES: MODELS AND POSSIBILITIES

    Directory of Open Access Journals (Sweden)

    Nataša Štambuk-Cvitanović

    1999-12-01

    Full Text Available Given the necessity of analysing, diagnosing and preserving existing valuable stone masonry structures and ancient monuments in today's European urban cores, numerical modelling becomes an efficient tool for investigating structural behaviour. It should be supported by experimentally obtained input data and taken as part of a general combined approach, particularly with non-destructive techniques applied to the structure/model within it. For structures or details which may require more complex analyses, three numerical models based upon the finite element technique are suggested: (1) a standard linear model; (2) a linear model with contact (interface) elements; and (3) a non-linear elasto-plastic and orthotropic model. The applicability of these models depends upon the accuracy of the approach or the type of problem, and is presented on some characteristic examples.

  15. Soil-structure interaction analysis of HTTR building by a simplified model

    International Nuclear Information System (INIS)

    Yagishita, F.; Suzuki, H.; Yamagishi, Y.

    1990-01-01

    For the evaluation of the design seismic forces of the embedded High-Temperature-Testing-Reactor (HTTR) structure, a sway-rocking model considering the embedment of the structure is used. In this model, the structure is represented by beams with lumped masses, and the soil by horizontal side springs and by horizontal and rotational bottom springs. At the same time, the input motion to the structure, which takes the form of multiple excitation, is calculated based on one-dimensional wave propagation theory. This paper presents the concept of this modelling and the evaluated results. (author). 9 refs, 11 figs
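
    To illustrate the sway-rocking idealization in its simplest form, the sketch below computes the natural frequencies of a single rigid mass on a horizontal (sway) spring and a rotational (rocking) spring; all numbers are placeholders, and the study's model has many masses, side springs and multiple-excitation input not shown here.

      # Natural frequencies of a one-mass sway-rocking model (placeholder data).
      import numpy as np
      from scipy.linalg import eigh

      m, J, h = 5.0e6, 2.0e9, 10.0     # mass [kg], inertia [kg m^2], height [m]
      kh, kr = 8.0e9, 5.0e12           # sway [N/m] and rocking [N m/rad] springs

      M = np.array([[m, m * h],        # sway/rocking coupling through height h
                    [m * h, J + m * h ** 2]])
      K = np.array([[kh, 0.0],
                    [0.0, kr]])

      w2, _ = eigh(K, M)               # generalized eigenproblem K v = w^2 M v
      print(np.sqrt(w2) / (2.0 * np.pi))   # natural frequencies in Hz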

  16. Utility-free heuristic models of two-option choice can mimic predictions of utility-stage models under many conditions

    Science.gov (United States)

    Piantadosi, Steven T.; Hayden, Benjamin Y.

    2015-01-01

    Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions can be decomposed into additive functions) into a heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice. PMID:25914613
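
    A toy demonstration of the mimicry claim (not the paper's construction) is easy to set up: with a sufficiently dominant attribute weight, an additive-utility chooser and a "check the weightier attribute first" heuristic make identical choices on every trial. The weights, attribute ranges and tie rule below are all invented.

      # Additive utility vs. dimensional prioritization on two-option choices.
      import numpy as np

      rng = np.random.default_rng(3)
      w = np.array([0.8, 0.2])                 # attribute weights, w[0] dominant

      def utility_choice(A, B):
          return 0 if w @ A >= w @ B else 1

      def priority_heuristic(A, B):
          for d in np.argsort(-w):             # inspect attributes by weight
              if A[d] != B[d]:
                  return 0 if A[d] > B[d] else 1
          return 0                             # full tie: default to option A

      trials, agree = 10_000, 0
      for _ in range(trials):
          A, B = rng.integers(0, 3, size=2), rng.integers(0, 3, size=2)
          agree += utility_choice(A, B) == priority_heuristic(A, B)
      print(agree / trials)   # 1.0 here: 0.8 outweighs the largest 0.2 * 2 swing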

  17. Utility-free heuristic models of two-option choice can mimic predictions of utility-stage models under many conditions

    Directory of Open Access Journals (Sweden)

    Steven T Piantadosi

    2015-04-01

    Full Text Available Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage in choice, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions are linearly separable) into a psychologically plausible heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice.

  18. Modeling repetitive motions using structured light.

    Science.gov (United States)

    Xu, Yi; Aliaga, Daniel G

    2010-01-01

    Obtaining models of dynamic 3D objects is an important part of content generation for computer graphics. Numerous methods have been extended from static scenarios to model dynamic scenes. If the states or poses of a dynamic object repeat often during a sequence (though not necessarily periodically), we call this a repetitive motion. Many objects, such as toys, machines, and humans, undergo repetitive motions. Our key observation is that when a motion state repeats, we can sample the scene under the same motion state again but with a different set of parameters, thus providing more information about each motion state. This enables robust acquisition of dense 3D information, otherwise difficult to obtain for objects with repetitive motions, using only simple hardware. After the motion sequence, we group temporally disjoint observations of the same motion state together and produce a smooth space-time reconstruction of the scene. Effectively, the dynamic scene modeling problem is converted into a series of static scene reconstructions, which are easier to tackle. The varying sampling parameters can be, for example, structured-light patterns, illumination directions, and viewpoints, resulting in different modeling techniques. Based on this observation, we present an image-based motion-state framework and demonstrate our paradigm using either a synchronized or an unsynchronized structured-light acquisition method.

  19. Coarse-grained description of cosmic structure from Szekeres models

    International Nuclear Information System (INIS)

    Sussman, Roberto A.; Gaspar, I. Delgado; Hidalgo, Juan Carlos

    2016-01-01

    We show that the full dynamical freedom of the well-known Szekeres models allows for the description of elaborated 3-dimensional networks of cold dark matter structures (over-densities and/or density voids) undergoing "pancake" collapse. By reducing Einstein's field equations to a set of evolution equations, which themselves reduce in the linear limit to evolution equations for linear perturbations, we determine the dynamics of such structures, with the spatial comoving location of each structure uniquely specified by standard early-Universe initial conditions. By means of a representative example we examine in detail the density contrast, the Hubble flow and peculiar velocities of structures that evolved, from linear initial data at the last scattering surface, to fully non-linear 10-20 Mpc scale configurations today. To motivate further research, we provide a qualitative discussion on the connection of Szekeres models with linear perturbations and the pancake collapse of the Zeldovich approximation. This type of structure modelling provides a coarse-grained, but fully relativistic, non-linear and non-perturbative, description of evolving large-scale cosmic structures before their virialisation, and as such it has enormous potential for applications in cosmological research.

  20. Development of vehicle model test-bending of a simple structural surfaces model for automotive vehicle sedan

    Science.gov (United States)

    Nor, M. K. Mohd; Noordin, A.; Ruzali, M. F. S.; Hussen, M. H.; Mustapa@Othman, N.

    2017-04-01

    The Simple Structural Surfaces (SSS) method is offered as a means of organizing the process of rationalizing the basic load paths of a vehicle body structure. This simplified approach is highly beneficial in the development of modern passenger car structure design. In Malaysia, the SSS topic has been widely adopted and is effectively compulsory in automotive programs related to vehicle structures at many higher education institutions. However, no real physical SSS model has been available to provide insight into and understanding of the function of each major subassembly within the whole vehicle structure. Motivated by this, a real physical SSS sedan model and the corresponding bending tests of the vehicle model are proposed in this work. The proposed approach is relatively easy to understand compared to the Finite Element Method (FEM). The results show that the proposed vehicle model test is useful for physically demonstrating the importance of providing a continuous load path through the necessary structural components within the vehicle structure. It is clearly observed that the global bending stiffness reduces significantly as more panels are removed from the complete SSS model. The analysis shows the front parcel shelf is an important subassembly in sustaining the bending load.

  1. Protein loop modeling using a new hybrid energy function and its application to modeling in inaccurate structural environments.

    Directory of Open Access Journals (Sweden)

    Hahnbeom Park

    Full Text Available Protein loop modeling is a tool for predicting protein local structures of particular interest, providing opportunities for applications involving protein structure prediction and de novo protein design. Until recently, the majority of loop modeling methods have been developed and tested by reconstructing loops in frameworks of experimentally resolved structures. In many practical applications, however, the protein loops to be modeled are located in inaccurate structural environments. These include loops in model structures, low-resolution experimental structures, or experimental structures of different functional forms. Accordingly, discrepancies in the accuracy of the structural environment assumed in development of the method and that in practical applications present additional challenges to modern loop modeling methods. This study demonstrates a new strategy for employing a hybrid energy function combining physics-based and knowledge-based components to help tackle this challenge. The hybrid energy function is designed to combine the strengths of each energy component, simultaneously maintaining accurate loop structure prediction in a high-resolution framework structure and tolerating minor environmental errors in low-resolution structures. A loop modeling method based on global optimization of this new energy function is tested on loop targets situated in different levels of environmental errors, ranging from experimental structures to structures perturbed in backbone as well as side chains and template-based model structures. The new method performs comparably to force field-based approaches in loop reconstruction in crystal structures and better in loop prediction in inaccurate framework structures. This result suggests that higher-accuracy predictions would be possible for a broader range of applications. The web server for this method is available at http://galaxy.seoklab.org/loop with the PS2 option for the scoring function.

  2. Calibrated and Interactive Modelling of Form-Active Hybrid Structures

    DEFF Research Database (Denmark)

    Quinn, Gregory; Holden Deleuran, Anders; Piker, Daniel

    2016-01-01

    Form-active hybrid structures (FAHS) couple two or more different structural elements of low self weight and low or negligible bending flexural stiffness (such as slender beams, cables and membranes) into one structural assembly of high global stiffness. They offer high load-bearing capacity...... software packages which introduce interruptions and data exchange issues in the modelling pipeline. The mechanical precision, stability and open software architecture of Kangaroo have facilitated the development of proof-of-concept modelling pipelines which tackle this challenge and enable powerful...... materially-informed sketching. Making use of a projection-based dynamic relaxation solver for structural analysis, explorative design has proven to be highly effective....

  3. Multivariate time series analysis with R and financial applications

    CERN Document Server

    Tsay, Ruey S

    2013-01-01

    Since the publication of his first book, Analysis of Financial Time Series, Ruey Tsay has become one of the most influential and prominent experts on the topic of time series. Different from the traditional and oftentimes complex approach to multivariate (MV) time series, this sequel book emphasizes structural specification, which results in simplified parsimonious VARMA modeling and, hence, eases comprehension. Through a fundamental balance between theory and applications, the book supplies readers with an accessible approach to financial econometric models and their applications to the real world.

  4. De novo structural modeling and computational sequence analysis ...

    African Journals Online (AJOL)

    Different bioinformatics tools and machine learning techniques were used for protein structural classification. De novo protein modeling was performed using the I-TASSER server. The final model obtained was assessed by PROCHECK and DFIRE2, which confirmed that the final model is reliable. Until complete biochemical ...

  5. Exploring Social Structures in Extended Team Model

    DEFF Research Database (Denmark)

    Zahedi, Mansooreh; Ali Babar, Muhammad

    2013-01-01

    The Extended Team Model (ETM), a type of offshore outsourcing, is increasingly becoming a popular mode of Global Software Development (GSD). There is little knowledge about the social structures in ETM and their impact on collaboration. Within a large interdisciplinary project to develop the next...... generation of GSD technologies, we are exploring the role of social structures to support collaboration. This paper reports some details of our research design and initial findings about the mechanisms to support social structures and their impact on collaboration in an ETM....

  6. Equity financing constraints and corporate capital structure:a model

    Institute of Scientific and Technical Information of China (English)

    Zhengwei Wang; Wuxiang Zhu

    2013-01-01

    Purpose - The "supply-side effect" brought about by imperfections of the capital market has drawn increasing attention. The purpose of this paper is to study how the uncertainty of equity financing, introduced by equity financing regulations in emerging capital markets, affects companies' capital structure decisions. Design/methodology/approach - This paper establishes a theoretical model that introduces equity financing uncertainty into the company's capital structure decision-making. The paper uses mathematical derivation to reach some basic conclusions; then, in order to characterize the quantitative impact of specific factors on capital structure, numerical solution methods are used. Findings - The model shows that firm value decreases with the uncertainty of equity financing, because of the relationship between the firm's future cash flows and its financing policies. The numerical solution of the model suggests that the uncertainty of equity financing is one of the important factors affecting the choice of optimal capital structure: the greater the uncertainty, the lower the optimal capital structure. Originality/value - The research has academic value for further understanding of these issues.

  7. Parsimony in personality: predicting sexual prejudice.

    Science.gov (United States)

    Miller, Audrey K; Wagner, Maverick M; Hunt, Amy N

    2012-01-01

    Extant research has established numerous demographic, personal-history, attitudinal, and ideological correlates of sexual prejudice, also known as homophobia. The present study investigated whether Five-Factor Model (FFM) personality domains, particularly Openness, and FFM facets, particularly Openness to Values, contribute independent and incremental variance to the prediction of sexual prejudice beyond these established correlates. Participants were 117 college students who completed a comprehensive FFM measure, measures of sexual prejudice, and a demographics, personal-history, and attitudes-and-ideologies questionnaire. Results of stepwise multiple regression analyses demonstrated that, whereas Openness domain score predicted only marginal incremental variance in sexual prejudice, Openness facet scores (particularly Openness to Values) predicted independent and substantial incremental variance beyond numerous other zero-order correlates of sexual prejudice. The importance of integrating FFM personality variables, especially facet-level variables, into conceptualizations of sexual prejudice is highlighted. Study strengths and weaknesses are discussed as are potential implications for prejudice-reduction interventions.
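
    A brief sketch of the incremental-variance logic behind the analysis above, using nested OLS models as a simplification of the study's stepwise procedure; the simulated predictors merely stand in for the established correlates and the Openness to Values facet, and the coefficients are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 117                                    # mirrors the study's sample size
base = rng.normal(size=(n, 3))             # stand-ins for established correlates
facet = rng.normal(size=(n, 1))            # stand-in for Openness to Values
y = base @ np.array([0.3, 0.2, 0.1]) + 0.4 * facet[:, 0] + rng.normal(size=n)

m0 = sm.OLS(y, sm.add_constant(base)).fit()
m1 = sm.OLS(y, sm.add_constant(np.hstack([base, facet]))).fit()
print(f"incremental R^2 from the facet: {m1.rsquared - m0.rsquared:.3f}")
```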

  8. Generalized Swept Mid-structure for Polygonal Models

    KAUST Repository

    Martin, Tobias; Chen, Guoning; Musuvathy, Suraj; Cohen, Elaine; Hansen, Charles

    2012-05-01

    We introduce a novel mid-structure called the generalized swept mid-structure (GSM) of a closed polygonal shape, and a framework to compute it. The GSM contains both curve and surface elements and has consistent sheet-by-sheet topology, versus the triangle-by-triangle topology produced by other mid-structure methods. To obtain this structure, a harmonic function, defined on the volume enclosed by the surface, is used to decompose the volume into a set of slices. A technique for computing the 1D mid-structures of these slices is introduced. The mid-structures of adjacent slices are then iteratively matched through a boundary similarity computation and triangulated to form the GSM. This structure respects the topology of the input surface model and is a hybrid mid-structure representation. The construction and topology of the GSM allow for local and global simplification, used in further applications such as parameterization, volumetric mesh generation and medical applications.
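
    A toy sketch of the harmonic-decomposition step: a discrete harmonic function is computed on a 2D grid by Jacobi iteration, and its level sets play the role of slices; the domain, boundary values, and tolerance are illustrative assumptions, not the GSM implementation.

```python
import numpy as np

def harmonic_field(shape=(32, 32), n_iter=5000):
    """Jacobi iteration toward a discrete harmonic function on a grid."""
    f = np.zeros(shape)
    f[-1, :] = 1.0                         # Dirichlet values at the two "ends"
    for _ in range(n_iter):
        f[1:-1, 1:-1] = 0.25 * (f[:-2, 1:-1] + f[2:, 1:-1]
                                + f[1:-1, :-2] + f[1:-1, 2:])
    return f

f = harmonic_field()
# Level sets of the harmonic function decompose the domain into slices.
slices = [np.argwhere(np.abs(f - t) < 0.02) for t in np.linspace(0.1, 0.9, 8)]
print([len(s) for s in slices])
```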

  9. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    OpenAIRE

    S. Mori; K. Kitsukawa; M. Hisakado

    2006-01-01

    We show how to analyze and interpret the correlation structures, conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, the beta-binomial distribution model and the long-range Ising model, and we interpret the differences in their profiles in terms of the correlation structures.
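
    A hedged sketch of how an implied default distribution arises from exchangeable Bernoulli variables under a one-factor Gaussian copula; the pool size, default probability, and correlation below are illustrative assumptions, not calibrated iTraxx-CJ values.

```python
import numpy as np
from scipy.stats import norm

def implied_default_counts(n_names=50, p=0.05, rho=0.3, n_sims=20_000, seed=0):
    """Monte Carlo distribution of the number of defaults in the pool."""
    rng = np.random.default_rng(seed)
    threshold = norm.ppf(p)
    m = rng.standard_normal((n_sims, 1))          # common (systematic) factor
    eps = rng.standard_normal((n_sims, n_names))  # idiosyncratic part
    x = np.sqrt(rho) * m + np.sqrt(1 - rho) * eps
    defaults = (x < threshold).sum(axis=1)
    return np.bincount(defaults, minlength=n_names + 1) / n_sims

dist = implied_default_counts()
print(dist[:10])  # probabilities of 0..9 defaults in the pool
```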

  10. A skeleton model for the MJO with refined vertical structure

    Science.gov (United States)

    Thual, Sulian; Majda, Andrew J.

    2016-05-01

    The Madden-Julian oscillation (MJO) is the dominant mode of variability in the tropical atmosphere on intraseasonal timescales and planetary spatial scales. The skeleton model is a minimal dynamical model that robustly recovers the most fundamental MJO features: (I) a slow eastward speed of roughly 5 m/s, (II) a peculiar dispersion relation with dω/dk ≈ 0, and (III) a horizontal quadrupole vortex structure. This model depicts the MJO as a neutrally stable atmospheric wave involving a simple multiscale interaction between planetary dry dynamics, planetary lower-tropospheric moisture and the planetary envelope of synoptic-scale activity. Here we propose and analyze an extended version of the skeleton model with additional variables accounting for the refined vertical structure of the MJO in nature. The present model qualitatively reproduces the front-to-rear vertical structure of the MJO found in nature, with MJO events marked by a planetary envelope of convective activity transitioning from the congestus to the deep to the stratiform type, in addition to a front-to-rear structure of moisture, winds and temperature. Despite its increased complexity, the present model retains several interesting features of the original skeleton model, such as a conserved energy and similar linear solutions. We further analyze a model version with a simple stochastic parametrization for the unresolved details of synoptic-scale activity. The stochastic model solutions show intermittent initiation, propagation and shutdown of MJO wave trains, as in previous studies, in addition to MJO events with a front-to-rear vertical structure of varying intensity and characteristics from one event to another.

  11. Genetic and Psychosocial Predictors of Aggression: Variable Selection and Model Building With Component-Wise Gradient Boosting.

    Science.gov (United States)

    Suchting, Robert; Gowin, Joshua L; Green, Charles E; Walss-Bass, Consuelo; Lane, Scott D

    2018-01-01

    Rationale: Given datasets with a large or diverse set of predictors of aggression, machine learning (ML) provides efficient tools for identifying the most salient variables and building a parsimonious statistical model. ML techniques permit efficient exploration of data, have not been widely used in aggression research, and may have utility for those seeking prediction of aggressive behavior. Objectives: The present study examined predictors of aggression and constructed an optimized model using ML techniques. Predictors were derived from a dataset that included demographic, psychometric and genetic predictors, specifically FK506 binding protein 5 (FKBP5) polymorphisms, which have been shown to alter response to threatening stimuli but have not been tested as predictors of aggressive behavior in adults. Methods: The data analysis approach utilized component-wise gradient boosting and model reduction via backward elimination to: (a) select variables from an initial set of 20 to build a model of trait aggression; and then (b) reduce that model to maximize parsimony and generalizability. Results: From a dataset of N = 47 participants, component-wise gradient boosting selected 8 of 20 possible predictors to model Buss-Perry Aggression Questionnaire (BPAQ) total score, with R² = 0.66. This model was simplified using backward elimination, retaining six predictors: smoking status, psychopathy (interpersonal manipulation and callous affect), childhood trauma (physical abuse and neglect), and the FKBP5_13 gene (rs1360780). The six-factor model captured 99.4% of the initial eight-factor model's R². Conclusions: Using an inductive data science approach, the gradient boosting model identified predictors consistent with previous experimental work in aggression, specifically psychopathy and trauma exposure. Additionally, allelic variants in FKBP5 were identified for the first time, but the relatively small sample size limits the generality of the results and calls for …
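
    A compact sketch of component-wise L2 boosting for variable selection, mirroring the spirit of the procedure above (the study itself relied on established statistical tooling); the data, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

def componentwise_boost(X, y, n_iter=200, nu=0.1):
    """Component-wise L2 boosting: each step updates the single best predictor."""
    X = X - X.mean(axis=0)                 # center predictors
    intercept = y.mean()
    resid = y - intercept
    coef = np.zeros(X.shape[1])
    for _ in range(n_iter):
        best_j, best_b, best_rss = 0, 0.0, np.inf
        for j in range(X.shape[1]):
            xj = X[:, j]
            if not xj.any():
                continue                   # skip constant columns
            b = xj @ resid / (xj @ xj)     # univariate least-squares fit
            rss = np.sum((resid - b * xj) ** 2)
            if rss < best_rss:
                best_j, best_b, best_rss = j, b, rss
        coef[best_j] += nu * best_b        # small step on the winning component
        resid -= nu * best_b * X[:, best_j]
    return intercept, coef                 # zero coefficients = never selected

rng = np.random.default_rng(3)
X = rng.normal(size=(47, 20))              # mirrors N = 47 with 20 predictors
y = 0.8 * X[:, 0] - 0.6 * X[:, 5] + rng.normal(size=47)
intercept, coef = componentwise_boost(X, y)
print(np.flatnonzero(np.abs(coef) > 1e-8))  # indices of selected variables
```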

  12. Robust Comparison of the Linear Model Structures in Self-tuning Adaptive Control

    DEFF Research Database (Denmark)

    Zhou, Jianjun; Conrad, Finn

    1989-01-01

    The Generalized Predictive Controller (GPC) is extended to systems with a generalized linear model structure that encompasses a number of choices of linear model structures. The Recursive Prediction Error Method (RPEM) is used to estimate the unknown parameters of the linear model structures...... to constitute a GPC self-tuner. Commonly used linear model structures are compared and evaluated by applying them to the extended GPC self-tuner as well as to the special cases of the GPC, the GMV and MV self-tuners. The simulation results show how the choice of model structure affects the input......-output behaviour of self-tuning controllers....
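
    A minimal sketch of recursive parameter estimation for an ARX model structure, using plain recursive least squares as a simplified stand-in for the RPEM discussed above; the model orders, forgetting factor, and simulated system are illustrative assumptions.

```python
import numpy as np

def rls_arx(u, y, na=2, nb=2, lam=0.99):
    """Recursive least squares for y[t] = -a1*y[t-1] - ... + b1*u[t-1] + ..."""
    n_par = na + nb
    theta = np.zeros(n_par)
    P = 1e4 * np.eye(n_par)
    for t in range(max(na, nb), len(y)):
        phi = np.concatenate([-y[t - na:t][::-1], u[t - nb:t][::-1]])
        k = P @ phi / (lam + phi @ P @ phi)
        theta += k * (y[t] - phi @ theta)
        P = (P - np.outer(k, phi) @ P) / lam
    return theta

rng = np.random.default_rng(2)
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = (1.2 * y[t - 1] - 0.5 * y[t - 2]
            + 0.8 * u[t - 1] + 0.3 * u[t - 2] + 0.05 * rng.normal())
# With the sign convention above, expect roughly [-1.2, 0.5, 0.8, 0.3].
print(rls_arx(u, y))
```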

  13. Model-based active control of a continuous structure subjected to moving loads

    Science.gov (United States)

    Stancioiu, D.; Ouyang, H.

    2016-09-01

    Modelling of a structure is an important preliminary step in structural control. The main objectives of the modelling, which are almost always antagonistic, are accuracy and simplicity of the model. The first part of this study focuses on the experimental and theoretical modelling of a structure subjected to the action of one or two decelerating moving carriages modelled as masses. The aim of this part is to obtain a simple but accurate model that includes not only the structure-moving load interaction but also the actuator dynamics. A small-scale rig is designed to represent a four-span continuous metallic bridge structure with miniature guiding rails. A series of tests is run subjecting the structure to the action of one or two mini-carriages with different loads, launched along the structure at different initial speeds. The second part is dedicated to model-based control design, where a feedback controller is designed and tested against the validated model. The study shows that positive position feedback is able to improve the system dynamics, but also reveals some of the limitations of state-space methods for this type of system.
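
    A small sketch of positive position feedback applied to a single structural mode, illustrating the controller type reported above; the natural frequencies, damping ratios, and gain are illustrative assumptions rather than the rig's identified values.

```python
import numpy as np
from scipy.integrate import solve_ivp

w, z = 2 * np.pi * 5.0, 0.01        # structural mode: 5 Hz, lightly damped
wf, zf, g = 1.1 * w, 0.3, 0.2       # PPF filter tuned near the mode

def dynamics(t, s):
    x, xd, eta, etad = s
    u = g * wf**2 * eta                                   # control force
    xdd = -2 * z * w * xd - w**2 * x + u                  # structure
    etadd = -2 * zf * wf * etad - wf**2 * eta + wf**2 * x  # PPF compensator
    return [xd, xdd, etad, etadd]

sol = solve_ivp(dynamics, [0, 5], [0.01, 0, 0, 0], max_step=1e-3)
print(f"final displacement: {abs(sol.y[0, -1]):.2e}")  # decays under PPF
```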

  14. Compactness in the Euler-lattice: A parsimonious pitch spelling model

    NARCIS (Netherlands)

    Honingh, A.K.

    2009-01-01

    Compactness and convexity have been shown to represent important principles in music, reflecting a notion of consonance in scales and chords, and have been successfully applied to well-known problems from music research. In this paper, the notion of compactness is applied to the problem of pitch spelling.

  15. Hierarchical modeling and its numerical implementation for layered thin elastic structures

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jin-Rae [Hongik University, Sejong (Korea, Republic of)

    2017-05-15

    Thin elastic structures such as beam- and plate-like structures and laminates are characterized by their small thickness, which leads to classical plate and laminate theories in which the displacement fields through the thickness are assumed to be linear or higher-order polynomials. These classical theories are either insufficient to represent the complex stress variation through the thickness or encounter an accuracy-versus-computational-cost dilemma. To overcome this inherent problem of classical theories, the concept of hierarchical modeling has emerged. In hierarchical modeling, models with different levels are selected and combined within a structural domain so that the modeling error is distributed as uniformly as possible throughout the problem domain. The purpose of the current study is to explore the potential of hierarchical modeling for the effective numerical analysis of layered structures such as laminated composites. To this end, hierarchical models are constructed, and hierarchical modeling is implemented by selectively adjusting the level of the models. The major characteristics of the hierarchical models are then investigated through numerical experiments.

  16. Hidden multidimensional social structure modeling applied to biased social perception

    Science.gov (United States)

    Maletić, Slobodan; Zhao, Yi

    2018-02-01

    The intricate structure of social relations is captured by representing a collection of overlapping opinions as a simplicial complex, thereby building latent multidimensional structures through which agents virtually move as they exchange opinions. The influence of the opinion space's structure on the distribution of opinions is demonstrated by modeling consensus phenomena in which the opinion exchange between individuals may be affected by the false consensus effect. The results indicate that, both with and without bias, the road toward consensus is influenced by the structure of the multidimensional opinion space, and that in the biased case complete consensus is achieved. The applications of the proposed modeling framework can easily be generalized, as they transcend opinion formation modeling.

  17. Testing strong factorial invariance using three-level structural equation modeling

    Directory of Open Access Journals (Sweden)

    Suzanne eJak

    2014-07-01

    Within structural equation modeling, the most prevalent model to investigate measurement bias is the multigroup model. Equal factor loadings and intercepts across groups in a multigroup model represent strong factorial invariance (absence of measurement bias across groups). Although this approach is possible in principle, it is hardly practical when the number of groups is large or when group sizes are relatively small. Jak, Oort and Dolan (2013) showed how strong factorial invariance across large numbers of groups can be tested in a multilevel structural equation modeling framework, by treating group as a random instead of a fixed variable. In the present study, this model is extended for use with three-level data. The proposed method is illustrated with an investigation of strong factorial invariance across 156 school classes and 50 schools in a Dutch dyscalculia test, using three-level structural equation modeling.

  18. Modeling of Triangular Lattice Space Structures with Curved Battens

    Science.gov (United States)

    Chen, Tzikang; Wang, John T.

    2005-01-01

    Techniques for simulating an assembly process of lattice structures with curved battens were developed. The shape of the curved battens, the tension in the diagonals, and the compression in the battens were predicted for the assembled model. To be able to perform the assembly simulation, a cable-pulley element was implemented, and geometrically nonlinear finite element analyses were performed. Three types of finite element models were created from assembled lattice structures for studying the effects of design and modeling variations on the load-carrying capability. Discrepancies in the predictions from these models were discussed. The effects of diagonal constraint failure were also studied.

  19. The measurement of cyberbullying: dimensional structure and relative item severity and discrimination.

    Science.gov (United States)

    Menesini, Ersilia; Nocentini, Annalaura; Calussi, Pamela

    2011-05-01

    In a sample of 1,092 Italian adolescents (50.9% female), the present study aims to: (a) analyze the most parsimonious structure of the cyberbullying and cybervictimization constructs through confirmatory factor analysis; and (b) analyze the severity and discrimination parameters of each act using item response theory. Results showed that the structure of the cyberbullying scale for perpetrated and received behaviors in both genders is best represented by a unidimensional model in which each item lies on a continuum of severity of aggressive acts. For both genders, the least severe acts are silent/prank calls and insults on instant messaging, and the most severe acts are unpleasant pictures/photos on Web sites, phone pictures/photos/videos of intimate scenes, and phone pictures/photos/videos of violent scenes. The items nasty text messages, nasty or rude e-mails, insults on Web sites, insults in chatrooms, and insults on blogs range from moderate to high levels of severity. Regarding the discrimination level of the acts, several items emerged as good indicators at various levels of cyberbullying and cybervictimization severity, with the exception of silent/prank calls. Furthermore, gender specificities showed that the visual items can be considered good indicators of severe cyberbullies and cybervictims only in males. This information can help in better understanding the nature of the phenomenon and its severity in a given population, and in planning more specific prevention and intervention strategies.
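
    A short sketch of the two-parameter logistic (2PL) IRT model underlying the severity (b) and discrimination (a) parameters discussed above; the parameter values below are illustrative, not the study's estimates.

```python
import numpy as np

def p_endorse(theta, a, b):
    """2PL: probability of endorsing an item given latent trait level theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)          # latent cyberbullying severity levels
print(p_endorse(theta, a=1.5, b=2.0))  # a discriminating, high-severity item
```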