WorldWideScience

Sample records for parsimonious structural model

  1. Parsimonious relevance models

    NARCIS (Netherlands)

    Meij, E.; Weerkamp, W.; Balog, K.; de Rijke, M.; Myang, S.-H.; Oard, D.W.; Sebastiani, F.; Chua, T.-S.; Leong, M.-K.

    2008-01-01

    We describe a method for applying parsimonious language models to re-estimate the term probabilities assigned by relevance models. We apply our method to six topic sets from test collections in five different genres. Our parsimonious relevance models (i) improve retrieval effectiveness in terms of…

  2. Parsimonious Language Models for Information Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Robertson, Stephen; Zaragoza, Hugo

    We systematically investigate a new approach to estimating the parameters of language models for information retrieval, called parsimonious language models. Parsimonious language models explicitly address the relation between levels of language models that are typically used for smoothing. As such,…
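
Although the abstracts above are truncated, the core parsimonization step of such models is well documented: an EM loop that re-estimates document term probabilities against a fixed background model, concentrating probability mass on terms that distinguish the document. The sketch below is a minimal illustration of that idea, not the authors' implementation; the function name and the fixed mixing weight `lam` are assumptions.

```python
def parsimonize(doc_tf, corpus_p, lam=0.5, iters=20):
    """EM re-estimation of a document language model against a fixed
    background (corpus) model, in the spirit of parsimonious LMs.
    doc_tf: term -> raw count in the document
    corpus_p: term -> background probability P(t|C)
    lam: weight of the document model in the mixture (assumed fixed)."""
    total = sum(doc_tf.values())
    p_doc = {t: c / total for t, c in doc_tf.items()}   # initial MLE
    for _ in range(iters):
        # E-step: expected term counts attributed to the document model
        e = {t: doc_tf[t] * lam * p_doc[t]
                / (lam * p_doc[t] + (1 - lam) * corpus_p[t])
             for t in doc_tf}
        # M-step: renormalise the expected counts
        z = sum(e.values())
        p_doc = {t: e_t / z for t, e_t in e.items()}
    return p_doc
```

Common terms that the background model already explains lose probability mass, while discriminative terms gain it — the "parsimony" in the name.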

  3. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that makes it possible to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  4. Parsimonious Structural Equation Models for Repeated Measures Data, with Application to the Study of Consumer Preferences

    Science.gov (United States)

    Elrod, Terry; Haubl, Gerald; Tipps, Steven W.

    2012-01-01

    Recent research reflects a growing awareness of the value of using structural equation models to analyze repeated measures data. However, such data, particularly in the presence of covariates, often lead to models that either fit the data poorly, are exceedingly general and hard to interpret, or are specified in a manner that is highly data…

  5. A unifying model of genome evolution under parsimony.

    Science.gov (United States)

    Paten, Benedict; Zerbino, Daniel R; Hickey, Glenn; Haussler, David

    2014-06-19

    Parsimony and maximum likelihood methods of phylogenetic tree estimation and parsimony methods for genome rearrangements are central to the study of genome evolution, yet to date they have largely been pursued in isolation. We present a data structure called a history graph that offers a practical basis for the analysis of genome evolution. It conceptually simplifies the study of parsimonious evolutionary histories by representing both substitutions and double cut and join (DCJ) rearrangements in the presence of duplications. The problem of constructing parsimonious history graphs thus subsumes related maximum parsimony problems in the fields of phylogenetic reconstruction and genome rearrangement. We show that tractable functions can be used to define upper and lower bounds on the minimum number of substitutions and DCJ rearrangements needed to explain any history graph. These bounds become tight for a special type of unambiguous history graph called an ancestral variation graph (AVG), which constrains in its combinatorial structure the number of operations required. We finally demonstrate that for a given history graph G, a finite set of AVGs describe all parsimonious interpretations of G, and this set can be explored with a few sampling moves. This theoretical study describes a model in which the inference of genome rearrangements and phylogeny can be unified under parsimony.

  6. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important...... consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy....

  7. A simplified parsimonious higher order multivariate Markov chain model

    Science.gov (United States)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for SPHOMMCM is given. Numerical experiments show the effectiveness of SPHOMMCM.

  8. A tridiagonal parsimonious higher order multivariate Markov chain model

    Science.gov (United States)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, we present a tridiagonal parsimonious higher-order multivariate Markov chain model (TPHOMMCM). Moreover, an estimation method for the parameters in TPHOMMCM is given. Numerical experiments illustrate the effectiveness of TPHOMMCM.
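
For orientation, the family of parsimonious multivariate Markov chain models in records 7 and 8 builds on a common prediction step: the next state distribution of each chain is a convex mixture of transition-matrix images of all chains, and the parsimonious variants constrain this structure to cut the parameter count. A minimal first-order sketch (names and shapes are assumptions, not taken from the papers):

```python
import numpy as np

def mmc_step(x, P, lam):
    """One prediction step of a first-order multivariate Markov chain:
    the next distribution of chain j is a convex mixture of transition-
    matrix images of all chains.
    x: list of s probability vectors (one per chain)
    P: P[j][k] is a column-stochastic m x m matrix from chain k to j
    lam: lam[j][k] nonnegative mixing weights, each row summing to 1."""
    s = len(x)
    return [sum(lam[j][k] * (P[j][k] @ x[k]) for k in range(s))
            for j in range(s)]
```

A tridiagonal variant would simply force lam[j][k] = 0 whenever |j - k| > 1, which is the kind of structural restriction that makes these models "parsimonious".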

  9. Quality Assurance Based on Descriptive and Parsimonious Appearance Models

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Eiríksson, Eyþór Rúnar; Kristensen, Rasmus Lyngby

    2015-01-01

    In this position paper, we discuss the potential benefits of using appearance models in additive manufacturing, metal casting, wind turbine blade production, and 3D content acquisition. Current state of the art in acquisition and rendering of appearance cannot easily be used for quality assurance...... in these areas. The common denominator is the need for descriptive and parsimonious appearance models. By ‘parsimonious’ we mean with few parameters, so that a model is useful for fast acquisition, robust fitting, and fast rendering of appearance. The word ‘descriptive’ refers to the fact that a model should...

  10. A Parsimonious Bootstrap Method to Model Natural Inflow Energy Series

    Directory of Open Access Journals (Sweden)

    Fernando Luiz Cyrino Oliveira

    2014-01-01

    The Brazilian energy generation and transmission system is quite peculiar in its dimension and characteristics; as such, it can be considered unique in the world. It is a high-dimension hydrothermal system with a huge share of hydro plants. Such strong dependency on hydrological regimes implies uncertainties in energy planning, requiring adequate modeling of the hydrological time series. This is carried out via stochastic simulations of monthly inflow series using the family of Periodic Autoregressive models, PAR(p), one for each period (month) of the year. This paper shows the problems in fitting these models under the current system, particularly the identification of the autoregressive order "p" and the corresponding parameter estimation. It then proposes a new approach to setting both the model order and the parameter estimates of the PAR(p) models, using a nonparametric computational technique known as the bootstrap. This technique allows the estimation of reliable confidence intervals for the model parameters. The results obtained using the Parsimonious Bootstrap Method of Moments (PBMOM) produced not only more parsimonious model orders but also adherent stochastic scenarios and, in the long range, lead to a better use of water resources in energy operation planning.
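
The bootstrap idea in the abstract can be illustrated on the simplest member of the PAR(p) family, a single AR(1) process: fit the coefficient, resample residuals, rebuild the series, refit, and read a confidence interval off the resampled estimates. This is a toy analogue of PBMOM under simplifying assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_ar1(x):
    """OLS estimate of phi in x_t = phi * x_{t-1} + e_t (zero-mean)."""
    return float(np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1]))

def bootstrap_ar1_ci(x, n_boot=200, alpha=0.05):
    """Residual-bootstrap confidence interval for the AR(1) coefficient:
    refit the model on series rebuilt from resampled residuals."""
    phi = fit_ar1(x)
    resid = x[1:] - phi * x[:-1]
    resid = resid - resid.mean()
    phis = []
    for _ in range(n_boot):
        e = rng.choice(resid, size=len(x) - 1, replace=True)
        xb = np.empty_like(x)
        xb[0] = x[0]
        for t in range(1, len(x)):
            xb[t] = phi * xb[t - 1] + e[t - 1]   # rebuild a bootstrap series
        phis.append(fit_ar1(xb))
    lo, hi = np.quantile(phis, [alpha / 2, 1 - alpha / 2])
    return phi, (float(lo), float(hi))
```

In a PAR(p) setting the same resampling logic runs per calendar month, and whether higher-order lags survive the interval check drives the parsimonious order selection.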

  11. Integrating Leadership Models Toward a More Comprehensive and Parsimonious Model

    Directory of Open Access Journals (Sweden)

    Miswanto Miswanti

    2016-06-01

    ABSTRACT Through the leadership model offered by Locke et al. (1991), we can say that how good the vision of leaders in an organization is depends largely on the quality of the leaders' motives and traits, knowledge, skills, and abilities. Likewise, how well the leader implements the vision depends on those same motives and traits, knowledge, skills, and abilities, together with the vision itself. Strategic Leadership by Davies (1991) states that implementing the vision through strategic leadership means much more than what Locke et al. describe in the fourth stage of leadership. Thus, the vision-implementation aspect of Locke et al. (1991) is incomplete relative to Davies (1991). With these considerations, this article attempts to combine the leadership model of Locke et al. with the strategic leadership model of Davies. This modification is expected to yield an improved leadership model that is more comprehensive and parsimonious.

  12. A parsimonious dynamic model for river water quality assessment.

    Science.gov (United States)

    Mannina, Giorgio; Viviani, Gaspare

    2010-01-01

    Water quality modelling is of crucial importance for the assessment of physical, chemical, and biological changes in water bodies. Mathematical approaches to water modelling have become more prevalent over recent years. Different model types ranging from detailed physical models to simplified conceptual models are available. A possible middle ground between detailed and simplified models is parsimonious models, which represent the simplest approach that fits the application. The appropriate modelling approach depends on the research goal as well as on the data available for correct model application. When data are inadequate, it is necessary to focus on a simple river water quality model rather than a detailed one. This study presents a parsimonious river water quality model to evaluate the propagation of pollutants in natural rivers. The model is made up of two sub-models: a water-quantity sub-model and a water-quality sub-model. The model employs a river schematisation that considers different stretches according to the geometric characteristics and to the gradient of the river bed. Each stretch is represented with a conceptual model of a series of linear channels and reservoirs. The channels determine the delay in the pollution wave and the reservoirs cause its dispersion. To assess the river water quality, the model employs four state variables: DO, BOD, NH4, and NO. The model was applied to the Savena River (Italy), the focus of a European-financed project in which quantity and quality data were gathered. A sensitivity analysis of the model output to the model input and parameters was performed based on the Generalised Likelihood Uncertainty Estimation methodology. The results demonstrate the suitability of such a model as a tool for river water quality management.
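
The channel-plus-reservoir schematisation described above can be sketched as a pure delay followed by a cascade of linear reservoirs (Nash-cascade-style routing): the delay shifts the wave, the reservoirs spread it. The parameter values below are arbitrary illustrations, not the paper's calibration:

```python
import numpy as np

def route(inflow, n_res=3, k=2.0, delay=2, dt=1.0):
    """Conceptual routing of a pollution/flow wave: a pure-delay channel
    followed by a cascade of linear reservoirs (outflow = storage / k).
    The channel delays the wave; the reservoirs disperse it."""
    q = np.concatenate([np.zeros(delay), np.asarray(inflow, float)])
    for _ in range(n_res):
        s, out = 0.0, []
        for qt in q:
            s += dt * qt          # inflow fills the reservoir
            o = s / k             # linear-reservoir outflow
            s -= dt * o
            out.append(o)
        q = np.array(out)
    return q
```

Feeding a unit pulse through `route` yields a delayed, dispersed hydrograph whose mass equals the input — the qualitative behaviour the abstract ascribes to each river stretch.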

  13. SEAPODYM-LTL: a parsimonious zooplankton dynamic biomass model

    Science.gov (United States)

    Conchon, Anna; Lehodey, Patrick; Gehlen, Marion; Titaud, Olivier; Senina, Inna; Séférian, Roland

    2017-04-01

    Mesozooplankton organisms are of critical importance for the understanding of the early life history of most fish stocks, as well as the nutrient cycles in the ocean. Ongoing climate change and the need for improved approaches to the management of living marine resources have driven recent advances in zooplankton modelling. The classical modeling approach tends to describe the whole biogeochemical and plankton cycle with increasing complexity. We propose here a different and parsimonious zooplankton dynamic biomass model (SEAPODYM-LTL) that is cost efficient and can be advantageously coupled with primary production estimated either from satellite-derived ocean color data or biogeochemical models. In addition, the adjoint code of the model is developed, allowing a robust optimization approach for estimating the few parameters of the model. In this study, we run the first optimization experiments using a global database of climatological zooplankton biomass data and we make a comparative analysis to assess the influence of resolution and primary production inputs on model fit to observations. We also compare SEAPODYM-LTL outputs to those produced by a more complex biogeochemical model (PISCES) but sharing the same physical forcings.

  14. Maximum parsimony, substitution model, and probability phylogenetic trees.

    Science.gov (United States)

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM), and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time, omitting all the unobservable substitutions that actually occurred in the evolutionary history. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees; the trees reconstructed in this model are called probability phylogenetic trees. One advantage of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.

  15. A new mathematical modeling for pure parsimony haplotyping problem.

    Science.gov (United States)

    Feizabadi, R; Bagherian, M; Vaziri, H R; Salahi, M

    2016-11-01

    The pure parsimony haplotyping (PPH) problem is important in bioinformatics because rational haplotyping inference plays an important role in the analysis of genetic data and in mapping complex genetic diseases such as Alzheimer's disease, heart disorders, etc. Haplotypes and genotypes are m-length sequences. Although several integer programming models have already been presented for the PPH problem, its NP-hardness makes those models ineffective on real instances, especially instances with many heterozygous sites. In this paper, we assign a corresponding number to each haplotype and genotype and, based on those numbers, we formulate a mixed integer programming model. Using numbers instead of sequences leads to less complexity of the new model in comparison with previous models, in that there are neither constraints nor variables corresponding to heterozygous nucleotide sites in it. Experimental results confirm the efficiency of the new model in producing better solutions in comparison to two state-of-the-art haplotyping approaches. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Parsimonious Hydrologic and Nitrate Response Models For Silver Springs, Florida

    Science.gov (United States)

    Klammler, Harald; Yaquian-Luna, Jose Antonio; Jawitz, James W.; Annable, Michael D.; Hatfield, Kirk

    2014-05-01

    Silver Springs, with an approximate discharge of 25 m3/s, is one of Florida's first-magnitude springs and among the largest springs worldwide. Its 2500-km2 springshed overlies the mostly unconfined Upper Floridan Aquifer. The aquifer is approximately 100 m thick and predominantly consists of porous, fractured and cavernous limestone, which leads to excellent surface drainage properties (no major stream network other than the Silver Springs run) and complex groundwater flow patterns through both the rock matrix and fast conduits. Over the past few decades, discharge from Silver Springs has been observed to decline slowly but continuously, while nitrate concentrations in the spring water have increased enormously, from a background level of 0.05 mg/l to over 1 mg/l. In combination with concurrent increases in algae growth and turbidity, and despite an otherwise relatively stable water quality, this has given rise to concerns about the ecological equilibrium in and near the spring run, as well as possible impacts on tourism. The purpose of the present work is to elaborate parsimonious lumped-parameter models that may be used by resource managers for evaluating the springshed's hydrologic and nitrate transport responses. Instead of attempting to explicitly consider the complex hydrogeologic features of the aquifer in a typical numerical and/or stochastic approach, we use a transfer function approach wherein input signals (i.e., time series of groundwater recharge and nitrate loading) are transformed into output signals (i.e., time series of spring discharge and spring nitrate concentrations) by some linear and time-invariant law. The dynamic response types and parameters are inferred by comparing input and output time series in the frequency domain (e.g., after Fourier transformation). Results are converted into impulse (or step) response functions, which describe at what time and to what magnitude a unitary change in input manifests at the output…
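
The transfer-function step described above reduces, in its simplest noise-free form, to dividing the output spectrum by the input spectrum and transforming back to obtain an impulse response. A deliberately bare sketch (a real analysis of recharge/discharge series would need noise handling, windowing, and regularisation):

```python
import numpy as np

def impulse_response(u, y):
    """Estimate the impulse response of an LTI system by spectral
    division: H = Y/U in the frequency domain, then inverse FFT.
    Noise-free illustration only; real data needs smoothing/averaging."""
    U = np.fft.rfft(u)
    Y = np.fft.rfft(y)
    return np.fft.irfft(Y / U, n=len(u))
```

For a truly linear, time-invariant system this recovers the response exactly; the recovered function then answers the question in the abstract, i.e. when and how strongly a unit change in recharge or loading shows up at the spring.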

  17. A simplified parsimonious higher order multivariate Markov chain model with new convergence condition

    Science.gov (United States)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, we present a simplified parsimonious higher-order multivariate Markov chain model with a new convergence condition (SPHOMMCM-NCC). Moreover, an estimation method for the parameters in SPHOMMCM-NCC is given. Numerical experiments illustrate the effectiveness of SPHOMMCM-NCC.

  18. A parsimonious model for the proportional control valve

    OpenAIRE

    Elmer, KF; Gentle, CR

    2001-01-01

    A generic non-linear dynamic model of a direct-acting electrohydraulic proportional solenoid valve is presented. The valve consists of two subsystems: a spool assembly and one or two unidirectional proportional solenoids. These two subsystems are modelled separately. The solenoid is modelled as a non-linear resistor-inductor combination, with inductance parameters that change with current. An innovative modelling method has been used to represent these components. The spool assembly is modelled…

  19. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. 
The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges, which are…
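
For reference, the tree case that the network algorithms above generalise is Fitch's small-parsimony algorithm, which the abstract names alongside Sankoff's. A compact version for one character on a rooted binary tree (the dictionary encoding of the tree is an assumption for this sketch):

```python
def fitch(tree, leaf_states):
    """Fitch's small-parsimony algorithm for one character on a rooted
    binary tree: the minimum number of substitutions needed to explain
    the leaf states. tree maps each internal node to its two children;
    leaves appear only in leaf_states."""
    cost = 0
    def state_set(node):
        nonlocal cost
        if node not in tree:                  # leaf
            return {leaf_states[node]}
        left, right = (state_set(c) for c in tree[node])
        if left & right:                      # agreement: no substitution
            return left & right
        cost += 1                             # disagreement forces one
        return left | right
    state_set('root')
    return cost
```

On networks, as the abstract notes, the same bottom-up logic must additionally resolve conflicting assignments at reticulate vertices, which is where the heuristics come in.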

  20. Carbon-nitrogen-water interactions: is model parsimony fruitful?

    Science.gov (United States)

    Puertes, Cristina; González-Sanchis, María; Lidón, Antonio; Bautista, Inmaculada; Lull, Cristina; Francés, Félix

    2017-04-01

    It is well known that the carbon and nitrogen cycles are highly intertwined and that both should be explained through the water balance. In fact, in water-controlled ecosystems nutrient deficit is related to water scarcity. For this reason, the present study compares the capability of three models to reproduce the interaction between the carbon and nitrogen cycles and the water cycle. The models are BIOME-BGCMuSo, LEACHM, and a simple carbon-nitrogen model coupled to the hydrological model TETIS. BIOME-BGCMuSo and LEACHM are two widely used models that reproduce the carbon and nitrogen cycles adequately. However, their main limitation is that they are quite complex and can be too detailed for watershed studies. In contrast, the TETIS nutrient sub-model is a conceptual model with a vertical tank distribution over the active soil depth, dividing it into two layers. Only the input of added litter and the losses due to soil respiration, denitrification, leaching, and plant uptake are considered as external fluxes; other fluxes have been neglected. The three models have been implemented in an experimental plot of a semi-arid catchment (La Hunde, east of Spain), mostly covered by holm oak (Quercus ilex). Plant transpiration, soil moisture, and runoff were monitored daily for nearly two years (26/10/2012 to 30/09/2014). For the same period, soil samples were collected every two months and taken to the lab in order to obtain the concentrations of dissolved organic carbon, microbial biomass carbon, ammonium, and nitrate. In addition, between field trips soil samples were placed in PVC tubes with resin traps and left incubating (in situ buried cores); thus, mineralization and nitrification fluxes accumulated over two months were obtained. Ammonium and nitrate leaching accumulated over two months were measured using ion-exchange resin cores. Soil respiration was also measured on every field trip. Finally, water samples deriving from runoff were collected…

  1. A parsimonious approach to modeling animal movement data.

    Directory of Open Access Journals (Sweden)

    Yann Tremblay

    Animal tracking is a growing field in ecology, and previous work has shown that simple speed filtering of tracking data is not sufficient and that improvement of tracking location estimates is possible. To date, this has required methods that are complicated and often time-consuming (state-space models), resulting in limited application of this technique and the potential for analysis errors due to poor understanding of the fundamental framework behind the approach. We describe and test an alternative and intuitive approach consisting of bootstrapping random walks biased by forward particles. The model uses recorded data accuracy estimates and can assimilate other sources of data such as sea-surface temperature, bathymetry, and/or physical boundaries. We tested our model using ARGOS and geolocation tracks of elephant seals that also carried GPS tags in addition to PTTs, enabling true validation. Among pinnipeds, elephant seals are extreme divers that spend little time at the surface, which considerably impacts the quality of both ARGOS and light-based geolocation tracks. Despite such low overall track quality, our model provided location estimates within 4.0, 5.5, and 12.0 km of the true location 50% of the time, and within 9.0, 10.5, and 20.0 km 90% of the time, for above-average, average, and below-average elephant seal ARGOS track qualities, respectively. With geolocation data, 50% of errors were less than 104.8 km (<0.94 degrees), and 90% were less than 199.8 km (<1.80 degrees). Larger errors were due to lack of sea-surface temperature gradients. In addition, we show that our model is flexible enough to solve the obstacle avoidance problem by assimilating high-resolution coastline data. This reduced the number of invalid on-land locations by almost an order of magnitude. The method is intuitive, flexible, and efficient, promising extensive utilization in future research.
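
The biased-bootstrap idea can be caricatured in a few lines: sample candidate positions around an observed fix, weight them by the fix's error radius and by continuity with the neighbouring fixes, and average. This toy is far simpler than the paper's forward-particle random walks, and every scale and weighting choice below is an assumption; it only shows how an implausible fix gets pulled back toward the track:

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_fix(prev, obs, nxt, obs_err, n=2000, track_scale=3.0):
    """Toy location re-estimation: sample candidates around an observed
    fix, weight by the fix's error radius and by continuity with the
    neighbouring fixes, and return the weighted mean position.
    All scales here are illustrative assumptions."""
    cand = obs + rng.normal(scale=obs_err, size=(n, 2))
    w = np.exp(-np.sum((cand - obs) ** 2, axis=1) / (2 * obs_err ** 2))
    s2 = 2 * (track_scale * obs_err) ** 2
    w *= np.exp(-np.sum((cand - prev) ** 2, axis=1) / s2)   # backward bias
    w *= np.exp(-np.sum((cand - nxt) ** 2, axis=1) / s2)    # forward bias
    return (cand * w[:, None]).sum(axis=0) / w.sum()
```

The real method additionally rejects candidates on land or in implausible sea-surface temperatures, which is the "assimilating other data sources" part of the abstract.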

  2. The plunge in German electricity futures prices – Analysis using a parsimonious fundamental model

    International Nuclear Information System (INIS)

    Kallabis, Thomas; Pape, Christian; Weber, Christoph

    2016-01-01

    The German market saw a plunge in wholesale electricity prices from 2007 until 2014, with base futures prices dropping by more than 40%. This is frequently attributed to the unexpectedly high increase in renewable power generation. Using a parsimonious fundamental model, we determine the respective impact of supply and demand shocks on electricity futures prices. The methodology used is based on a piecewise linear approximation of the supply stack and time-varying price-inelastic demand. This parsimonious model is able to replicate electricity futures prices and discover non-linear dependencies in futures price formation. We show that emission prices have a higher impact on power prices than renewable penetration. Changes in renewables, demand, and installed capacities turn out to be similarly important for explaining the decrease in operating margins of conventional power plants. We thus argue for the establishment of an independent authority to stabilize emission prices. - Highlights: • We build a parsimonious fundamental model based on a piecewise linear bid stack. • We use the model to investigate impact factors for the plunge in German futures prices. • Largest impact by CO2 price developments, followed by demand and renewable feed-in. • Power plant operating profits strongly affected by demand and renewables. • We argue that stabilizing CO2 emission prices could provide better market signals.
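
The supply-stack mechanism at the heart of such fundamental models can be sketched directly: subtract renewable feed-in from demand, then dispatch conventional capacity in merit order until the residual load is met, with the marginal plant setting the price. The plant data below are invented for illustration, and this step-wise stack is a simplification of the paper's piecewise linear approximation:

```python
import numpy as np

def spot_price(demand, renewables, capacity, marg_cost):
    """Merit-order (bid-stack) price formation: renewables shift the
    residual load; conventional plants are dispatched in order of
    marginal cost; the marginal plant sets the price."""
    residual = max(demand - renewables, 0.0)
    order = np.argsort(marg_cost)
    served = 0.0
    for i in order:
        served += capacity[i]
        if served >= residual:
            return float(marg_cost[i])
    return float(marg_cost[order[-1]])   # scarcity: costliest plant sets price
```

A higher CO2 price raises the marginal cost of emitting plants and can reorder the stack, which is how emission prices feed into power prices in models of this kind.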

  3. Assessing Internet addiction using the parsimonious Internet addiction components model - a preliminary study [forthcoming]

    OpenAIRE

    Kuss, DJ; Shorter, GW; Van Rooij, AJ; Griffiths, MD; Schoenmakers, T

    2014-01-01

    Internet usage has grown exponentially over the last decade. Research indicates that excessive Internet use can lead to symptoms associated with addiction. To date, assessment of potential Internet addiction has varied in the populations studied and the instruments used, making reliable prevalence estimation difficult. To overcome these problems, a preliminary study was conducted testing a parsimonious Internet addiction components model based on Griffiths' addiction components (2005), i…

  4. Bayesian, Maximum Parsimony and UPGMA Models for Inferring the Phylogenies of Antelopes Using Mitochondrial Markers

    OpenAIRE

    Khan, Haseeb A.; Arif, Ibrahim A.; Bahkali, Ali H.; Al Farhan, Ahmad H.; Al Homaidan, Ali A.

    2008-01-01

    This investigation aimed to compare the inference of antelope phylogenies resulting from the 16S rRNA, cytochrome-b (cyt-b) and d-loop segments of mitochondrial DNA using three different computational models: Bayesian (BA), maximum parsimony (MP), and unweighted pair group method with arithmetic mean (UPGMA). The respective nucleotide sequences of three Oryx species (Oryx leucoryx, Oryx dammah and Oryx gazella) and an out-group (Addax nasomaculatus) were aligned and subjected to B…

  5. A class representative model for Pure Parsimony Haplotyping under uncertain data.

    Directory of Open Access Journals (Sweden)

    Daniele Catanzaro

    The Pure Parsimony Haplotyping (PPH) problem is an NP-hard combinatorial optimization problem that consists of finding the minimum number of haplotypes necessary to explain a given set of genotypes. PPH has attracted more and more attention in recent years due to its importance in the analysis of fine-scale genetic data. Its application fields range from mapping complex disease genes to inferring population histories, passing through designing drugs, functional genomics, and pharmacogenetics. In this article we investigate, for the first time, a recent version of PPH called the Pure Parsimony Haplotyping problem under Uncertain Data (PPH-UD). This version mainly arises when the input genotypes are not accurate, i.e., when some single nucleotide polymorphisms are missing or affected by errors. We propose an exact approach to solving PPH-UD based on an extended version of the class representative model for PPH of Catanzaro et al. [1], currently the state-of-the-art integer programming model for PPH. The model is efficient, accurate, compact, polynomial-sized, easy to implement, solvable with any solver for mixed integer programming, and usable in all those cases for which the parsimony criterion is well suited for haplotype estimation.
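
To make the PPH problem concrete: genotypes are strings over {0,1,2} (2 marking heterozygous sites), and a pair of binary haplotypes resolves a genotype if it matches it at homozygous sites and differs at heterozygous ones. The exhaustive toy solver below works only on tiny instances; the NP-hardness noted above is exactly why integer programming formulations such as the class representative model exist:

```python
from itertools import combinations, product

def explains(h1, h2, g):
    """True if haplotype pair (h1, h2) resolves genotype g, where g uses
    0/1 for homozygous sites and 2 for heterozygous sites."""
    return all((c == '2' and a != b) or (c != '2' and a == b == c)
               for a, b, c in zip(h1, h2, g))

def pph(genotypes):
    """Exhaustive Pure Parsimony Haplotyping for tiny instances: the
    smallest haplotype set resolving every genotype. Enumerates subsets
    of all 2^m haplotypes -- the blow-up that motivates the ILP models."""
    m = len(genotypes[0])
    universe = [''.join(bits) for bits in product('01', repeat=m)]
    for k in range(1, len(universe) + 1):
        for cand in combinations(universe, k):
            if all(any(explains(h1, h2, g)
                       for h1 in cand for h2 in cand)
                   for g in genotypes):
                return list(cand)
```

Even this two-site example needs a third haplotype, hinting at how quickly the combinatorics grow with many heterozygous sites.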

  6. Beyond technology acceptance to effective technology use: a parsimonious and actionable model.

    Science.gov (United States)

    Holahan, Patricia J; Lesselroth, Blake J; Adams, Kathleen; Wang, Kai; Church, Victoria

    2015-05-01

    To develop and test a parsimonious and actionable model of effective technology use (ETU). Cross-sectional survey of primary care providers (n = 53) in a large integrated health care organization that recently implemented new medication reconciliation technology. Surveys assessed 5 technology-related perceptions (compatibility with work values, implementation climate, compatibility with work processes, perceived usefulness, and ease of use) and 1 outcome variable, ETU. ETU was measured as both consistency and quality of technology use. Compatibility with work values and implementation climate were found to have differential effects on consistency and quality of use. When implementation climate was strong, consistency of technology use was high. However, quality of technology use was high only when implementation climate was strong and values compatibility was high. This is an important finding and highlights the importance of users' workplace values as a key determinant of quality of use. To extend our effectiveness in implementing new health care information technology, we need parsimonious models that include actionable determinants of ETU and account for the differential effects of these determinants on the multiple dimensions of ETU. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Urban micro-scale flood risk estimation with parsimonious hydraulic modelling and census data

    Directory of Open Access Journals (Sweden)

    C. Arrighi

    2013-05-01

    The adoption of the 2007/60/EC Directive requires European countries to implement flood hazard and flood risk maps by the end of 2013. Flood risk is the product of flood hazard, vulnerability, and exposure, all three to be estimated with a comparable level of accuracy. The route to flood risk assessment is consequently much more than hydraulic modelling of inundation, i.e., hazard mapping. While hazard maps have already been implemented in many countries, quantitative damage and risk maps are still at a preliminary level. A parsimonious quasi-2-D hydraulic model is adopted here, having many advantages in terms of easy set-up. It is shown to be accurate in flood depth estimation in urban areas when a high-resolution and up-to-date Digital Surface Model (DSM) is available. The accuracy, estimated by comparison with marble-plate records of a historic flood in the city of Florence, is characterized in the downtown's most flooded area by a bias of a very few centimetres and a determination coefficient of 0.73. The average risk is found to be about 14 € m−2 yr−1, corresponding to about 8.3% of residents' income. The spatial distribution of estimated risk highlights a complex interaction between the flood pattern and the building characteristics. As a final example application, the estimated risk values have been used to compare different retrofitting measures. Proceeding through the risk estimation steps, a new micro-scale potential damage assessment method is proposed. This is based on the georeferenced census system as the optimal compromise between spatial detail and open availability of socio-economic data. The results of flood risk assessment at the census section scale resolve most of the risk spatial variability, and they can easily be aggregated to whatever upper scale is needed, given that they are geographically defined as contiguous polygons. Damage is calculated through stage–damage curves, starting from census data on building type and…
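
One standard way damage estimates from stage–damage curves are condensed into a risk figure such as the € m−2 yr−1 value above is the expected annual damage (EAD): integrating damage over annual exceedance probability. A minimal sketch using the trapezoidal rule over a few return periods; the input numbers in the example are invented, not taken from the Florence study:

```python
import numpy as np

def expected_annual_damage(return_periods, damages):
    """Expected annual damage (EAD): integrate damage over annual
    exceedance probability (p = 1/T) with the trapezoidal rule.
    Ignores damage outside the range of supplied return periods."""
    p = 1.0 / np.asarray(return_periods, dtype=float)
    d = np.asarray(damages, dtype=float)
    idx = np.argsort(p)                 # integrate along increasing p
    p, d = p[idx], d[idx]
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(p)))
```

Comparing retrofitting measures then amounts to recomputing the EAD with the damage values each measure would leave.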

  8. Data driven discrete-time parsimonious identification of a nonlinear state-space model for a weakly nonlinear system with short data record

    Science.gov (United States)

    Relan, Rishi; Tiels, Koen; Marconato, Anna; Dreesen, Philippe; Schoukens, Johan

    2018-05-01

Many real-world systems exhibit quasi-linear or weakly nonlinear behavior during normal operation, and a hard saturation effect for high peaks of the input signal. In this paper, a methodology is proposed to identify a parsimonious discrete-time nonlinear state-space (NLSS) model of a nonlinear dynamical system from a relatively short data record. The capability of the NLSS model structure is demonstrated by introducing two different initialisation schemes, one of them using multivariate polynomials. In addition, a method using first-order information of the multivariate polynomials and tensor decomposition is employed to obtain a parsimonious decoupled representation of the set of multivariate real polynomials estimated during the identification of the NLSS model. Finally, the model structure is verified experimentally on the cascaded water tanks benchmark identification problem.

  9. Stochastic rainfall modeling in West Africa: Parsimonious approaches for domestic rainwater harvesting assessment

    Science.gov (United States)

    Cowden, Joshua R.; Watkins, David W., Jr.; Mihelcic, James R.

    2008-10-01

Several parsimonious stochastic rainfall models are developed and compared for application to domestic rainwater harvesting (DRWH) assessment in West Africa. Worldwide, improved water access rates are lowest for Sub-Saharan Africa, including the West African region, and these low rates have important implications for the health and economy of the region. DRWH is proposed as a potential mechanism for water supply enhancement, especially for poor urban households in the region, and its assessment is essential for development planning and poverty alleviation initiatives. The stochastic rainfall models examined are Markov models and LARS-WG, selected for their availability and ease of use for water planners in the developing world. A first-order Markov occurrence model with a mixed-exponential amount model is selected as the best option among unconditioned Markov models. However, there is no clear advantage in selecting Markov models over the LARS-WG model for DRWH in West Africa, as each model has distinct strengths and weaknesses. A multi-model approach is used in assessing DRWH in the region to illustrate the variability associated with the rainfall models. It is clear that DRWH can be successfully used as a water enhancement mechanism in West Africa for certain times of the year. A 200 L drum storage capacity could potentially optimize these simple, small-roof-area systems for many locations in the region.
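The selected generator, a first-order Markov chain for wet/dry occurrence combined with a mixed-exponential amount model, can be sketched compactly. All parameter values below are hypothetical placeholders, not fitted West African values.

```python
import random

# Sketch of a first-order Markov occurrence model with mixed-exponential
# amounts: tomorrow's wet/dry state depends only on today's state, and wet-day
# depths are drawn from a two-component exponential mixture.
# All parameters are hypothetical placeholders, not fitted values.

def simulate_rain(n_days, p_wd, p_ww, alpha, beta1, beta2, rng):
    """p_wd: P(wet | dry), p_ww: P(wet | wet); amounts in mm."""
    series, wet = [], False
    for _ in range(n_days):
        wet = rng.random() < (p_ww if wet else p_wd)
        if wet:
            mean = beta1 if rng.random() < alpha else beta2
            series.append(rng.expovariate(1.0 / mean))
        else:
            series.append(0.0)
    return series

rng = random.Random(42)
rain = simulate_rain(365, p_wd=0.2, p_ww=0.6, alpha=0.7, beta1=3.0, beta2=15.0, rng=rng)
```

Daily series generated this way can be fed directly into a rainwater-tank mass balance to assess storage sizes such as the 200 L drum mentioned above.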

  10. Modeling the isotopic evolution of snowpack and snowmelt: Testing a spatially distributed parsimonious approach.

    Science.gov (United States)

    Ala-Aho, Pertti; Tetzlaff, Doerthe; McNamara, James P; Laudon, Hjalmar; Kormos, Patrick; Soulsby, Chris

    2017-07-01

Use of stable water isotopes has become increasingly popular in quantifying water flow paths and travel times in hydrological systems using tracer-aided modeling. In snow-influenced catchments, snowmelt produces a traceable isotopic signal, which differs from the original snowfall isotopic composition because of isotopic fractionation in the snowpack. These fractionation processes in snow are relatively well understood, but representing their spatiotemporal variability in tracer-aided studies remains a challenge. We present a novel, parsimonious modeling method to account for snowpack isotope fractionation and estimate isotope ratios in snowmelt water in a fully spatially distributed manner. Our model introduces two calibration parameters that alone account for the isotopic fractionation caused by sublimation from interception and ground snow storage, and for snowmelt fractionation progressively enriching the snowmelt runoff. The isotope routines are linked to a generic process-based snow interception-accumulation-melt model, facilitating simulation of spatially distributed snowmelt runoff. We use a synthetic modeling experiment to demonstrate the functionality of the model algorithms in different landscape locations and under different canopy characteristics. We also provide a proof-of-concept model test and successfully reproduce isotopic ratios in snowmelt runoff sampled with snowmelt lysimeters in two long-term experimental catchments with contrasting winter conditions. To our knowledge, the method is the first such tool to allow estimation of the spatially distributed nature of isotopic fractionation in snowpacks and the resulting isotope ratios in snowmelt runoff. The method can thus provide a useful tool for tracer-aided modeling to better understand the integrated nature of flow, mixing, and transport processes in snow-influenced catchments.

  11. Bayesian, maximum parsimony and UPGMA models for inferring the phylogenies of antelopes using mitochondrial markers.

    Science.gov (United States)

    Khan, Haseeb A; Arif, Ibrahim A; Bahkali, Ali H; Al Farhan, Ahmad H; Al Homaidan, Ali A

    2008-10-06

This investigation aimed to compare the inference of antelope phylogenies resulting from the 16S rRNA, cytochrome-b (cyt-b) and d-loop segments of mitochondrial DNA using three different computational models: Bayesian (BA), maximum parsimony (MP) and unweighted pair group method with arithmetic mean (UPGMA). The respective nucleotide sequences of three Oryx species (Oryx leucoryx, Oryx dammah and Oryx gazella) and an out-group (Addax nasomaculatus) were aligned and subjected to BA, MP and UPGMA analyses for comparing the topologies of the resulting phylogenetic trees. The 16S rRNA region possessed the highest frequency of conserved sequences (97.65%) followed by cyt-b (94.22%) and d-loop (87.29%). Across the four taxa there were few transitions (2.35%) and no transversions in 16S rRNA, compared with cyt-b (5.61% transitions and 0.17% transversions) and d-loop (11.57% transitions and 1.14% transversions). All three mitochondrial segments clearly differentiated the genus Addax from Oryx using the BA or UPGMA models. The topologies of all the gamma-corrected Bayesian trees were identical irrespective of the marker type. The UPGMA trees resulting from 16S rRNA and d-loop sequences were also identical to the Bayesian trees (Oryx dammah grouped with Oryx leucoryx), except that the UPGMA tree based on cyt-b showed a slightly different phylogeny (Oryx dammah grouped with Oryx gazella) with low bootstrap support. However, the MP model failed to differentiate the genus Addax from Oryx. These findings demonstrate the efficiency and robustness of the BA and UPGMA methods for phylogenetic analysis of antelopes using mitochondrial markers.
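UPGMA, one of the three methods compared above, builds a tree by repeatedly merging the two closest clusters and size-averaging their distances. A minimal sketch follows; the toy distance matrix is illustrative only (not real sequence distances), chosen so that the result mirrors the grouping reported for the Bayesian trees.

```python
# Compact UPGMA sketch: merge the two closest clusters, then average their
# distances to the rest, weighted by cluster size. The toy distances are
# illustrative, not computed from real antelope sequences.

def upgma(labels, dist):
    clusters = {lab: 1 for lab in labels}          # cluster name -> leaf count
    d = {frozenset((a, b)): dist[(a, b)] for a in labels for b in labels if a < b}
    while len(clusters) > 1:
        a, b = sorted(min(d, key=d.get))           # closest pair
        merged = f"({a},{b})"
        na, nb = clusters.pop(a), clusters.pop(b)
        del d[frozenset((a, b))]
        for c in clusters:                         # size-weighted average
            dac = d.pop(frozenset((a, c)))
            dbc = d.pop(frozenset((b, c)))
            d[frozenset((merged, c))] = (na * dac + nb * dbc) / (na + nb)
        clusters[merged] = na + nb
    return next(iter(clusters))

tree = upgma(
    ["addax", "dammah", "gazella", "leucoryx"],
    {("addax", "dammah"): 9, ("addax", "gazella"): 9, ("addax", "leucoryx"): 9,
     ("dammah", "gazella"): 6, ("dammah", "leucoryx"): 2, ("gazella", "leucoryx"): 6},
)
```

With these distances the algorithm first joins dammah with leucoryx and leaves addax as the outermost branch, matching the out-group role of Addax in the study.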

  12. Parsimonious model for blood glucose level monitoring in type 2 diabetes patients.

    Science.gov (United States)

    Zhao, Fang; Ma, Yan Fen; Wen, Jing Xiao; DU, Yan Fang; Li, Chun Lin; Li, Guang Wei

    2014-07-01

To establish a parsimonious model for blood glucose monitoring in patients with type 2 diabetes receiving oral hypoglycemic agent treatment, one hundred and fifty-nine adult Chinese type 2 diabetes patients were randomized to receive rapid-acting or sustained-release gliclazide therapy for 12 weeks. Their blood glucose levels were measured at 10 time points in a 24 h period before and after treatment, and the 24 h mean blood glucose (MBG) levels were calculated. The contribution of blood glucose levels to MBG and HbA1c was assessed by multiple regression analysis. The correlation coefficients of the blood glucose levels measured at the 10 time points with the daily MBG were 0.58-0.74 and 0.59-0.79, respectively, before and after treatment (P < 0.05). Blood glucose levels measured at 6 of the 10 time points could explain 95% and 97% of the changes in MBG before and after treatment. The three blood glucose levels measured at fasting, 2 h after breakfast and before dinner could explain 84% and 86% of the changes in MBG before and after treatment, but only 36% and 26% of the changes in HbA1c, and they correlated more poorly with HbA1c than with the 24 h MBG. The blood glucose levels measured at fasting, 2 h after breakfast and before dinner thus truly reflected the change in 24 h blood glucose levels, suggesting that they are appropriate for the self-monitoring of blood glucose in diabetes patients receiving oral anti-diabetes therapy. Copyright © 2014 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
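The "explained variance" figures above are coefficients of determination (R²) from regressions of MBG on subsets of time points. A minimal single-predictor sketch with synthetic data (not patient measurements) shows the computation.

```python
import random

# Sketch of the R² computation behind the abstract: regress the daily mean
# blood glucose (MBG) on one self-monitoring time point and report the
# fraction of variance explained. Data below are synthetic, not clinical.

def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # least-squares slope
    a = my - b * mx                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

rng = random.Random(0)
fasting = [rng.uniform(5, 12) for _ in range(50)]            # mmol/L, synthetic
mbg = [0.8 * f + 2.0 + rng.gauss(0, 0.5) for f in fasting]   # synthetic daily mean
r2 = r_squared(fasting, mbg)
```

The multi-predictor case in the study works the same way, with R² computed from a multiple regression instead of a single slope.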

  13. Parsimonious Surface Wave Interferometry

    KAUST Repository

    Li, Jing

    2017-10-24

    To decrease the recording time of a 2D seismic survey from a few days to one hour or less, we present a parsimonious surface-wave interferometry method. Interferometry allows for the creation of a large number of virtual shot gathers from just two reciprocal shot gathers by crosscoherence of trace pairs, where the virtual surface waves can be inverted for the S-wave velocity model by wave-equation dispersion inversion (WD). Synthetic and field data tests suggest that parsimonious wave-equation dispersion inversion (PWD) gives S-velocity tomograms that are comparable to those obtained from a full survey with a shot at each receiver. The limitation of PWD is that the virtual data lose some information so that the resolution of the S-velocity tomogram can be modestly lower than that of the S-velocity tomogram inverted from a conventional survey.

  14. Parsimonious Surface Wave Interferometry

    KAUST Repository

    Li, Jing; Hanafy, Sherif; Schuster, Gerard T.

    2017-01-01

    To decrease the recording time of a 2D seismic survey from a few days to one hour or less, we present a parsimonious surface-wave interferometry method. Interferometry allows for the creation of a large number of virtual shot gathers from just two reciprocal shot gathers by crosscoherence of trace pairs, where the virtual surface waves can be inverted for the S-wave velocity model by wave-equation dispersion inversion (WD). Synthetic and field data tests suggest that parsimonious wave-equation dispersion inversion (PWD) gives S-velocity tomograms that are comparable to those obtained from a full survey with a shot at each receiver. The limitation of PWD is that the virtual data lose some information so that the resolution of the S-velocity tomogram can be modestly lower than that of the S-velocity tomogram inverted from a conventional survey.

  15. A physically-based parsimonious hydrological model for flash floods in Mediterranean catchments

    Directory of Open Access Journals (Sweden)

    H. Roux

    2011-09-01

A spatially distributed hydrological model dedicated to flood simulation is developed on the basis of physical process representation (infiltration, overland flow, channel routing). Estimation of model parameters requires data concerning topography, soil properties, vegetation and land use. Four parameters are calibrated for the entire catchment using one flood event. Model sensitivity to individual parameters is assessed using Monte Carlo simulations. Results of this sensitivity analysis, with a criterion based on the Nash efficiency coefficient and the errors in peak time and runoff, are used to calibrate the model. This procedure is tested on the Gardon d'Anduze catchment, located in the Mediterranean zone of southern France. A first validation is conducted using three flood events with different hydrometeorological characteristics. This sensitivity analysis along with the validation tests illustrates the predictive capability of the model and points out possible improvements in the model's structure and parameterization for flash flood forecasting, especially in ungauged basins. Concerning the model structure, results show that water transfer through the subsurface zone also contributes to the hydrograph response to an extreme event, especially during the recession period. Maps of soil saturation emphasize the impact of the variability of rainfall and soil properties on these dynamics. Adding a subsurface flow component to the simulation also greatly impacts the spatial distribution of soil saturation and shows the importance of the drainage network. Measurements of such distributed variables would help discriminate between different possible model structures.
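The calibration criterion mentioned above, the Nash efficiency coefficient (Nash-Sutcliffe efficiency), is one minus the ratio of squared model error to the variance of the observations. A compact sketch with made-up discharge values:

```python
# Nash-Sutcliffe efficiency (NSE): 1 means a perfect fit, 0 means the model
# is no better than the mean of the observations. Discharge values below are
# hypothetical, not data from the Gardon d'Anduze catchment.

def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    ss_err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_obs = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_err / ss_obs

obs = [10.0, 45.0, 120.0, 80.0, 30.0]   # hypothetical hydrograph (m³/s)
sim = [12.0, 40.0, 110.0, 85.0, 28.0]
nse = nash_sutcliffe(obs, sim)
```

In a Monte Carlo sensitivity analysis, this score is computed for each sampled parameter set and used to rank candidate calibrations.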

  16. Prediction of dissolved reactive phosphorus losses from small agricultural catchments: calibration and validation of a parsimonious model

    Directory of Open Access Journals (Sweden)

    C. Hahn

    2013-10-01

Eutrophication of surface waters due to diffuse phosphorus (P) losses continues to be a severe water quality problem worldwide, causing the loss of ecosystem functions of the affected water bodies. Phosphorus in runoff often originates from only a small fraction of a catchment. Targeting mitigation measures to these critical source areas (CSAs) is expected to be most efficient and cost-effective, but requires suitable tools. Here we investigated the capability of the parsimonious Rainfall-Runoff-Phosphorus (RRP) model to identify CSAs in grassland-dominated catchments based on readily available soil and topographic data. After simultaneous calibration on runoff data from four small hilly catchments on the Swiss Plateau, the model was validated on a different catchment in the same region without further calibration. The RRP model adequately simulated the discharge and dissolved reactive P (DRP) export from the validation catchment. Sensitivity analysis showed that the model predictions were robust with respect to the classification of soils into "poorly drained" and "well drained" based on the available soil map. Comparing spatial hydrological model predictions with field data from the validation catchment provided further evidence that the assumptions underlying the model are valid and that the model adequately accounts for the dominant P export processes in the target region. Thus, the parsimonious RRP model is a valuable tool for determining CSAs. Despite the considerable predictive uncertainty regarding the spatial extent of CSAs, the RRP model can provide guidance for the implementation of mitigation measures, helping to identify those parts of a catchment where high DRP losses are expected or can be excluded with high confidence. Legacy P was predicted to be the dominant source of DRP losses and thus, in combination with hydrologically active areas, to pose a high risk to water quality.

  17. Dirichlet Process Parsimonious Mixtures for clustering

    OpenAIRE

    Chamroukhi, Faicel; Bartcus, Marius; Glotin, Hervé

    2015-01-01

The parsimonious Gaussian mixture models, which exploit an eigenvalue decomposition of the group covariance matrices of the Gaussian mixture, have shown their success in particular in cluster analysis. Their estimation is in general performed by maximum likelihood and has also been considered from a parametric Bayesian perspective. We propose new Dirichlet Process Parsimonious Mixtures (DPPM), which represent a Bayesian nonparametric formulation of these parsimonious Gaussian mixtur...

  18. The dynamic effect of exchange-rate volatility on Turkish exports: Parsimonious error-correction model approach

    Directory of Open Access Journals (Sweden)

    Demirhan Erdal

    2015-01-01

This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and a parsimonious error-correction model are applied to determine the long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of a GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of the variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
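The volatility proxy above is the conditional variance of a GARCH(1,1) model, h_t = ω + α·e²_{t−1} + β·h_{t−1}. A minimal sketch of the recursion, with illustrative (not estimated) parameters and returns:

```python
# GARCH(1,1) conditional variance recursion, seeded at the unconditional
# variance omega / (1 - alpha - beta). Parameters and returns are
# illustrative placeholders, not values estimated from Turkish export data.

def garch_variance(returns, omega, alpha, beta):
    h = [omega / (1.0 - alpha - beta)]      # unconditional variance as start
    for e in returns[:-1]:
        h.append(omega + alpha * e ** 2 + beta * h[-1])
    return h

rets = [0.01, -0.03, 0.02, 0.05, -0.01]     # hypothetical monthly log returns
h = garch_variance(rets, omega=1e-5, alpha=0.1, beta=0.85)
```

The resulting series h would then enter the error-correction model as the exchange-rate stability regressor.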

  19. Flood modelling with a distributed event-based parsimonious rainfall-runoff model: case of the karstic Lez river catchment

    Directory of Open Access Journals (Sweden)

    M. Coustau

    2012-04-01

Rainfall-runoff models are crucial tools for the statistical prediction of flash floods and real-time forecasting. This paper focuses on a karstic basin in the south of France and proposes a distributed parsimonious event-based rainfall-runoff model, consistent with the poor knowledge of both evaporative and underground fluxes. The model combines an SCS runoff model and a Lag-and-Route routing model for each cell of a regular grid mesh. The efficiency of the model is discussed with respect not only to satisfactorily simulating floods but also to obtaining powerful relationships between the initial condition of the model and various predictors of the initial wetness state of the basin, such as the base flow, the Hu2 index from the Météo-France SIM model and the piezometric levels of the aquifer. The advantage of using meteorological radar rainfall in flood modelling is also assessed. Model calibration proved to be satisfactory at an hourly time step, with Nash criterion values ranging between 0.66 and 0.94 for eighteen of the twenty-one selected events. The radar rainfall inputs significantly improved the simulations or the assessment of the initial condition of the model for 5 events at the beginning of autumn, mostly in September–October (mean improvement in Nash is 0.09; corrections in the initial condition range from −205 to 124 mm), but were less efficient for the events at the end of autumn. In this period, the weak vertical extension of the precipitation system and the low altitude of the 0 °C isotherm could affect the efficiency of radar measurements due to the distance between the basin and the radar (~60 km). The model initial condition S is correlated with the three tested predictors (R2 > 0.6). The interpretation of the model suggests that groundwater does not affect the first peaks of the flood, but can strongly impact subsequent peaks in the case of a multi-storm event. Because this kind of model is based on a limited
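The runoff component above is the SCS curve-number method: retention S = 25400/CN − 254 (mm), initial abstraction Ia = 0.2·S, and runoff Q = (P − Ia)²/(P − Ia + S) for P > Ia. A small sketch with illustrative values:

```python
# SCS curve-number runoff in SI units (mm). The curve number CN and the
# storm depth below are illustrative, not calibrated Lez-catchment values.

def scs_runoff(p_mm, cn):
    s = 25400.0 / cn - 254.0        # potential maximum retention (mm)
    ia = 0.2 * s                    # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0                  # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = scs_runoff(100.0, 80.0)         # 100 mm storm on a CN = 80 cell
```

In the distributed model described above, this computation runs per grid cell, and the cell runoff is then delayed and attenuated by the Lag-and-Route scheme.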

  20. Parsimonious refraction interferometry

    KAUST Repository

    Hanafy, Sherif

    2016-09-06

We present parsimonious refraction interferometry, where a densely populated refraction data set can be obtained from just two shot gathers. The assumptions are that the first arrivals are composed of head waves and direct waves, and that a pair of reciprocal shot gathers is recorded over the line of interest. The refraction traveltimes from these reciprocal shot gathers can be picked and decomposed into O(N²) refraction traveltimes generated by N virtual sources, where N is the number of geophones in the 2D survey. This enormous increase in the number of virtual traveltime picks and associated rays, compared to the 2N traveltimes from the two reciprocal shot gathers, allows for increased model resolution and better condition numbers in the normal equations. Also, a reciprocal survey is far less time-consuming than a standard refraction survey with a dense distribution of sources.
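The O(N²) bookkeeping can be illustrated schematically: from first-arrival picks of two reciprocal shot gathers, a virtual head-wave time between a geophone pair can be formed as t_A(j) + t_B(i) − t_AB, with t_AB the reciprocal traveltime between the two shots. The picks below are made up, and this single relation is a simplified stand-in for the paper's full decomposition, not its complete workflow.

```python
# Schematic sketch of building O(N^2) virtual refraction traveltimes from
# 2N picks of two reciprocal shot gathers. t_a and t_b are hypothetical
# first-arrival picks; t_ab is the reciprocal time between the two shots.

def virtual_traveltimes(t_a, t_b, t_ab):
    n = len(t_a)
    return {(i, j): t_a[j] + t_b[i] - t_ab
            for i in range(n) for j in range(n) if i != j}

t_a = [0.10, 0.14, 0.18, 0.22]      # picks from shot A (s), hypothetical
t_b = [0.22, 0.18, 0.14, 0.10]      # picks from reciprocal shot B (s)
virt = virtual_traveltimes(t_a, t_b, t_ab=0.24)
```

For N = 4 geophones this already yields N(N − 1) = 12 virtual picks from only 2N = 8 recorded ones, which is the resolution gain the abstract describes.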

  1. Parsimonious refraction interferometry

    KAUST Repository

    Hanafy, Sherif; Schuster, Gerard T.

    2016-01-01

We present parsimonious refraction interferometry, where a densely populated refraction data set can be obtained from just two shot gathers. The assumptions are that the first arrivals are composed of head waves and direct waves, and that a pair of reciprocal shot gathers is recorded over the line of interest. The refraction traveltimes from these reciprocal shot gathers can be picked and decomposed into O(N²) refraction traveltimes generated by N virtual sources, where N is the number of geophones in the 2D survey. This enormous increase in the number of virtual traveltime picks and associated rays, compared to the 2N traveltimes from the two reciprocal shot gathers, allows for increased model resolution and better condition numbers in the normal equations. Also, a reciprocal survey is far less time-consuming than a standard refraction survey with a dense distribution of sources.

  2. Between Complexity and Parsimony: Can Agent-Based Modelling Resolve the Trade-off

    DEFF Research Database (Denmark)

    Nielsen, Helle Ørsted; Malawska, Anna Katarzyna

    2013-01-01

While Herbert Simon espoused the development of general models of behavior, he also strongly advocated that these models be based on realistic assumptions about humans and therefore reflect the complexity of human cognition and social systems (Simon 1997). Hence, the model of bounded rationality […] to BR-based policy studies would be to couple research on bounded rationality with agent-based modeling. Agent-based models (ABMs) are computational models for simulating the behavior and interactions of any number of decision makers in a dynamic system. Agent-based models are better suited than are general equilibrium models for capturing the behavior patterns of complex systems. ABMs may have the potential to represent complex systems without oversimplifying them. At the same time, research in bounded rationality and behavioral economics has already yielded many insights that could inform the modeling […]

  3. A Parsimonious Model of the Rabbit Action Potential Elucidates the Minimal Physiological Requirements for Alternans and Spiral Wave Breakup.

    Science.gov (United States)

    Gray, Richard A; Pathmanathan, Pras

    2016-10-01

Elucidating the underlying mechanisms of fatal cardiac arrhythmias requires a tight integration of electrophysiological experiments, models, and theory. Existing models of the transmembrane action potential (AP) are complex (resulting in over-parameterization) and varied (leading to dissimilar predictions). Thus, simpler models are needed to elucidate the "minimal physiological requirements" to reproduce significant observable phenomena using as few parameters as possible. Moreover, models have been derived from experimental studies of a variety of species under a range of environmental conditions (for example, all existing rabbit AP models incorporate a formulation of the rapid sodium current, INa, based on 30-year-old data from chick embryo cell aggregates). Here we develop a simple "parsimonious" rabbit AP model that is mathematically identifiable (i.e., not over-parameterized) by combining a novel Hodgkin-Huxley formulation of INa with a phenomenological model of repolarization similar to the voltage-dependent, time-independent rectifying outward potassium current (IK). The model was calibrated using the following experimental data sets measured from the same species (rabbit) under physiological conditions: dynamic current-voltage (I-V) relationships during the AP upstroke; rapid recovery of AP excitability during the relative refractory period; and steady-state INa inactivation via voltage clamp. Simulations reproduced several important "emergent" phenomena, including cellular alternans at rates > 250 bpm as observed in rabbit myocytes, reentrant spiral waves as observed on the surface of the rabbit heart, and spiral wave breakup. Model variants were studied which elucidated the minimal requirements for alternans and spiral wave breakup, namely the kinetics of INa inactivation and the non-linear rectification of IK. The simplicity of the model, and the fact that its parameters have physiological meaning, make it ideal for engendering generalizable mechanistic

  4. Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis

    Science.gov (United States)

    Bradley, James R.

    2012-01-01

This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required for development of that model is then determined; we conclude that six hours of training are required to teach the skills to MBA students. The simulation presented here contains all the fundamental functionality of a simulation model, and so our result holds for any discrete-event simulation model. We argue, therefore, that industry workers with the same technical skill set as students who have completed one year in an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.
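The "fundamental functionality" of a discrete-event simulation, an event list processed in time order with state updates, fits in a few lines outside Excel as well. A minimal single-server FIFO queue, with fixed illustrative arrival and service times:

```python
import heapq

# Minimal discrete-event simulation sketch: a single-server FIFO queue driven
# by a heap of timed arrival events. Arrival and service times are fixed
# illustrative values, not data from the paper's supply-chain model.

def simulate_queue(arrivals, service_time):
    """Return the departure time of each customer, in arrival order."""
    events = [(t, i) for i, t in enumerate(arrivals)]
    heapq.heapify(events)                   # event list ordered by time
    free_at, departures = 0.0, {}
    while events:
        t, i = heapq.heappop(events)
        start = max(t, free_at)             # wait if the server is busy
        free_at = start + service_time
        departures[i] = free_at
    return [departures[i] for i in range(len(arrivals))]

deps = simulate_queue([0.0, 1.0, 1.5, 5.0], service_time=2.0)
```

The Excel version described in the paper implements the same logic with worksheet formulas; the event-list structure is what makes the result carry over to any discrete-event model.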

  5. Dynamics of pesticide uptake into plants: From system functioning to parsimonious modeling

    DEFF Research Database (Denmark)

    Fantke, Peter; Wieland, Peter; Wannaz, Cedric

    2013-01-01

    Dynamic plant uptake models are suitable for assessing environmental fate and behavior of toxic chemicals in food crops. However, existing tools mostly lack in-depth analysis of system dynamics. Furthermore, no existing model is available as parameterized version that is easily applicable for use...

  6. Where and why hyporheic exchange is important: Inferences from a parsimonious, physically-based river network model

    Science.gov (United States)

    Gomez-Velez, J. D.; Harvey, J. W.

    2014-12-01

    Hyporheic exchange has been hypothesized to have basin-scale consequences; however, predictions throughout river networks are limited by available geomorphic and hydrogeologic data as well as models that can analyze and aggregate hyporheic exchange flows across large spatial scales. We developed a parsimonious but physically-based model of hyporheic flow for application in large river basins: Networks with EXchange and Subsurface Storage (NEXSS). At the core of NEXSS is a characterization of the channel geometry, geomorphic features, and related hydraulic drivers based on scaling equations from the literature and readily accessible information such as river discharge, bankfull width, median grain size, sinuosity, channel slope, and regional groundwater gradients. Multi-scale hyporheic flow is computed based on combining simple but powerful analytical and numerical expressions that have been previously published. We applied NEXSS across a broad range of geomorphic diversity in river reaches and synthetic river networks. NEXSS demonstrates that vertical exchange beneath submerged bedforms dominates hyporheic fluxes and turnover rates along the river corridor. Moreover, the hyporheic zone's potential for biogeochemical transformations is comparable across stream orders, but the abundance of lower-order channels results in a considerably higher cumulative effect for low-order streams. Thus, vertical exchange beneath submerged bedforms has more potential for biogeochemical transformations than lateral exchange beneath banks, although lateral exchange through meanders may be important in large rivers. These results have implications for predicting outcomes of river and basin management practices.

  7. Introduction to the special issue: parsimony and redundancy in models of language.

    Science.gov (United States)

    Wiechmann, Daniel; Kerz, Elma; Snider, Neal; Jaeger, T Florian

    2013-09-01

One of the most fundamental goals in linguistic theory is to understand the nature of linguistic knowledge, that is, the representations and mechanisms that figure in a cognitively plausible model of human language-processing. The past 50 years have witnessed the development and refinement of various theories about what kind of 'stuff' human knowledge of language consists of, and technological advances now permit the development of increasingly sophisticated computational models implementing key assumptions of different theories from both rationalist and empiricist perspectives. The present special issue does not aim to present or discuss the arguments for and against the two epistemological stances or discuss evidence that supports either of them (cf. Bod, Hay, & Jannedy, 2003; Christiansen & Chater, 2008; Hauser, Chomsky, & Fitch, 2002; Oaksford & Chater, 2007; O'Donnell, Hauser, & Fitch, 2005). Rather, the research presented in this issue, which we label usage-based here, conceives of linguistic knowledge as being induced from experience. According to the strongest of such accounts, the acquisition and processing of language can be explained with reference to general cognitive mechanisms alone (rather than with reference to innate language-specific mechanisms). Defined in these terms, usage-based approaches encompass approaches referred to as experience-based, performance-based and/or emergentist approaches (Arnon & Snider, 2010; Bannard, Lieven, & Tomasello, 2009; Bannard & Matthews, 2008; Chater & Manning, 2006; Clark & Lappin, 2010; Gerken, Wilson, & Lewis, 2005; Gomez, 2002;

  8. Enhancement of a parsimonious water balance model to simulate surface hydrology in a glacierized watershed

    Science.gov (United States)

    Valentin, Melissa M.; Viger, Roland J.; Van Beusekom, Ashley E.; Hay, Lauren E.; Hogue, Terri S.; Foks, Nathan Leon

    2018-01-01

The U.S. Geological Survey monthly water balance model (MWBM) was enhanced with the capability to simulate glaciers in order to make it more suitable for simulating cold-region hydrology. The new model, MWBMglacier, is demonstrated in the heavily glacierized and ecologically important Copper River watershed in Southcentral Alaska. Simulated water budget components compared well to satellite-based observations and ground measurements of streamflow, evapotranspiration, snow extent, and total water storage, with differences ranging from 0.2% to 7% of the precipitation flux. Nash-Sutcliffe efficiency for simulated and observed streamflow was greater than 0.8 for six of eight stream gages. Snow extent matched satellite-based observations with Nash-Sutcliffe efficiency values greater than 0.89 in the four Copper River ecoregions represented. During the simulation period 1949 to 2009, glacier ice melt contributed 25% of total runoff, ranging from 12% to 45% in different tributaries, and glacierized area was reduced by 6%. Statistically significant (p < 0.05) decreasing and increasing trends in annual glacier mass balance occurred during the multidecade cool and warm phases of the Pacific Decadal Oscillation, respectively, reinforcing the link between climate perturbations and glacier mass balance change. The simulations of glaciers and total runoff for a large, remote region of Alaska provide useful data to evaluate hydrologic, cryospheric, ecologic, and climatic trends. MWBMglacier is a valuable tool to understand when, and to what extent, streamflow may increase or decrease as glaciers respond to a changing climate.

  9. Hydrologic behaviour of the Lake of Monate (Italy): a parsimonious modelling strategy

    Science.gov (United States)

    Tomesani, Giulia; Soligno, Irene; Castellarin, Attilio; Baratti, Emanuele; Cervi, Federico; Montanari, Alberto

    2016-04-01

The Lake of Monate (province of Varese, northern Italy) is a unique example of an ecosystem in equilibrium. The lake water quality is deemed excellent notwithstanding the intensive agricultural cultivation, industrial assets and mining activities characterising the surrounding areas. The lake has a true touristic vocation and is the only swimmable water body of the province of Varese, which counts several natural lakes. The Lake of Monate has no tributary, and its overall watershed area is equal to ca. 6.6 km2 including the lake surface (i.e. 2.6 km2); of the remaining ca. 4.0 km2 of land, 3.3 km2 belong to the topographical watershed, while 0.7 km2 belong to the underground watershed. The latter extends beyond the topographical watershed due to the presence of moraine formations on top of the limestone bedrock. The local administration recently promoted an intensive environmental monitoring campaign that aims to reach a better understanding of the hydrology of the lake and the subsurface water fluxes. The monitoring campaign started in October 2013 and, as a result, several meteo-climatic and hydrologic data sets have been collected so far at daily and hourly timescales. Our study focuses on a preliminary representation of the hydrological behaviour of the lake through a modified version of HyMOD, a conceptual 5-parameter lumped rainfall-runoff model based on a probability-distributed soil storage capacity. The modified model is a semi-distributed application of HyMOD that uses the same five parameters as the original version and simulates the rainfall-runoff transformation for the whole lake watershed at a daily time scale in terms of: direct precipitation on, and evaporation from, the lake surface; overall lake inflow, separating the runoff component (topographical watershed) from the groundwater component (overall watershed); lake water-level oscillation; and streamflow at the lake outlet.
We used the first year of hydrometeorological observations as calibration data and
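The daily lake water balance described in the record above can be caricatured with a toy routine. The linear-reservoir outlet, the initial storage and every parameter value below are hypothetical illustrations, not the calibrated components of the modified HyMOD model.

```python
# Illustrative daily water balance for a lake with no tributary, fed by
# surface runoff and groundwater inflow, as in the record above.
# All names and numbers are hypothetical, not the paper's calibration.

def lake_water_balance(precip_mm, evap_mm, q_surface_m3, q_ground_m3,
                       lake_area_m2=2.6e6, outlet_coeff=0.05, storage_m3=1.0e7):
    """Step the lake storage forward one day; return (new storage, outflow)."""
    # Direct precipitation on, and evaporation from, the lake surface
    storage_m3 += (precip_mm - evap_mm) / 1000.0 * lake_area_m2
    # Runoff from the topographic watershed plus groundwater inflow
    storage_m3 += q_surface_m3 + q_ground_m3
    # Assumed linear-reservoir outlet: outflow proportional to storage
    q_out = outlet_coeff * storage_m3
    return storage_m3 - q_out, q_out
```

Calling this once per day with observed forcing would produce the simulated water-level oscillation and outlet streamflow the abstract mentions.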

  10. Evapotranspiration estimation using a parameter-parsimonious energy partition model over Amazon basin

    Science.gov (United States)

    Xu, D.; Agee, E.; Wang, J.; Ivanov, V. Y.

    2017-12-01

The increased frequency and severity of droughts in the Amazon region have emphasized the potential vulnerability of the rainforests to heat- and drought-induced stresses, highlighting the need to reduce the uncertainty in estimates of regional evapotranspiration (ET) and to quantify the resilience of the forest. Ground-based observations for estimating ET are resource intensive, making methods based on remotely sensed observations an attractive alternative. Several methodologies have been developed to estimate ET from satellite data, but challenges remain in model parameterization and in limited satellite coverage, reducing their utility for monitoring biodiverse regions. In this work, we apply a novel surface energy partition method (Maximum Entropy Production; MEP) based on Bayesian probability theory and nonequilibrium thermodynamics to derive ET time series from satellite data for the Amazon basin. For a large, sparsely monitored region such as the Amazon, this approach has the advantage of requiring only single-level measurements of net radiation, temperature, and specific humidity. Furthermore, it is not sensitive to the uncertainty of the input data and model parameters. In this first application of MEP theory to a tropical forest biome, we assess its performance at various spatiotemporal scales against diverse field data sets. Specifically, the objective of this work is to test the method using eddy flux data for several locations across Amazonia at sub-daily, monthly, and annual scales and to compare the new estimates with those from traditional methods. Analyses of the derived ET time series will contribute to reducing the current knowledge gap surrounding the much-debated response of the Amazon basin to droughts and offer a template for monitoring long-term changes in the global hydrologic cycle due to anthropogenic and natural causes.

  11. Catchment legacies and time lags: a parsimonious watershed model to predict the effects of legacy storage on nitrogen export.

    Directory of Open Access Journals (Sweden)

    Kimberly J Van Meter

Full Text Available Nutrient legacies in anthropogenic landscapes, accumulated over decades of fertilizer application, lead to time lags between implementation of conservation measures and improvements in water quality. Quantification of such time lags has remained difficult, however, due to an incomplete understanding of controls on nutrient depletion trajectories after changes in land-use or management practices. In this study, we have developed a parsimonious watershed model for quantifying catchment-scale time lags based on both soil nutrient accumulations (biogeochemical legacy) and groundwater travel time distributions (hydrologic legacy). The model accurately predicted the time lags observed in an Iowa watershed that had undergone a 41% conversion of area from row crop to native prairie. We explored the time scales of change for stream nutrient concentrations as a function of both natural and anthropogenic controls, from topography to spatial patterns of land-use change. Our results demonstrate that the existence of biogeochemical nutrient legacies increases time lags beyond those due to hydrologic legacy alone. In addition, we show that the maximum concentration reduction benefits vary according to the spatial pattern of intervention, with preferential conversion of land parcels having the shortest catchment-scale travel times providing proportionally greater concentration reductions as well as faster response times. In contrast, a random pattern of conversion results in a 1:1 relationship between percent land conversion and percent concentration reduction, irrespective of denitrification rates within the landscape. Our modeling framework allows for the quantification of tradeoffs between costs associated with implementation of conservation measures and the time needed to see the desired concentration reductions, making it of great value to decision makers regarding optimal implementation of watershed conservation measures.
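The hydrologic-legacy idea above — export lagging behind input changes because of groundwater travel times — can be sketched as a convolution of a historical input series with a travel-time distribution. The exponential travel-time distribution and all parameter values here are illustrative assumptions, not the calibrated watershed model of the paper.

```python
import math

# Hedged sketch: catchment export as the convolution of historical inputs
# with an assumed exponential groundwater travel-time distribution,
# illustrating how legacy storage delays the response to a land-use change.

def legacy_export(inputs, mean_travel_time_yr=10.0):
    """Annual export series from annual inputs via an exponential travel-time pdf."""
    k = 1.0 / mean_travel_time_yr
    export = []
    for t in range(len(inputs)):
        # Sum contributions from all past years, weighted by the travel-time pdf
        export.append(sum(inputs[tau] * k * math.exp(-k * (t - tau))
                          for tau in range(t + 1)))
    return export

# A step reduction in inputs at year 20 produces only a gradual, lagged
# decline in export -- the time-lag behaviour described in the abstract.
series = [100.0] * 20 + [40.0] * 30
export = legacy_export(series)
```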

  12. The transboundary non-renewable Nubian Aquifer System of Chad, Egypt, Libya and Sudan: classical groundwater questions and parsimonious hydrogeologic analysis and modeling

    Science.gov (United States)

    Voss, Clifford I.; Soliman, Safaa M.

    2014-03-01

    Parsimonious groundwater modeling provides insight into hydrogeologic functioning of the Nubian Aquifer System (NAS), the world's largest non-renewable groundwater system (belonging to Chad, Egypt, Libya, and Sudan). Classical groundwater-resource issues exist (magnitude and lateral extent of drawdown near pumping centers) with joint international management questions regarding transboundary drawdown. Much of NAS is thick, containing a large volume of high-quality groundwater, but receives insignificant recharge, so water-resource availability is time-limited. Informative aquifer data are lacking regarding large-scale response, providing only local-scale information near pumps. Proxy data provide primary underpinning for understanding regional response: Holocene water-table decline from the previous pluvial period, after thousands of years, results in current oasis/sabkha locations where the water table still intersects the ground. Depletion is found to be controlled by two regional parameters, hydraulic diffusivity and vertical anisotropy of permeability. Secondary data that provide insight are drawdowns near pumps and isotope-groundwater ages (million-year-old groundwaters in Egypt). The resultant strong simply structured three-dimensional model representation captures the essence of NAS regional groundwater-flow behavior. Model forecasts inform resource management that transboundary drawdown will likely be minimal—a nonissue—whereas drawdown within pumping centers may become excessive, requiring alternative extraction schemes; correspondingly, significant water-table drawdown may occur in pumping centers co-located with oases, causing oasis loss and environmental impacts.

  13. The Feeding Practices and Structure Questionnaire (FPSQ-28): A parsimonious version validated for longitudinal use from 2 to 5 years.

    Science.gov (United States)

    Jansen, Elena; Williams, Kate E; Mallan, Kimberley M; Nicholson, Jan M; Daniels, Lynne A

    2016-05-01

Prospective studies and intervention evaluations that examine change over time assume that measurement tools measure the same construct at each occasion. In the area of parent-child feeding practices, the longitudinal measurement properties of the questionnaires used are rarely verified. To ascertain that measured change in feeding practices reflects true change rather than change in the assessment, structure, or conceptualisation of the constructs over time, this study examined longitudinal measurement invariance of the Feeding Practices and Structure Questionnaire (FPSQ) subscales (9 constructs; 40 items) across 3 time points. Mothers participating in the NOURISH trial reported their feeding practices when children were aged 2, 3.7, and 5 years (N = 404). Confirmatory Factor Analysis (CFA) within a structural equation modelling framework was used. Comparisons of initial cross-sectional models, followed by longitudinal modelling of subscales, resulted in the removal of 12 items, including two redundant or poorly performing subscales. The resulting 28-item FPSQ-28 comprised 7 multi-item subscales: Reward for Behaviour, Reward for Eating, Persuasive Feeding, Overt Restriction, Covert Restriction, Structured Meal Setting and Structured Meal Timing. All subscales showed good fit over the 3 time points and each displayed at least partial scalar (thresholds equal) longitudinal measurement invariance. We recommend the use of a separate single-item indicator to assess the family meal setting. This is the first study to examine longitudinal measurement invariance in a feeding practices questionnaire. Invariance was established, indicating that the subscales of the shortened FPSQ-28 can be used with mothers to validly assess change in 7 feeding constructs in samples of children aged 2-5 years. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Parsimonious Refraction Interferometry and Tomography

    KAUST Repository

    Hanafy, Sherif

    2017-02-04

We present parsimonious refraction interferometry and tomography where a densely populated refraction data set can be obtained from two reciprocal and several infill shot gathers. The assumptions are that the refraction arrivals are head waves, and that a pair of reciprocal shot gathers and several infill shot gathers are recorded over the line of interest. Refraction traveltimes from these shot gathers are picked and spawned into O(N²) virtual refraction traveltimes generated by N virtual sources, where N is the number of geophones in the 2D survey. The virtual traveltimes can be inverted to give the velocity tomogram. This enormous increase in the number of traveltime picks and associated rays, compared to the many fewer traveltimes from the reciprocal and infill shot gathers, allows for increased model resolution and a better condition number for the system of normal equations. A significant benefit is that the parsimonious survey and the associated traveltime picking are far less time consuming than those for a standard refraction survey with a dense distribution of sources.
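The spawning of O(N²) virtual picks from O(N) measured ones can be sketched with one common head-wave redatuming relation: the traveltime between geophones x and y is estimated as t_A(y) + t_B(x) − t_AB, where A and B are the reciprocal sources and t_AB the traveltime between them. This formula and all names below are an illustrative assumption about the interferometric step, not the authors' exact processing flow.

```python
# Hedged sketch: build an N x N table of virtual receiver-to-receiver
# head-wave traveltimes from two reciprocal shot gathers. Each geophone
# acts as a virtual source, giving O(N^2) picks from O(N) measurements.

def virtual_traveltimes(t_from_A, t_from_B, t_AB):
    """t_from_A[i]: pick at geophone i from source A; likewise for B.
    t_AB: traveltime between the two reciprocal sources."""
    n = len(t_from_A)
    return [[t_from_A[y] + t_from_B[x] - t_AB for y in range(n)]
            for x in range(n)]
```

The resulting dense traveltime table would then feed a standard traveltime tomography to recover the velocity tomogram.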

  15. A mixed integer linear programming model to reconstruct phylogenies from single nucleotide polymorphism haplotypes under the maximum parsimony criterion

    Science.gov (United States)

    2013-01-01

Background Phylogeny estimation from aligned haplotype sequences has attracted more and more attention in recent years due to its importance in the analysis of fine-scale genetic data. Its application fields range from medical research and drug discovery to epidemiology and population dynamics. The literature on molecular phylogenetics proposes a number of criteria for selecting a phylogeny from among plausible alternatives. Usually, such criteria can be expressed by means of objective functions, and the phylogenies that optimize them are referred to as optimal. One of the most important estimation criteria is parsimony, which states that the optimal phylogeny T∗ for a set H of n haplotype sequences over a common set of variable loci is the one that satisfies the following requirements: (i) it has the shortest length and (ii) it is such that, for each pair of distinct haplotypes hi,hj∈H, the sum of the edge weights belonging to the path from hi to hj in T∗ is not smaller than the observed number of changes between hi and hj. Finding the most parsimonious phylogeny for H involves solving an optimization problem, called the Most Parsimonious Phylogeny Estimation Problem (MPPEP), which is NP-hard in many of its versions. Results In this article we investigate a recent version of the MPPEP that arises when input data consist of single nucleotide polymorphism haplotypes extracted from a population of individuals on a common genomic region. Specifically, we explore the prospects for improving on the implicit enumeration strategy used in previous work by means of a novel problem formulation and a series of strengthening valid inequalities and preliminary symmetry breaking constraints, to more precisely bound the solution space and accelerate implicit enumeration of possible optimal phylogenies. We present the basic formulation and then introduce a series of provable valid constraints to reduce the solution space. We then prove
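Requirement (ii) of the parsimony criterion above — every leaf-to-leaf path in the tree must be at least as long as the observed number of changes between the corresponding haplotypes — can be checked directly on a candidate tree. The tree representation and toy data below are illustrative, not the MILP formulation of the paper.

```python
# Hedged illustration of parsimony requirement (ii): for each pair of
# haplotypes, the weighted path length in the candidate phylogeny must be
# no smaller than their Hamming distance (observed number of changes).

def hamming(h1, h2):
    """Observed number of changes between two equal-length haplotypes."""
    return sum(a != b for a, b in zip(h1, h2))

def path_length(tree, u, v):
    """Length of the unique u-v path in a weighted tree (dict of dicts)."""
    stack, seen = [(u, 0.0)], {u}
    while stack:
        node, dist = stack.pop()
        if node == v:
            return dist
        for nbr, w in tree[node].items():
            if nbr not in seen:
                seen.add(nbr)
                stack.append((nbr, dist + w))
    raise ValueError("nodes not connected")

def satisfies_parsimony_paths(tree, haplotypes):
    """Check requirement (ii) for every pair of labelled haplotypes."""
    names = list(haplotypes)
    return all(path_length(tree, a, b) >= hamming(haplotypes[a], haplotypes[b])
               for i, a in enumerate(names) for b in names[i + 1:])
```

An exact solver would search, among all trees passing this check, for one of minimum total length (requirement (i)).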

  16. Are our dynamic water quality models too complex? A comparison of a new parsimonious phosphorus model, SimplyP, and INCA-P

    Science.gov (United States)

    Jackson-Blake, L. A.; Sample, J. E.; Wade, A. J.; Helliwell, R. C.; Skeffington, R. A.

    2017-07-01

    Catchment-scale water quality models are increasingly popular tools for exploring the potential effects of land management, land use change and climate change on water quality. However, the dynamic, catchment-scale nutrient models in common usage are complex, with many uncertain parameters requiring calibration, limiting their usability and robustness. A key question is whether this complexity is justified. To explore this, we developed a parsimonious phosphorus model, SimplyP, incorporating a rainfall-runoff model and a biogeochemical model able to simulate daily streamflow, suspended sediment, and particulate and dissolved phosphorus dynamics. The model's complexity was compared to one popular nutrient model, INCA-P, and the performance of the two models was compared in a small rural catchment in northeast Scotland. For three land use classes, less than six SimplyP parameters must be determined through calibration, the rest may be based on measurements, while INCA-P has around 40 unmeasurable parameters. Despite substantially simpler process-representation, SimplyP performed comparably to INCA-P in both calibration and validation and produced similar long-term projections in response to changes in land management. Results support the hypothesis that INCA-P is overly complex for the study catchment. We hope our findings will help prompt wider model comparison exercises, as well as debate among the water quality modeling community as to whether today's models are fit for purpose. Simpler models such as SimplyP have the potential to be useful management and research tools, building blocks for future model development (prototype code is freely available), or benchmarks against which more complex models could be evaluated.

  17. Parsimonious Refraction Interferometry and Tomography

    KAUST Repository

    Hanafy, Sherif; Schuster, Gerard T.

    2017-01-01

    We present parsimonious refraction interferometry and tomography where a densely populated refraction data set can be obtained from two reciprocal and several infill shot gathers. The assumptions are that the refraction arrivals are head waves

  18. Maximum parsimony on subsets of taxa.

    Science.gov (United States)

    Fischer, Mareike; Thatte, Bhalchandra D

    2009-09-21

    In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
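Fitch's algorithm, whose reconstruction accuracy the record above analyses, admits a very compact bottom-up sketch on a rooted binary tree: intersect the children's state sets when they overlap, otherwise take the union and count one change. The tree encoding below (a leaf is a state string, an internal node a pair) is an illustrative convention.

```python
# Minimal sketch of Fitch's maximum-parsimony algorithm for a single
# character on a rooted binary tree. A tree is either a leaf state (str)
# or a (left_subtree, right_subtree) tuple. Returns the root state set
# and the parsimony score (number of inferred changes).

def fitch(tree):
    if isinstance(tree, str):          # leaf: observed character state
        return {tree}, 0
    (left, lc), (right, rc) = fitch(tree[0]), fitch(tree[1])
    inter = left & right
    if inter:                          # children agree: intersect, no change
        return inter, lc + rc
    return left | right, lc + rc + 1   # disagreement: union, one change
```

For the tree ((A,A),(A,B)) this returns root set {'A'} with one inferred change; restricting to a subset of taxa simply means running the same routine on a pruned tree, as the paper's question envisions.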

  19. Seeking parsimony in hydrology and water resources technology

    Science.gov (United States)

    Koutsoyiannis, D.

    2009-04-01

The principle of parsimony, also known as the principle of simplicity, the principle of economy and Ockham's razor, advises scientists to prefer the simplest theory among those that fit the data equally well. In this, it is an epistemic principle, but it reflects an ontological characterization that the universe is ultimately parsimonious. Is this principle useful, and can it really be reconciled with, and implemented in, our modelling approaches to complex hydrological systems, whose elements and events are extraordinarily numerous, different and unique? The answer underlying the mainstream hydrological research of the last two decades seems to be negative. Hopes were invested in the power of computers that would enable faithful and detailed representation of the diverse system elements and hydrological processes, based merely on "first principles" and resulting in "physically-based" models that tend to approach in complexity the real-world systems. Today the account of this research endeavour seems not positive, as it did not improve model predictive capacity or process comprehension. A return to parsimonious modelling seems to be again the promising route. The experience from recent research and from comparisons of parsimonious and complicated models indicates that the former can facilitate insight and comprehension, improve accuracy and predictive capacity, and increase efficiency. In addition - and despite the aspiration that "physically based" models will have lower data requirements and even ultimately become "data-free" - parsimonious models require fewer data to achieve the same accuracy as more complicated models. Naturally, the concepts that reconcile the simplicity of parsimonious models with the complexity of hydrological systems are probability theory and statistics. 
Probability theory provides the theoretical basis for moving from a microscopic to a macroscopic view of phenomena, by mapping sets of diverse elements and events of hydrological

  20. Phylogenetic analysis using parsimony and likelihood methods.

    Science.gov (United States)

    Yang, Z

    1996-02-01

    The assumptions underlying the maximum-parsimony (MP) method of phylogenetic tree reconstruction were intuitively examined by studying the way the method works. Computer simulations were performed to corroborate the intuitive examination. Parsimony appears to involve very stringent assumptions concerning the process of sequence evolution, such as constancy of substitution rates between nucleotides, constancy of rates across nucleotide sites, and equal branch lengths in the tree. For practical data analysis, the requirement of equal branch lengths means similar substitution rates among lineages (the existence of an approximate molecular clock), relatively long interior branches, and also few species in the data. However, a small amount of evolution is neither a necessary nor a sufficient requirement of the method. The difficulties involved in the application of current statistical estimation theory to tree reconstruction were discussed, and it was suggested that the approach proposed by Felsenstein (1981, J. Mol. Evol. 17: 368-376) for topology estimation, as well as its many variations and extensions, differs fundamentally from the maximum likelihood estimation of a conventional statistical parameter. Evidence was presented showing that the Felsenstein approach does not share the asymptotic efficiency of the maximum likelihood estimator of a statistical parameter. Computer simulations were performed to study the probability that MP recovers the true tree under a hierarchy of models of nucleotide substitution; its performance relative to the likelihood method was especially noted. The results appeared to support the intuitive examination of the assumptions underlying MP. When a simple model of nucleotide substitution was assumed to generate data, the probability that MP recovers the true topology could be as high as, or even higher than, that for the likelihood method. When the assumed model became more complex and realistic, e.g., when substitution rates were

  1. Reconstructing phylogenetic networks using maximum parsimony.

    Science.gov (United States)

    Nakhleh, Luay; Jin, Guohua; Zhao, Fengmei; Mellor-Crummey, John

    2005-01-01

Phylogenies - the evolutionary histories of groups of organisms - are one of the most widely used tools throughout the life sciences, as well as objects of research within systematics, evolutionary biology, epidemiology, etc. Almost every tool devised to date to reconstruct phylogenies produces trees; yet it is widely understood and accepted that trees oversimplify the evolutionary histories of many groups of organisms, most prominently bacteria (because of horizontal gene transfer) and plants (because of hybrid speciation). Various methods and criteria have been introduced for phylogenetic tree reconstruction. Parsimony is one of the most widely used and studied criteria, and various accurate and efficient heuristics for reconstructing trees based on parsimony have been devised. Jotun Hein suggested a straightforward extension of the parsimony criterion to phylogenetic networks. In this paper we formalize this concept and provide the first experimental study of the quality of parsimony as a criterion for constructing and evaluating phylogenetic networks. Our results show that, when extended to phylogenetic networks, the parsimony criterion produces promising results. In a great majority of the cases in our experiments, the parsimony criterion accurately predicts the numbers and placements of non-tree events.

  2. Parsimonious Ways to Use Vision for Navigation

    Directory of Open Access Journals (Sweden)

    Paul Graham

    2012-05-01

Full Text Available The use of visual information for navigation appears to be a universal strategy for sighted animals, amongst which one particular group of expert navigators are the ants. The broad interest in studies of ant navigation is in part due to their small brains: biomimetic engineers expect to be impressed by elegant control solutions, and psychologists might hope for a description of the minimal cognitive requirements for complex spatial behaviours. In this spirit, we have been taking an interdisciplinary approach to the visually guided navigation of ants in their natural habitat. Behavioural experiments and natural image statistics show that visual navigation need not depend on the remembering or recognition of objects. Further modelling work suggests how simple behavioural routines might enable navigation using familiarity detection rather than explicit recall, and we present a proof of concept that visual navigation using familiarity can be achieved without specifying when or what to learn, nor separating routes into sequences of waypoints. We suggest that our current model represents the only detailed and complete model of insect route guidance to date. What's more, we believe the suggested mechanisms represent useful parsimonious hypotheses for visually guided navigation in larger-brained animals.
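The familiarity-based guidance described above can be caricatured in a few lines: the agent rotates its current view and steers in the direction that looks most familiar, i.e. most similar to some stored snapshot, with no explicit recall of which snapshot or where it was taken. The 1-D panoramic views and squared-difference familiarity measure below are illustrative assumptions, not the authors' model.

```python
# Hedged sketch of familiarity-based heading choice: pick the rotation of
# the current panoramic view that best matches ANY stored view, without
# recalling which one. Views are toy 1-D pixel lists.

def rotate(view, k):
    """Simulate turning by k steps: circularly shift the panoramic view."""
    return view[k:] + view[:k]

def best_heading(view, memory_bank):
    """Return the rotation whose view is most familiar (min pixel difference)."""
    def unfamiliarity(k):
        r = rotate(view, k)
        return min(sum((a - b) ** 2 for a, b in zip(r, mem))
                   for mem in memory_bank)
    return min(range(len(view)), key=unfamiliarity)
```

Repeating this choose-and-step loop along a route is the essence of navigation by familiarity detection rather than explicit recall.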

  3. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

Full Text Available In this study, a parsimonious scheme for the wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into the kernel extreme learning machine (KELM). Wavelet analysis employs bases that are localized in time and frequency to represent various signals effectively, so the wavelet kernel extreme learning machine (WELM) maximizes its capability to capture the essential features of "frequency-rich" signals. The proposed parsimonious algorithm incorporated significant wavelet kernel functions iteratively by means of the Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results on a synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
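The KELM core that PWKELM builds on can be sketched in a few lines: form a kernel matrix, solve a ridge-regularized linear system for the output weights, and predict with kernel evaluations. The Morlet-type wavelet kernel K(x, z) = ∏ᵢ cos(1.75 uᵢ) exp(−uᵢ²/2), uᵢ = (xᵢ − zᵢ)/a, is one commonly used wavelet kernel; the parsimonious Householder-based selection step of the paper is omitted, and all parameter names and values are illustrative.

```python
import numpy as np

# Hedged sketch of a wavelet-kernel extreme learning machine (the sparse
# selection step of PWKELM is NOT implemented here).

def wavelet_kernel(X, Z, a=1.0):
    """Morlet-type wavelet kernel between row-sample matrices X and Z."""
    U = (X[:, None, :] - Z[None, :, :]) / a
    return np.prod(np.cos(1.75 * U) * np.exp(-U ** 2 / 2), axis=2)

def kelm_fit(X, y, C=1e6, a=1.0):
    """Solve (K + I/C) beta = y for the output weights."""
    K = wavelet_kernel(X, X, a)
    return np.linalg.solve(K + np.eye(len(X)) / C, y)

def kelm_predict(X_train, beta, X_new, a=1.0):
    return wavelet_kernel(X_new, X_train, a) @ beta
```

The parsimonious variant would then keep only a subset of the kernel columns, trading a little accuracy for a much sparser, faster model.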

  4. Regularized Estimation of Structural Instability in Factor Models: The US Macroeconomy and the Great Moderation

    DEFF Research Database (Denmark)

    Callot, Laurent; Kristensen, Johannes Tang

This paper shows that the parsimoniously time-varying methodology of Callot and Kristensen (2015) can be applied to factor models. We apply this method to study macroeconomic instability in the US from 1959:1 to 2006:4 with a particular focus on the Great Moderation. Models with parsimoniously time...... that the parameters of both models exhibit a higher degree of instability in the period from 1970:1 to 1984:4 relative to the following 15 years. In our setting the Great Moderation appears as the gradual ending of a period of high structural instability that took place in the 1970s and early 1980s....

  5. Bootstrap-based Support of HGT Inferred by Maximum Parsimony

    Directory of Open Access Journals (Sweden)

    Nakhleh Luay

    2010-05-01

Full Text Available Background: Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. Results: In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. Conclusions: We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.

  6. Bootstrap-based support of HGT inferred by maximum parsimony.

    Science.gov (United States)

    Park, Hyun Jung; Jin, Guohua; Nakhleh, Luay

    2010-05-05

    Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.
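The nonparametric bootstrap procedure described in the two records above has a simple generic skeleton: resample alignment columns with replacement, rerun the inference on each pseudo-alignment, and report the fraction of replicates in which each event reappears. The `infer_events` callable below is a hypothetical stand-in for the paper's parsimony-based network inference.

```python
import random

# Hedged sketch of bootstrap support for inferred events. `infer_events`
# is any function mapping an alignment (taxon -> sequence dict) to a set
# of hashable event descriptors; here it stands in for the parsimony-based
# reticulation inference of the paper.

def bootstrap_support(alignment, infer_events, n_replicates=100, seed=0):
    rng = random.Random(seed)
    n_cols = len(next(iter(alignment.values())))
    counts = {}
    for _ in range(n_replicates):
        # Resample columns with replacement to build a pseudo-alignment
        cols = [rng.randrange(n_cols) for _ in range(n_cols)]
        sample = {taxon: "".join(seq[c] for c in cols)
                  for taxon, seq in alignment.items()}
        for event in infer_events(sample):
            counts[event] = counts.get(event, 0) + 1
    # Support = fraction of replicates in which each event was recovered
    return {e: c / n_replicates for e, c in counts.items()}
```

Events whose support falls below a chosen threshold would then be discarded, replacing the ad hoc improvement-threshold rule the abstract criticizes.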

  7. Calibration of a parsimonious distributed ecohydrological daily model in a data-scarce basin by exclusively using the spatio-temporal variation of NDVI

    Science.gov (United States)

    Ruiz-Pérez, Guiomar; Koch, Julian; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix

    2017-12-01

Ecohydrological modeling studies in developing countries, such as sub-Saharan Africa, often face the problem of extensive parametrical requirements and limited available data. Satellite remote sensing data may be able to fill this gap, but require novel methodologies to exploit their spatio-temporal information that could potentially be incorporated into model calibration and validation frameworks. The present study tackles this problem by suggesting an automatic calibration procedure, based on empirical orthogonal functions, for distributed ecohydrological daily models. The procedure is tested with the support of remote sensing data in a data-scarce environment - the upper Ewaso Ngiro river basin in Kenya. In the present application, the TETIS-VEG model is calibrated using only NDVI (Normalized Difference Vegetation Index) data derived from MODIS. The results demonstrate that (1) satellite data of vegetation dynamics can be used to calibrate and validate ecohydrological models in water-controlled and data-scarce regions, (2) the model calibrated using only satellite data is able to reproduce both the spatio-temporal vegetation dynamics and the observed discharge at the outlet and (3) the proposed automatic calibration methodology works satisfactorily and it allows for a straightforward incorporation of spatio-temporal data into the calibration and validation framework of a model.
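The empirical-orthogonal-function comparison at the heart of such a calibration can be sketched with an SVD: decompose the simulated and observed space-time NDVI anomaly matrices and penalize the mismatch of the leading spatial patterns. The sign alignment, number of modes, and objective form below are illustrative assumptions; the actual TETIS-VEG calibration differs in detail.

```python
import numpy as np

# Hedged sketch of an EOF-based calibration objective comparing simulated
# and observed NDVI fields, each stored as a (time, space) matrix.

def leading_eofs(field, n_modes=3):
    """Return the (space, n_modes) leading EOFs of a (time, space) field."""
    anomalies = field - field.mean(axis=0)
    _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
    return vt[:n_modes].T

def eof_objective(sim, obs, n_modes=3):
    """Sum of squared differences between leading EOFs (sign-aligned,
    since an EOF is only defined up to its sign)."""
    e_sim, e_obs = leading_eofs(sim, n_modes), leading_eofs(obs, n_modes)
    cost = 0.0
    for k in range(n_modes):
        sign = 1.0 if e_sim[:, k] @ e_obs[:, k] >= 0 else -1.0
        cost += np.sum((sign * e_sim[:, k] - e_obs[:, k]) ** 2)
    return cost
```

A calibration loop would then adjust the model parameters to minimize this objective, using no discharge data at all, in the spirit of the NDVI-only calibration described above.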

  8. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  9. Bayesian methods outperform parsimony but at the expense of precision in the estimation of phylogeny from discrete morphological data.

    Science.gov (United States)

    O'Reilly, Joseph E; Puttick, Mark N; Parry, Luke; Tanner, Alastair R; Tarver, James E; Fleming, James; Pisani, Davide; Donoghue, Philip C J

    2016-04-01

    Different analytical methods can yield competing interpretations of evolutionary history and, currently, there is no definitive method for phylogenetic reconstruction using morphological data. Parsimony has been the primary method for analysing morphological data, but there has been a resurgence of interest in the likelihood-based Mk-model. Here, we test the performance of the Bayesian implementation of the Mk-model relative to both equal and implied-weight implementations of parsimony. Using simulated morphological data, we demonstrate that the Mk-model outperforms equal-weights parsimony in terms of topological accuracy, and implied-weights performs the most poorly. However, the Mk-model produces phylogenies that have less resolution than parsimony methods. This difference in the accuracy and precision of parsimony and Bayesian approaches to topology estimation needs to be considered when selecting a method for phylogeny reconstruction. © 2016 The Authors.

  10. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  11. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...

  12. Efficient parsimony-based methods for phylogenetic network reconstruction.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2007-01-15

    Phylogenies--the evolutionary histories of groups of organisms--play a major role in representing relationships among biological entities. Although many biological processes can be effectively modeled as tree-like relationships, others, such as hybrid speciation and horizontal gene transfer (HGT), result in networks, rather than trees, of relationships. Hybrid speciation is a significant evolutionary mechanism in plants, fish and other groups of species. HGT plays a major role in bacterial genome diversification and is a significant mechanism by which bacteria develop resistance to antibiotics. Maximum parsimony is one of the most commonly used criteria for phylogenetic tree inference. Roughly speaking, inference based on this criterion seeks the tree that minimizes the amount of evolution. In 1990, Jotun Hein proposed using this criterion for inferring the evolution of sequences subject to recombination. Preliminary results on small synthetic datasets (Nakhleh et al., 2005) demonstrated the criterion's application to phylogenetic network reconstruction in general and HGT detection in particular. However, the naive algorithms used by the authors are inapplicable to large datasets due to their demanding computational requirements. Further, no rigorous theoretical analysis of computing the criterion was given, nor was it tested on biological data. In the present work we prove that the problem of scoring the parsimony of a phylogenetic network is NP-hard and provide an improved fixed parameter tractable algorithm for it. Further, we devise efficient heuristics for parsimony-based reconstruction of phylogenetic networks. We test our methods on both synthetic and biological data (rbcL gene in bacteria) and obtain very promising results.

  13. Principle of Parsimony, Fake Science, and Scales

    Science.gov (United States)

    Yeh, T. C. J.; Wan, L.; Wang, X. S.

    2017-12-01

    Considering difficulties in predicting exact motions of water molecules, and the scale of our interests (bulk behaviors of many molecules), Fick's law (diffusion concept) was created to predict the solute diffusion process in space and time. G.I. Taylor (1921) demonstrated that the random motion of molecules reaches the Fickian regime in less than a second if our sampling scale is large enough to reach the ergodic condition. Fick's law is widely accepted for describing molecular diffusion as such. This fits the definition of the parsimony principle at the scale of our concern. Similarly, the advection-dispersion or convection-dispersion equation (ADE or CDE) has been found quite satisfactory for analysis of concentration breakthroughs of solute transport in uniformly packed soil columns. This is attributed to the fact that the solute is often released over the entire cross-section of the column, which has sampled many pore-scale heterogeneities and met the ergodicity assumption. Further, the uniformly packed column contains a large number of stationary pore-size heterogeneities. The solute thus reaches the Fickian regime after traveling a short distance along the column. Moreover, breakthrough curves are concentrations integrated over the column cross-section (the scale of our interest), and they meet the ergodicity assumption embedded in the ADE and CDE. To the contrary, scales of heterogeneity in most groundwater pollution problems evolve as contaminants travel. They are much larger than the scale of our observations and our interests, so that the ergodic and Fickian conditions are difficult to meet. Upscaling Fick's law for solute dispersion, and deriving universal rules of dispersion for field- or basin-scale pollution migrations, is merely misuse of the parsimony principle and leads to a fake science (i.e., the development of theories for predicting processes that cannot be observed). The appropriate principle of parsimony for these situations dictates mapping of large

  14. A Practical pedestrian approach to parsimonious regression with inaccurate inputs

    Directory of Open Access Journals (Sweden)

    Seppo Karrila

    2014-04-01

    Full Text Available A measurement result often dictates an interval containing the correct value. Interval data is also created by roundoff, truncation, and binning. We focus on such common interval uncertainty in data. Inaccuracy in model inputs is typically ignored in model fitting. We provide a practical approach for regression with inaccurate data: the mathematics is easy, and the linear programming formulations are simple to use, even in a spreadsheet. This self-contained elementary presentation introduces interval linear systems and requires only basic knowledge of algebra. Feature selection is automatic, but can be controlled to find only the few most relevant inputs, and joint feature selection is enabled for multiple modeled outputs. With more features than cases, a novel connection to compressed sensing emerges: robustness against interval errors-in-variables implies model parsimony, and the input inaccuracies determine the regularization term. A small numerical example highlights counterintuitive results and a dramatic difference from total least squares.
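The article's approach is a linear program. As a loose, hypothetical illustration of the key idea only (interval constraints plus a preference for small coefficients yield parsimony), here is a deliberately tiny one-variable special case, not the article's formulation:

```python
# One-variable sketch: given interval outputs [l_i, u_i] for inputs x_i,
# the feasible slopes w with l_i <= x_i * w <= u_i form an interval; the
# most parsimonious model takes the feasible slope of smallest magnitude,
# dropping the feature entirely (w = 0) when the intervals permit it.
# Data values are invented examples.
def parsimonious_slope(data):
    lo, hi = float("-inf"), float("inf")
    for x, l, u in data:
        if x > 0:
            lo, hi = max(lo, l / x), min(hi, u / x)
        elif x < 0:
            lo, hi = max(lo, u / x), min(hi, l / x)
        elif l > 0 or u < 0:      # x == 0: infeasible unless 0 in [l, u]
            return None
    if lo > hi:
        return None               # interval data are inconsistent
    if lo <= 0 <= hi:
        return 0.0                # parsimony: the feature is dropped
    return lo if lo > 0 else hi   # feasible value closest to zero

print(parsimonious_slope([(1, 0.9, 1.1), (2, 1.8, 2.2)]))  # tight intervals
print(parsimonious_slope([(1, -0.5, 1.5)]))                # wide: drops feature
```

Wider input intervals enlarge the feasible set and push coefficients to zero, which mirrors the article's observation that interval uncertainty itself acts as a regularizer.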

  15. Seeing the elephant: Parsimony, functionalism, and the emergent design of contempt and other sentiments.

    Science.gov (United States)

    Gervais, Matthew M; Fessler, Daniel M T

    2017-01-01

    The target article argues that contempt is a sentiment, and that sentiments are the deep structure of social affect. The 26 commentaries meet these claims with a range of exciting extensions and applications, as well as critiques. Most significantly, we reply that construction and emergence are necessary for, not incompatible with, evolved design, while parsimony requires explanatory adequacy and predictive accuracy, not mere simplicity.

  16. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
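The flavor of the question studied here can be seen in the bottom-up pass of Fitch's algorithm, the classic MP ancestral state procedure for bifurcating trees: the root estimate is unambiguous exactly when the root's candidate set is a singleton. A small sketch with invented data:

```python
# Bottom-up (first) pass of Fitch's algorithm on a rooted bifurcating tree.
# The MP root estimate is unambiguous exactly when the root set is a singleton.
# Tree and leaf states are invented examples.
def fitch_sets(tree, leaf_state):
    sets = {}
    def up(node):
        if node not in tree:  # leaf
            sets[node] = {leaf_state[node]}
            return sets[node]
        left, right = (up(child) for child in tree[node])
        # intersection if nonempty, otherwise union (one extra change)
        sets[node] = (left & right) or (left | right)
        return sets[node]
    up("root")
    return sets

# Three of four leaves share state 'a': the root is unambiguously 'a'
tree = {"root": ["u", "v"], "u": ["L1", "L2"], "v": ["L3", "L4"]}
states = {"L1": "a", "L2": "a", "L3": "a", "L4": "c"}
sets = fitch_sets(tree, states)
print(sets["root"])
```

The conjecture concerns how many leaves must carry state a, as a function of the number of character states, for every most parsimonious reconstruction to return a at the root.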

  17. Direct maximum parsimony phylogeny reconstruction from genotype data

    OpenAIRE

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-01-01

    Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of ge...

  18. Time-Dependent-Asymmetric-Linear-Parsimonious Ancestral State Reconstruction.

    Science.gov (United States)

    Didier, Gilles

    2017-10-01

    The time-dependent-asymmetric-linear parsimony is an ancestral state reconstruction method which extends the standard linear parsimony (a.k.a. Wagner parsimony) approach by taking into account both branch lengths and asymmetric evolutionary costs for reconstructing quantitative characters (asymmetric costs amount to assuming an evolutionary trend toward the direction with the lowest cost). A formal study of the influence of the asymmetry parameter shows that the time-dependent-asymmetric-linear parsimony infers states which are all taken among the known states, except for some degenerate cases corresponding to special values of the asymmetry parameter. This remarkable property holds in particular for the Wagner parsimony. This study leads to a polynomial algorithm which determines, and provides a compact representation of, the parametric reconstruction of a phylogenetic tree, that is, for all the unknown nodes, the set of all the possible reconstructed states associated with the asymmetry parameters leading to them. The time-dependent-asymmetric-linear parsimony is finally illustrated with the parametric reconstruction of the body size of cetaceans.
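As a reference point for the standard Wagner parsimony that this method generalizes, here is a sketch of the symmetric, branch-length-free case via bottom-up interval propagation (Farris intervals); the tree and leaf values are invented examples:

```python
# Symmetric Wagner (linear) parsimony for one quantitative character on a
# rooted binary tree, via bottom-up interval propagation (Farris intervals).
# Tree and leaf values below are invented examples.
def wagner_intervals(tree, leaf_value):
    total = 0.0
    iv = {}
    def up(node):
        nonlocal total
        if node not in tree:  # leaf: degenerate interval
            iv[node] = (leaf_value[node], leaf_value[node])
            return iv[node]
        (lo1, hi1), (lo2, hi2) = (up(child) for child in tree[node])
        lo, hi = max(lo1, lo2), min(hi1, hi2)
        if lo <= hi:          # child intervals overlap: no extra cost
            iv[node] = (lo, hi)
        else:                 # disjoint: pay the gap between them
            total += lo - hi
            iv[node] = (hi, lo)
        return iv[node]
    up("root")
    return total, iv

tree = {"root": ["x", "C"], "x": ["A", "B"]}
values = {"A": 1.0, "B": 5.0, "C": 10.0}
cost, iv = wagner_intervals(tree, values)
print(cost, iv["root"])  # any root state in the interval attains the minimum
```

The interval endpoints are always drawn from the observed leaf values, consistent with the property noted in the abstract that inferred states are taken among the known states.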

  19. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  20. FPGA Hardware Acceleration of a Phylogenetic Tree Reconstruction with Maximum Parsimony Algorithm

    OpenAIRE

    BLOCK, Henry; MARUYAMA, Tsutomu

    2017-01-01

    In this paper, we present an FPGA hardware implementation for a phylogenetic tree reconstruction with a maximum parsimony algorithm. We base our approach on a particular stochastic local search algorithm that uses the Progressive Neighborhood and the Indirect Calculation of Tree Lengths method. This method is widely used for the acceleration of the phylogenetic tree reconstruction algorithm in software. In our implementation, we define a tree structure and accelerate the search by parallel an...

  1. Dynamic term structure models

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller; Meldrum, Andrew

    This paper studies whether dynamic term structure models for US nominal bond yields should enforce the zero lower bound by a quadratic policy rate or a shadow rate specification. We address the question by estimating quadratic term structure models (QTSMs) and shadow rate models with at most four...

  2. Assessing internet addiction using the parsimonious internet addiction components model—A preliminary study.

    OpenAIRE

    Kuss, D.J.; Shorter, G.W.; Rooij, A.J. van; Griffiths, M.D.; Schoenmakers, T.M.

    2014-01-01

    Internet usage has grown exponentially over the last decade. Research indicates that excessive Internet use can lead to symptoms associated with addiction. To date, assessment of potential Internet addiction has varied regarding populations studied and instruments used, making reliable prevalence estimations difficult. To overcome the present problems a preliminary study was conducted testing a parsimonious Internet addiction components model based on Griffiths’ addiction components (Journal ...

  3. Philosophy and phylogenetic inference: a comparison of likelihood and parsimony methods in the context of Karl Popper's writings on corroboration.

    Science.gov (United States)

    de Queiroz, K; Poe, S

    2001-06-01

    Advocates of cladistic parsimony methods have invoked the philosophy of Karl Popper in an attempt to argue for the superiority of those methods over phylogenetic methods based on Ronald Fisher's statistical principle of likelihood. We argue that the concept of likelihood in general, and its application to problems of phylogenetic inference in particular, are highly compatible with Popper's philosophy. Examination of Popper's writings reveals that his concept of corroboration is, in fact, based on likelihood. Moreover, because probabilistic assumptions are necessary for calculating the probabilities that define Popper's corroboration, likelihood methods of phylogenetic inference--with their explicit probabilistic basis--are easily reconciled with his concept. In contrast, cladistic parsimony methods, at least as described by certain advocates of those methods, are less easily reconciled with Popper's concept of corroboration. If those methods are interpreted as lacking probabilistic assumptions, then they are incompatible with corroboration. Conversely, if parsimony methods are to be considered compatible with corroboration, then they must be interpreted as carrying implicit probabilistic assumptions. Thus, the non-probabilistic interpretation of cladistic parsimony favored by some advocates of those methods is contradicted by an attempt by the same authors to justify parsimony methods in terms of Popper's concept of corroboration. In addition to being compatible with Popperian corroboration, the likelihood approach to phylogenetic inference permits researchers to test the assumptions of their analytical methods (models) in a way that is consistent with Popper's ideas about the provisional nature of background knowledge.

  4. Direct maximum parsimony phylogeny reconstruction from genotype data.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-12-05

    Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly are available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower-bound on the number of mutations that the genetic region has undergone.

  5. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    OpenAIRE

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C.

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we ...

  6. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*, but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot .

  7. Parsimony in personality: predicting sexual prejudice.

    Science.gov (United States)

    Miller, Audrey K; Wagner, Maverick M; Hunt, Amy N

    2012-01-01

    Extant research has established numerous demographic, personal-history, attitudinal, and ideological correlates of sexual prejudice, also known as homophobia. The present study investigated whether Five-Factor Model (FFM) personality domains, particularly Openness, and FFM facets, particularly Openness to Values, contribute independent and incremental variance to the prediction of sexual prejudice beyond these established correlates. Participants were 117 college students who completed a comprehensive FFM measure, measures of sexual prejudice, and a demographics, personal-history, and attitudes-and-ideologies questionnaire. Results of stepwise multiple regression analyses demonstrated that, whereas Openness domain score predicted only marginal incremental variance in sexual prejudice, Openness facet scores (particularly Openness to Values) predicted independent and substantial incremental variance beyond numerous other zero-order correlates of sexual prejudice. The importance of integrating FFM personality variables, especially facet-level variables, into conceptualizations of sexual prejudice is highlighted. Study strengths and weaknesses are discussed as are potential implications for prejudice-reduction interventions.

  8. Metallic glasses: structural models

    International Nuclear Information System (INIS)

    Nassif, E.

    1984-01-01

    The aim of this work is to give a summary of the attempts made up to the present to describe, by structural models, the atomic arrangement in metallic glasses, showing also why the structure factors and atomic distribution functions cannot always be experimentally determined with reasonable accuracy. (M.W.O.) [pt

  9. Live phylogeny with polytomies: Finding the most compact parsimonious trees.

    Science.gov (United States)

    Papamichail, D; Huang, A; Kennedy, E; Ott, J-L; Miller, A; Papamichail, G

    2017-08-01

    Construction of phylogenetic trees has traditionally focused on binary trees where all species appear on leaves, a problem for which numerous efficient solutions have been developed. Certain application domains though, such as viral evolution and transmission, paleontology, linguistics, and phylogenetic stemmatics, often require phylogeny inference that involves placing input species on ancestral tree nodes (live phylogeny), and polytomies. These requirements, despite their prevalence, lead to computationally harder algorithmic solutions and have been sparsely examined in the literature to date. In this article we prove some unique properties of most parsimonious live phylogenetic trees with polytomies, and their mapping to traditional binary phylogenetic trees. We show that our problem reduces to finding the most compact parsimonious tree for n species, and describe a novel efficient algorithm to find such trees without resorting to exhaustive enumeration of all possible tree topologies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Structural Equation Model Trees

    Science.gov (United States)

    Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman

    2013-01-01

    In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…

  11. Direct maximum parsimony phylogeny reconstruction from genotype data

    Directory of Open Access Journals (Sweden)

    Ravi R

    2007-12-01

    Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly are available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower-bound on the number of mutations that the genetic region has undergone.

  12. Measurement bias detection with Kronecker product restricted models for multivariate longitudinal data : An illustration with health-related quality of life data from thirteen measurement occasions

    NARCIS (Netherlands)

    Verdam, M.G.E.; Oort, F.J.

    2014-01-01

    Highlights: - Application of Kronecker product to construct parsimonious structural equation models for multivariate longitudinal data. - A method for the investigation of measurement bias with Kronecker product restricted models. - Application of these methods to health-related quality of life data

  13. ECONGAS - model structure

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report documents a numerical simulation model of the natural gas market in Germany, France, the Netherlands and Belgium. It is a part of a project called "Internationalization and structural change in the gas market" aiming to enhance the understanding of the factors behind the current and upcoming changes in the European gas market, especially the downstream part of the gas chain. The model takes European border prices of gas as given, adds transmission and distribution cost and profit margins as well as gas taxes to calculate gas prices. The model includes demand sub-models for households, chemical industry, other industry, the commercial sector and electricity generation. Demand responses to price changes are assumed to take time, and the long run effects are significantly larger than the short run effects. For the household sector and the electricity sector, the dynamics are modeled by distinguishing between energy use in the old and new capital stock. In addition to prices and the activity level (GDP), the model includes the extension of the gas network as a potentially important variable in explaining the development of gas demand. The properties of numerical simulation models are often described by dynamic multipliers, which describe the behaviour of important variables when key explanatory variables are changed. At the end, the report shows the results of a model experiment where the costs in transmission and distribution were reduced. 6 refs., 9 figs., 1 tab.

  14. ECONGAS - model structure

    International Nuclear Information System (INIS)

    1997-01-01

    This report documents a numerical simulation model of the natural gas market in Germany, France, the Netherlands and Belgium. It is a part of a project called ''Internationalization and structural change in the gas market'' aiming to enhance the understanding of the factors behind the current and upcoming changes in the European gas market, especially the downstream part of the gas chain. The model takes European border prices of gas as given, adds transmission and distribution cost and profit margins as well as gas taxes to calculate gas prices. The model includes demand sub-models for households, chemical industry, other industry, the commercial sector and electricity generation. Demand responses to price changes are assumed to take time, and the long run effects are significantly larger than the short run effects. For the household sector and the electricity sector, the dynamics are modeled by distinguishing between energy use in the old and new capital stock. In addition to prices and the activity level (GDP), the model includes the extension of the gas network as a potentially important variable in explaining the development of gas demand. The properties of numerical simulation models are often described by dynamic multipliers, which describe the behaviour of important variables when key explanatory variables are changed. At the end, the report shows the results of a model experiment where the costs in transmission and distribution were reduced. 6 refs., 9 figs., 1 tab

  15. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2013-06-01

    Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  16. Mixed integer linear programming for maximum-parsimony phylogeny inference.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2008-01-01

    Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.

  17. PTree: pattern-based, stochastic search for maximum parsimony phylogenies.

    Science.gov (United States)

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  18. Failed refutations: further comments on parsimony and likelihood methods and their relationship to Popper's degree of corroboration.

    Science.gov (United States)

    de Queiroz, Kevin; Poe, Steven

    2003-06-01

    Kluge's (2001, Syst. Biol. 50:322-330) continued arguments that phylogenetic methods based on the statistical principle of likelihood are incompatible with the philosophy of science described by Karl Popper are based on false premises related to Kluge's misrepresentations of Popper's philosophy. Contrary to Kluge's conjectures, likelihood methods are not inherently verificationist; they do not treat every instance of a hypothesis as confirmation of that hypothesis. The historical nature of phylogeny does not preclude phylogenetic hypotheses from being evaluated using the probability of evidence. The low absolute probabilities of hypotheses are irrelevant to the correct interpretation of Popper's concept termed degree of corroboration, which is defined entirely in terms of relative probabilities. Popper did not advocate minimizing background knowledge; in any case, the background knowledge of both parsimony and likelihood methods consists of the general assumption of descent with modification and additional assumptions that are deterministic, concerning which tree is considered most highly corroborated. Although parsimony methods do not assume (in the sense of entailing) that homoplasy is rare, they do assume (in the sense of requiring to obtain a correct phylogenetic inference) certain things about patterns of homoplasy. Both parsimony and likelihood methods assume (in the sense of implying by the manner in which they operate) various things about evolutionary processes, although violation of those assumptions does not always cause the methods to yield incorrect phylogenetic inferences. Test severity is increased by sampling additional relevant characters rather than by character reanalysis, although either interpretation is compatible with the use of phylogenetic likelihood methods. Neither parsimony nor likelihood methods assess test severity (critical evidence) when used to identify a most highly corroborated tree(s) based on a single method or model and a

  19. Structural system identification: Structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  20. A large version of the small parsimony problem

    DEFF Research Database (Denmark)

    Fredslund, Jakob; Hein, Jotun; Scharling, Tejs

    2003-01-01

    Given a multiple alignment over $k$ sequences, an evolutionary tree relating the sequences, and a subadditive gap penalty function (e.g. an affine function), we reconstruct the internal nodes of the tree optimally: we find the optimal explanation in terms of indels of the observed gaps and find the most parsimonious assignment of nucleotides. The gaps of the alignment are represented in a so-called gap graph, and through theoretically sound preprocessing the graph is reduced to pave the way for a running time which in all but the most pathological examples is far better than the exponential worst case time. E.g. for a tree with nine leaves and a random alignment of length 10.000 with 60% gaps, the running time is on average around 45 seconds. For a real alignment of length 9868 of nine HIV-1 sequences, the running time is less than one second.

  1. Things fall apart: biological species form unconnected parsimony networks.

    Science.gov (United States)

    Hart, Michael W; Sunday, Jennifer

    2007-10-22

    The generality of operational species definitions is limited by problematic definitions of between-species divergence. A recent phylogenetic species concept based on a simple objective measure of statistically significant genetic differentiation uses between-species application of statistical parsimony networks that are typically used for population genetic analysis within species. Here we review recent phylogeographic studies and reanalyse several mtDNA barcoding studies using this method. We found that (i) alignments of DNA sequences typically fall apart into a separate subnetwork for each Linnean species (but with a higher rate of true positives for mtDNA data) and (ii) DNA sequences from single species typically stick together in a single haplotype network. Departures from these patterns are usually consistent with hybridization or cryptic species diversity.
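
The "falling apart" of alignments into subnetworks can be illustrated with a minimal sketch: connect sequences whose Hamming distance is within a connection limit and report connected components. The toy threshold below is illustrative, not the statistical-parsimony (TCS-style) connection limit itself.

```python
# Union-find over sequences; an edge joins any pair within `limit` steps.
# Each resulting component is one "subnetwork" in the sense discussed above.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def parsimony_components(seqs, limit):
    parent = list(range(len(seqs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(len(seqs)):
        for j in range(i + 1, len(seqs)):
            if hamming(seqs[i], seqs[j]) <= limit:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(seqs)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Two haplotype clusters separated by many substitutions fall apart into
# separate subnetworks, mirroring the species-delimitation argument.
seqs = ["AAAA", "AAAT", "GGGG", "GGGC"]
print(parsimony_components(seqs, limit=1))  # [[0, 1], [2, 3]]
```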

  2. On the Accuracy of Ancestral Sequence Reconstruction for Ultrametric Trees with Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2018-04-01

    We examine a mathematical question concerning the reconstruction accuracy of the Fitch algorithm for reconstructing the ancestral sequence of the most recent common ancestor given a phylogenetic tree and sequence data for all taxa under consideration. In particular, for the symmetric four-state substitution model which is also known as Jukes-Cantor model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that for any ultrametric phylogenetic tree and a symmetric model, the Fitch parsimony method using all terminal taxa is more accurate, or at least as accurate, for ancestral state reconstruction than using any particular terminal taxon or any particular pair of taxa. This conjecture had so far only been answered for two-state data by Fischer and Thatte. Here, we focus on answering the biologically more relevant case with four states, which corresponds to ancestral sequence reconstruction from DNA or RNA data.
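
The Fitch pass at the heart of this result can be sketched in a few lines. The tuple-based tree encoding below is our own illustration, not the paper's notation: a bottom-up sweep returns the candidate ancestral states at the root together with the number of mutations implied.

```python
# Fitch parsimony on a rooted binary tree: a leaf is a state string,
# an internal node is a (left, right) pair.

def fitch(tree):
    """Return (set of candidate root states, parsimony cost)."""
    if isinstance(tree, str):                   # leaf: known state, zero cost
        return {tree}, 0
    (l_set, l_cost), (r_set, r_cost) = fitch(tree[0]), fitch(tree[1])
    inter = l_set & r_set
    if inter:                                   # children agree: no mutation
        return inter, l_cost + r_cost
    return l_set | r_set, l_cost + r_cost + 1   # conflict: one extra mutation

# ((A,A),(A,G)): the root is reconstructed as A with a single mutation.
states, cost = fitch((("A", "A"), ("A", "G")))
print(states, cost)  # {'A'} 1
```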

  3. A structural model of the dimensions of teacher stress.

    Science.gov (United States)

    Boyle, G J; Borg, M G; Falzon, J M; Baglioni, A J

    1995-03-01

    A comprehensive survey of teacher stress, job satisfaction and career commitment among 710 full-time primary school teachers was undertaken by Borg, Riding & Falzon (1991) in the Mediterranean islands of Malta and Gozo. A principal components analysis of a 20-item sources of teacher stress inventory had suggested four distinct dimensions which were labelled: Pupil Misbehaviour, Time/Resource Difficulties, Professional Recognition Needs, and Poor Relationships, respectively. To check on the validity of the Borg et al. factor solution, the group of 710 teachers was randomly split into two separate samples. Exploratory factor analysis was carried out on the data from Sample 1 (N = 335), while Sample 2 (N = 375) provided the cross-validational data for a LISREL confirmatory factor analysis. Results supported the proposed dimensionality of the sources of teacher stress (measurement model), along with evidence of an additional teacher stress factor (Workload). Consequently, structural modelling of the 'causal relationships' between the various latent variables and self-reported stress was undertaken on the combined samples (N = 710). Although both non-recursive and recursive models incorporating Poor Colleague Relations as a mediating variable were tested for their goodness-of-fit, a simple regression model provided the most parsimonious fit to the empirical data, wherein Workload and Student Misbehaviour accounted for most of the variance in predicting teaching stress.

  4. On the quirks of maximum parsimony and likelihood on phylogenetic networks.

    Science.gov (United States)

    Bryant, Christopher; Fischer, Mareike; Linz, Simone; Semple, Charles

    2017-03-21

    Maximum parsimony is one of the most frequently discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become more and more apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer that are commonplace in many groups of organisms and result in mosaic patterns of relationships cannot be represented by a single phylogenetic tree. This is why phylogenetic networks, which can display such events, are becoming of more and more interest in phylogenetic research. It is therefore necessary to extend concepts like maximum parsimony from phylogenetic trees to networks. Several suggestions for possible extensions can be found in recent literature, for instance the softwired and the hardwired parsimony concepts. In this paper, we analyze the so-called big parsimony problem under these two concepts, i.e. we investigate maximum parsimonious networks and analyze their properties. In particular, we show that finding a softwired maximum parsimony network is possible in polynomial time. We also show that the set of maximum parsimony networks for the hardwired definition always contains at least one phylogenetic tree. Lastly, we investigate some parallels of parsimony to different likelihood concepts on phylogenetic networks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. PRODUCT STRUCTURE DIGITAL MODEL

    Directory of Open Access Journals (Sweden)

    V.M. Sineglazov

    2005-02-01

    Full Text Available Research results on the representation of product structure by means of the CADDS5 computer-aided design (CAD) system, the Optegra product data management (PDM) system and the Windchill product life cycle management (PLM) system are examined in this work. Analysis of structure component development and its storage in various systems is carried out. Algorithms of structure transformation required for correct representation of the structure are considered. Management analysis of the electronic mockup presentation of the product structure is carried out for the Windchill system.

  6. Integrated materials–structural models

    DEFF Research Database (Denmark)

    Stang, Henrik; Geiker, Mette Rica

    2008-01-01

    …repair works and strengthening methods for structures. A very significant part of the infrastructure consists of reinforced concrete structures. Even though reinforced concrete structures typically are very competitive, certain concrete structures suffer from various types of degradation. A framework … should define a framework in which materials research results eventually should fit in and, on the other side, the materials research should define needs and capabilities in structural modelling. Integrated materials-structural models of a general nature are almost non-existent in the field of cement based …

  7. Modeling Structural Brain Connectivity

    DEFF Research Database (Denmark)

    Ambrosen, Karen Marie Sandø

    The human brain consists of a gigantic complex network of interconnected neurons. Together all these connections determine who we are, how we react and how we interpret the world. Knowledge about how the brain is connected can further our understanding of the brain's structural organization, help improve diagnosis, and potentially allow better treatment of a wide range of neurological disorders. Tractography based on diffusion magnetic resonance imaging is a unique tool to estimate this "structural connectivity" of the brain non-invasively and in vivo. During the last decade, brain connectivity has increasingly been analyzed using graph theoretic measures adopted from network science, and this characterization of the brain's structural connectivity has been shown to be useful for the classification of populations, such as healthy and diseased subjects. The structural connectivity of the brain…

  8. Challenges in modelling the random structure correctly in growth mixture models and the impact this has on model mixtures.

    Science.gov (United States)

    Gilthorpe, M S; Dahly, D L; Tu, Y K; Kubzansky, L D; Goodman, E

    2014-06-01

    Lifecourse trajectories of clinical or anthropological attributes are useful for identifying how our early-life experiences influence later-life morbidity and mortality. Researchers often use growth mixture models (GMMs) to estimate such phenomena. It is common to place constraints on the random part of the GMM to improve parsimony or to aid convergence, but this can lead to an autoregressive structure that distorts the nature of the mixtures and subsequent model interpretation. This is especially true if changes in the outcome within individuals are gradual compared with the magnitude of differences between individuals. This is not widely appreciated, nor is its impact well understood. Using repeat measures of body mass index (BMI) for 1528 US adolescents, we estimated GMMs that required variance-covariance constraints to attain convergence. We contrasted constrained models with and without an autocorrelation structure to assess the impact this had on the ideal number of latent classes, their size and composition. We also contrasted model options using simulations. When the GMM variance-covariance structure was constrained, a within-class autocorrelation structure emerged. When not modelled explicitly, this led to poorer model fit and models that differed substantially in the ideal number of latent classes, as well as class size and composition. Failure to carefully consider the random structure of data within a GMM framework may lead to erroneous model inferences, especially for outcomes with greater within-person than between-person homogeneity, such as BMI. It is crucial to reflect on the underlying data generation processes when building such models.

  9. Oscillating water column structural model

    Energy Technology Data Exchange (ETDEWEB)

    Copeland, Guild [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bull, Diana L [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jepsen, Richard Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gordon, Margaret Ellen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    An oscillating water column (OWC) wave energy converter is a structure with an opening to the ocean below the free surface, i.e. a structure with a moonpool. Two structural models for a non-axisymmetric terminator design OWC, the Backward Bent Duct Buoy (BBDB), are discussed in this report. The results of this structural model design study are intended to inform experiments and modeling underway in support of the U.S. Department of Energy (DOE) initiated Reference Model Project (RMP). A detailed design developed by Re Vision Consulting used stiffeners and girders to stabilize the structure against the hydrostatic loads experienced by a BBDB device. Additional support plates were added to this structure to account for loads arising from the mooring line attachment points. A simplified structure was designed in a modular fashion. This simplified design allows easy alterations to the buoyancy chambers and uncomplicated analysis of resulting changes in buoyancy.

  10. Structural dynamic modifications via models

    Indian Academy of Sciences (India)

    The study shows that as many as half of the matrix … the dynamicist's analytical modelling skill, which would appear both in the numerator as … Brandon J A 1990 Strategies for structural dynamic modification (New York: John Wiley).

  11. Structure-Based Turbulence Model

    National Research Council Canada - National Science Library

    Reynolds, W

    2000-01-01

    … Maire carried out this work as part of his PhD research. During the award period we began to explore ways to simplify the structure-based modeling so that it could be used in repetitive engineering calculations …

  12. Probabilistic modeling of timber structures

    DEFF Research Database (Denmark)

    Köhler, Jochen; Sørensen, John Dalsgaard; Faber, Michael Havbro

    2007-01-01

    The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) [Joint Committee of Structural Safety. Probabilistic Model Code, Internet Publication: www.jcss.ethz.ch; 2001] and of the COST action E24 ‘Reliability of Timber Structures' [COST Action E 24, Reliability of timber structures. Several meetings and Publications, Internet Publication: http://www.km.fgg.uni-lj.si/coste24/coste24.htm; 2005]. The present proposal is based on discussions and comments from participants of the COST E24 action and the members of the JCSS. The paper contains a description of the basic reference properties for timber strength parameters and ultimate limit state equations for timber components. The recommended probabilistic model for these basic properties…

  13. Temporal structures in shell models

    DEFF Research Database (Denmark)

    Okkels, F.

    2001-01-01

    The intermittent dynamics of the turbulent Gledzer, Ohkitani, and Yamada shell-model is completely characterized by a single type of burstlike structure, which moves through the shells like a front. This temporal structure is described by the dynamics of the instantaneous configuration of the shell...

  14. Structuring very large domain models

    DEFF Research Database (Denmark)

    Störrle, Harald

    2010-01-01

    View/Viewpoint approaches like IEEE 1471-2000, or Kruchten's 4+1 view model, are used to structure software architectures at a high level of granularity. While research has focused on architectural languages and on consistency between multiple views, practical questions such as the structuring of a…

  15. Balancing practicality and hydrologic realism: a parsimonious approach for simulating rapid groundwater recharge via unsaturated-zone preferential flow

    Science.gov (United States)

    Mirus, Benjamin B.; Nimmo, J.R.

    2013-01-01

    The impact of preferential flow on recharge and contaminant transport poses a considerable challenge to water-resources management. Typical hydrologic models require extensive site characterization, but can underestimate fluxes when preferential flow is significant. A recently developed source-responsive model incorporates film-flow theory with conservation of mass to estimate unsaturated-zone preferential fluxes with readily available data. The term source-responsive describes the sensitivity of preferential flow in response to water availability at the source of input. We present the first rigorous tests of a parsimonious formulation for simulating water table fluctuations using two case studies, both in arid regions with thick unsaturated zones of fractured volcanic rock. Diffuse flow theory cannot adequately capture the observed water table responses at both sites; the source-responsive model is a viable alternative. We treat the active area fraction of preferential flow paths as a scaled function of water inputs at the land surface then calibrate the macropore density to fit observed water table rises. Unlike previous applications, we allow the characteristic film-flow velocity to vary, reflecting the lag time between source and deep water table responses. Analysis of model performance and parameter sensitivity for the two case studies underscores the importance of identifying thresholds for initiation of film flow in unsaturated rocks, and suggests that this parsimonious approach is potentially of great practical value.
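
A minimal numerical sketch of the source-responsive idea described above: preferential flux is carried along macropore walls, with the active fraction of flow paths scaled by water input above an initiation threshold. The functional form, threshold and parameter values here are illustrative assumptions, not the calibrated model from the paper.

```python
# Toy source-responsive preferential flux: density * film velocity * active
# fraction, where the active fraction responds to input above a threshold.

def preferential_flux(input_rate, threshold, macropore_density, film_velocity):
    """Return a preferential-flow flux; zero below the initiation threshold."""
    if input_rate <= threshold:
        return 0.0                                  # film flow not initiated
    active_fraction = min(1.0, (input_rate - threshold) / threshold)
    return macropore_density * film_velocity * active_fraction

# Below the threshold no preferential recharge occurs; above it the flux
# responds promptly to water availability at the source of input.
print(preferential_flux(0.5e-6, 1e-6, 1e-4, 1e-3))     # 0.0
print(preferential_flux(3e-6, 1e-6, 1e-4, 1e-3) > 0)   # True
```

The threshold term is the key design choice: it encodes the paper's point that identifying when film flow initiates matters more than resolving diffuse-flow detail.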

  16. Fatgraph models of RNA structure

    Directory of Open Access Journals (Sweden)

    Huang Fenix

    2017-01-01

    Full Text Available In this review paper we discuss fatgraphs as a conceptual framework for RNA structures. We discuss various notions of coarse-grained RNA structures and relate them to fatgraphs. We motivate and discuss the main intuition behind the fatgraph model and showcase its applicability to canonical as well as noncanonical base pairs. Recent discoveries regarding novel recursions of pseudoknotted (pk) configurations, as well as their translation into context-free grammars for pk-structures, are discussed. This is shown to allow for extending the concept of partition functions of sequences w.r.t. a fixed structure having non-crossing arcs to pk-structures. We discuss minimum free energy folding of pk-structures and combine the above results, outlining how to obtain an inverse folding algorithm for pk-structures.
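
One coarse-grained notion from the discussion above can be made concrete in a few lines: represent a secondary structure as a set of base-pair arcs (i, j) over the backbone, with a pseudoknot present exactly when two arcs cross (i < k < j < l). The encoding is our own sketch, not the fatgraph formalism itself.

```python
# Pseudoknot detection on an arc set: nested or disjoint arcs are pk-free,
# crossing arcs indicate a pseudoknotted configuration.

def is_pseudoknotted(arcs):
    """True if any two arcs (i, j) and (k, l) cross, i.e. i < k < j < l."""
    for (i, j) in arcs:
        for (k, l) in arcs:
            if i < k < j < l:
                return True
    return False

print(is_pseudoknotted([(0, 7), (1, 6), (2, 5)]))   # False (nested helix)
print(is_pseudoknotted([(0, 4), (2, 6)]))           # True  (crossing arcs)
```

Non-crossing arc sets are exactly the structures generated by classical context-free grammars; the crossing case is what the pk-grammars discussed above are built to capture.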

  17. Handbook of structural equation modeling

    CERN Document Server

    Hoyle, Rick H

    2012-01-01

    The first comprehensive structural equation modeling (SEM) handbook, this accessible volume presents both the mechanics of SEM and specific SEM strategies and applications. The editor, contributors, and editorial advisory board are leading methodologists who have organized the book to move from simpler material to more statistically complex modeling approaches. Sections cover the foundations of SEM; statistical underpinnings, from assumptions to model modifications; steps in implementation, from data preparation through writing the SEM report; and basic and advanced applications, inclu

  18. Probabilistic Modeling of Timber Structures

    DEFF Research Database (Denmark)

    Köhler, J.D.; Sørensen, John Dalsgaard; Faber, Michael Havbro

    2005-01-01

    The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) and of the COST action E24 'Reliability of Timber Structures'. The present proposal is based on discussions and comments from participants of the COST E24 action and the members of the JCSS. The paper contains a description of the basic reference properties for timber strength parameters and ultimate limit state equations for components and connections. The recommended…

  19. Parsimonious data: How a single Facebook like predicts voting behavior in multiparty systems.

    Directory of Open Access Journals (Sweden)

    Jakob Bæk Kristensen

    Full Text Available This study shows how liking politicians' public Facebook posts can be used as an accurate measure for predicting present-day voter intention in a multiparty system. We highlight that a few, but selective digital traces produce prediction accuracies that are on par with or even greater than most current approaches based upon bigger and broader datasets. Combining the online and offline, we connect a subsample of surveyed respondents to their public Facebook activity and apply machine learning classifiers to explore the link between their political liking behaviour and actual voting intention. Through this work, we show that even a single selective Facebook like can reveal as much about political voter intention as hundreds of heterogeneous likes. Further, by including the entire political like history of the respondents, our model reaches prediction accuracies above previous multiparty studies (60-70%. The main contribution of this paper is to show how public like-activity on Facebook allows political profiling of individual users in a multiparty system with accuracies above previous studies. Besides increased accuracies, the paper shows how such parsimonious measures allow us to generalize our findings to the entire population of a country and even across national borders, to other political multiparty systems. The approach in this study relies on data that are publicly available, and the simple setup we propose can, with some limitations, be generalized to millions of users in other multiparty systems.
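
The headline claim, that a single selective like can be highly predictive, can be illustrated with a toy Bayes-rule classifier over one binary feature. The data and names below are fabricated for illustration only and bear no relation to the study's actual dataset or classifiers.

```python
# Single-feature voter-intention sketch: learn the majority vote conditional
# on whether the respondent liked one selective political post.
from collections import Counter

def train_single_like(likes, votes):
    """Map liked/not-liked to the majority vote observed in each group."""
    by_state = {True: Counter(), False: Counter()}
    for liked, vote in zip(likes, votes):
        by_state[liked][vote] += 1
    return {state: counts.most_common(1)[0][0]
            for state, counts in by_state.items()}

def predict(model, liked):
    return model[liked]

# Liking party A's leader is highly (not perfectly) aligned with voting A.
likes = [True, True, True, True, False, False, False, False]
votes = ["A", "A", "A", "B", "B", "B", "B", "A"]
model = train_single_like(likes, votes)
print(predict(model, True), predict(model, False))  # A B
```

A single well-chosen feature separates the toy groups despite the noise, which is the parsimony argument of the paper in miniature.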

  20. Fast Construction of Near Parsimonious Hybridization Networks for Multiple Phylogenetic Trees.

    Science.gov (United States)

    Mirzaei, Sajad; Wu, Yufeng

    2016-01-01

    Hybridization networks represent plausible evolutionary histories of species that are affected by reticulate evolutionary processes. An established computational problem on hybridization networks is constructing the most parsimonious hybridization network such that each of the given phylogenetic trees (called gene trees) is "displayed" in the network. There have been several previous approaches, including an exact method and several heuristics, for this NP-hard problem. However, the exact method is only applicable to a limited range of data, and heuristic methods can be less accurate and also slow sometimes. In this paper, we develop a new algorithm for constructing near parsimonious networks for multiple binary gene trees. This method is more efficient for large numbers of gene trees than previous heuristics. This new method also produces more parsimonious results on many simulated datasets as well as a real biological dataset than a previous method. We also show that our method produces topologically more accurate networks for many datasets.

  1. Unified theory for stochastic modelling of hydroclimatic processes: Preserving marginal distributions, correlation structures, and intermittency

    Science.gov (United States)

    Papalexiou, Simon Michael

    2018-05-01

    Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
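
The parent-Gaussian strategy described above can be sketched in a few lines: simulate a Gaussian AR(1) "parent" process, then map each value through the normal CDF and the inverse CDF of the target marginal (here exponential). The AR(1) parent, rate parameter and sample size are illustrative choices, not the paper's parametric correlation transformation functions.

```python
# Back-transform a stationary Gaussian AR(1) parent to a process with an
# exponential marginal, preserving positive autocorrelation.
import math
import random
from statistics import NormalDist

def simulate_target(n, rho, rate, rng):
    """Stationary process with exponential(rate) marginal via back transform."""
    nd = NormalDist()
    z = rng.gauss(0, 1)                     # start the parent in stationarity
    out = []
    for _ in range(n):
        z = rho * z + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)  # parent AR(1)
        u = nd.cdf(z)                        # probability integral transform
        out.append(-math.log(1 - u) / rate)  # inverse exponential CDF
    return out

xs = simulate_target(5000, rho=0.7, rate=2.0, rng=random.Random(0))
print(all(x >= 0 for x in xs))              # exponential support
print(abs(sum(xs) / len(xs) - 0.5) < 0.1)   # sample mean near 1/rate
```

Swapping in a different inverse CDF (mixed-type, discrete, binary) changes only the last transform, which is the generality the framework claims.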

  2. Track structure in biological models.

    Science.gov (United States)

    Curtis, S B

    1986-01-01

    High-energy heavy ions in the galactic cosmic radiation (HZE particles) may pose a special risk during long term manned space flights outside the sheltering confines of the earth's geomagnetic field. These particles are highly ionizing, and they and their nuclear secondaries can penetrate many centimeters of body tissue. The three dimensional patterns of ionizations they create as they lose energy are referred to as their track structure. Several models of biological action on mammalian cells attempt to treat track structure or related quantities in their formulation. The methods by which they do this are reviewed. The proximity function is introduced in connection with the theory of Dual Radiation Action (DRA). The ion-gamma kill (IGK) model introduces the radial energy-density distribution, which is a smooth function characterizing both the magnitude and extension of a charged particle track. The lethal, potentially lethal (LPL) model introduces lambda, the mean distance between relevant ion clusters or biochemical species along the track. Since very localized energy depositions (within approximately 10 nm) are emphasized, the proximity function as defined in the DRA model is not of utility in characterizing track structure in the LPL formulation.

  3. Structure and modeling of turbulence

    International Nuclear Information System (INIS)

    Novikov, E.A.

    1995-01-01

    The "vortex strings" scale l_s ~ L Re^(-3/10) (L: external scale, Re: Reynolds number) is suggested as a grid scale for the large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments and infinitely divisible distributions. The major request from the energy, naval, aerospace and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for the large-eddy simulations (LES)

  4. A model for adatom structures

    Science.gov (United States)

    Kappus, W.

    1981-06-01

    A model concerning adatom structures is proposed. Attractive nearest-neighbour interactions, which may be of electronic nature, lead to 2-dimensional condensation. Every pair bond causes an elastic dipole. The elastic dipoles interact via substrate strains with an anisotropic s^(-3) power law. Different types of adatoms or sites are permitted, and many-body effects result from the assumptions. Electric dipole interactions of adatoms are included for comparison. The model is applied to the W(110) surface and compared with superstructures experimentally found in the W(110)-O system. It is found that an additional next-nearest-neighbour interaction is still lacking.
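
The strain-mediated interaction above can be sketched numerically: pairwise dipole-dipole energies decaying as s^(-3), summed over adatom pairs on a 2-D lattice. For simplicity the sketch uses an isotropic toy form with a made-up amplitude, dropping the anisotropy of the actual model.

```python
# Sum of A / s^3 pair interactions between adatoms at 2-D lattice positions.

def strain_energy(positions, amplitude=1.0):
    """Total pairwise interaction energy under an isotropic s**-3 law."""
    total = 0.0
    for a in range(len(positions)):
        for b in range(a + 1, len(positions)):
            (x1, y1), (x2, y2) = positions[a], positions[b]
            s = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            total += amplitude / s ** 3
    return total

# Doubling all separations reduces every pair term, and thus the total,
# by a factor of eight, as expected for an s**-3 law.
near = strain_energy([(0, 0), (1, 0), (0, 1)])
far = strain_energy([(0, 0), (2, 0), (0, 2)])
print(round(near / far, 6))  # 8.0
```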

  5. Consequence Valuing as Operation and Process: A Parsimonious Analysis of Motivation

    Science.gov (United States)

    Whelan, Robert; Barnes-Holmes, Dermot

    2010-01-01

    The concept of the motivating operation (MO) has been subject to 3 criticisms: (a) the terms and concepts employed do not always overlap with traditional behavior-analytic verbal practices; (b) the dual nature of the MO is unclear; and (c) there is a lack of adequate contact with empirical data. We offer a more parsimonious approach to motivation,…

  6. A Parsimonious Instrument for Predicting Students' Intent to Pursue a Sales Career: Scale Development and Validation

    Science.gov (United States)

    Peltier, James W.; Cummins, Shannon; Pomirleanu, Nadia; Cross, James; Simon, Rob

    2014-01-01

    Students' desire and intention to pursue a career in sales continue to lag behind industry demand for sales professionals. This article develops and validates a reliable and parsimonious scale for measuring and predicting student intention to pursue a selling career. The instrument advances previous scales in three ways. The instrument is…

  7. Vector Autoregressions with Parsimoniously Time Varying Parameters and an Application to Monetary Policy

    DEFF Research Database (Denmark)

    Callot, Laurent; Kristensen, Johannes Tang

    the monetary policy response to inflation and business cycle fluctuations in the US by estimating a parsimoniously time varying parameter Taylor rule.We document substantial changes in the policy response of the Fed in the 1970s and 1980s, and since 2007, but also document the stability of this response...

  8. Time-Lapse Monitoring of Subsurface Fluid Flow using Parsimonious Seismic Interferometry

    KAUST Repository

    Hanafy, Sherif; Li, Jing; Schuster, Gerard T.

    2017-01-01

of parsimonious seismic interferometry with the time-lapse monitoring idea with field examples, where we were able to record 30 different data sets within a 2-hour period. The recorded data are then processed to generate 30 snapshots that show the spread of water

  9. Parsimonious wave-equation travel-time inversion for refraction waves

    KAUST Repository

    Fu, Lei; Hanafy, Sherif M.; Schuster, Gerard T.

    2017-01-01

    We present a parsimonious wave-equation travel-time inversion technique for refraction waves. A dense virtual refraction dataset can be generated from just two reciprocal shot gathers for the sources at the endpoints of the survey line, with N

  10. Application of parsimonious learning feedforward control to mechatronic systems

    NARCIS (Netherlands)

    de Vries, Theodorus J.A.; Velthuis, W.J.R.; Idema, L.J.

    2001-01-01

    For motion control, learning feedforward controllers (LFFCs) should be applied when accurate process modelling is difficult. When controlling such processes with LFFCs in the form of multidimensional B-spline networks, large network sizes and a poor generalising ability may result, known as the

  11. Prediction of traffic-related nitrogen oxides concentrations using Structural Time-Series models

    Science.gov (United States)

    Lawson, Anneka Ruth; Ghosh, Bidisha; Broderick, Brian

    2011-09-01

    Ambient air quality monitoring, modeling and compliance to the standards set by European Union (EU) directives and World Health Organization (WHO) guidelines are required to ensure the protection of human and environmental health. Congested urban areas are most susceptible to traffic-related air pollution which is the most problematic source of air pollution in Ireland. Long-term continuous real-time monitoring of ambient air quality at such urban centers is essential but often not realistic due to financial and operational constraints. Hence, the development of a resource-conservative ambient air quality monitoring technique is essential to ensure compliance with the threshold values set by the standards. As an intelligent and advanced statistical methodology, a Structural Time Series (STS) based approach has been introduced in this paper to develop a parsimonious and computationally simple air quality model. In STS methodology, the different components of a time-series dataset such as the trend, seasonal, cyclical and calendar variations can be modeled separately. To test the effectiveness of the proposed modeling strategy, average hourly concentrations of nitrogen dioxide and nitrogen oxides from a congested urban arterial in Dublin city center were modeled using STS methodology. The prediction error estimates from the developed air quality model indicate that the STS model can be a useful tool in predicting nitrogen dioxide and nitrogen oxides concentrations in urban areas and will be particularly useful in situations where the information on external variables such as meteorology or traffic volume is not available.
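The STS approach described above models the trend, seasonal and other components in state-space form and filters them recursively. As a minimal illustration of the simplest such building block, a local level model, here is a hand-rolled Kalman filter sketch; the variable names, noise variances and the NO2-like values are illustrative assumptions, not the paper's calibration:

```python
def local_level_filter(y, q, r):
    """Kalman filter for the local level model:
    y_t = mu_t + eps_t (obs. variance r), mu_t = mu_{t-1} + eta_t (state variance q).
    Returns the final level estimate and the one-step-ahead predictions."""
    a, p = y[0], r               # initialize level and its variance
    preds = []
    for obs in y[1:]:
        p_pred = p + q           # time update: level uncertainty grows by q
        preds.append(a)          # one-step-ahead prediction of the next observation
        k = p_pred / (p_pred + r)    # Kalman gain
        a = a + k * (obs - a)        # measurement update of the level
        p = (1 - k) * p_pred
    return a, preds

# Hourly NO2-like series hovering around 40 ug/m3 (synthetic data)
level, preds = local_level_filter([40.0, 42.0, 39.0, 41.0, 40.5], q=0.1, r=4.0)
```

A full STS air-quality model would add seasonal, cyclical and calendar components to the same state vector rather than filtering the level alone.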

  12. Parsimonious evaluation of concentric-tube continuum robot equilibrium conformation.

    Science.gov (United States)

Rucker, Daniel Caleb; Webster, Robert J., III

    2009-09-01

    Dexterous at small diameters, continuum robots consisting of precurved concentric tubes are well-suited for minimally invasive surgery. These active cannulas are actuated by relative translations and rotations applied at the tube bases, which create bending via elastic tube interaction. An accurate kinematic model of cannula shape is required for applications in surgical and other settings. Previous models are limited to circular tube precurvatures, and neglect torsional deformation in curved sections. Recent generalizations account for arbitrary tube preshaping and bending and torsion throughout the cannula, providing differential equations that define cannula shape. In this paper, we show how to simplify these equations using Frenet-Serret frames. An advantage of this approach is the interpretation of torsional components of the preset tube shapes as "forcing functions" on the cannula's differential equations. We also elucidate a process for numerically solving the differential equations, and use it to produce simulations illustrating the implications of torsional deformation and helical tube shapes.
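The paper's full model couples bending and torsion through differential equations; in the much simpler special case of torsionally rigid, rotationally aligned tubes, the resultant curvature reduces to a stiffness-weighted average of the tube precurvatures. A sketch under that simplifying assumption (the moduli and dimensions below are made up):

```python
from math import pi

def area_moment(od, idm):
    """Second moment of area of an annular tube cross-section."""
    return pi / 64 * (od**4 - idm**4)

def combined_curvature(tubes):
    """Resultant curvature of concentric, torsionally rigid, aligned tubes.
    tubes: list of (E, outer_diameter, inner_diameter, precurvature)."""
    num = sum(E * area_moment(od, idm) * kappa for E, od, idm, kappa in tubes)
    den = sum(E * area_moment(od, idm) for E, od, idm, _ in tubes)
    return num / den

# Two nitinol-like tubes (SI units, illustrative): the stiffer tube dominates
tubes = [(60e9, 2.0e-3, 1.6e-3, 10.0), (60e9, 1.4e-3, 1.0e-3, 25.0)]
kappa = combined_curvature(tubes)
```

The torsional "forcing functions" discussed in the paper are precisely what this special case leaves out.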

  13. Using genes as characters and a parsimony analysis to explore the phylogenetic position of turtles.

    Directory of Open Access Journals (Sweden)

    Bin Lu

The phylogenetic position of turtles within the vertebrate tree of life remains controversial. Conflicting conclusions from different studies are likely a consequence of systematic error in the tree construction process, rather than random error from small amounts of data. Using genomic data, we evaluate the phylogenetic position of turtles with both conventional concatenated data analysis and a "genes as characters" approach. Two datasets were constructed, one with seven species (human, opossum, zebra finch, chicken, green anole, Chinese pond turtle, and western clawed frog) and 4584 orthologous genes, and the second with four additional species (soft-shelled turtle, Nile crocodile, royal python, and tuatara) but only 1638 genes. Our concatenated data analysis strongly supported turtle as the sister-group to archosaurs (the archosaur hypothesis), similar to several recent genomic data based studies using similar methods. When using genes as characters and gene trees as character-state trees with equal weighting for each gene, however, our parsimony analysis suggested that turtles are possibly sister-group to diapsids, archosaurs, or lepidosaurs. None of these resolutions were strongly supported by bootstraps. Furthermore, our incongruence analysis clearly demonstrated that there is a large amount of inconsistency among genes and most of the conflict relates to the placement of turtles. We conclude that the uncertain placement of turtles is a reflection of the true state of nature. Concatenated data analysis of large and heterogeneous datasets likely suffers from systematic error and over-estimates of confidence as a consequence of a large number of characters. Using genes as characters offers an alternative for phylogenomic analysis. It has potential to reduce systematic error, such as data heterogeneity and long-branch attraction, and it can also avoid problems associated with computation time and model selection. Finally, treating genes as

  14. Inferring phylogenetic networks by the maximum parsimony criterion: a case study.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2007-01-01

    Horizontal gene transfer (HGT) may result in genes whose evolutionary histories disagree with each other, as well as with the species tree. In this case, reconciling the species and gene trees results in a network of relationships, known as the "phylogenetic network" of the set of species. A phylogenetic network that incorporates HGT consists of an underlying species tree that captures vertical inheritance and a set of edges which model the "horizontal" transfer of genetic material. In a series of papers, Nakhleh and colleagues have recently formulated a maximum parsimony (MP) criterion for phylogenetic networks, provided an array of computationally efficient algorithms and heuristics for computing it, and demonstrated its plausibility on simulated data. In this article, we study the performance and robustness of this criterion on biological data. Our findings indicate that MP is very promising when its application is extended to the domain of phylogenetic network reconstruction and HGT detection. In all cases we investigated, the MP criterion detected the correct number of HGT events required to map the evolutionary history of a gene data set onto the species phylogeny. Furthermore, our results indicate that the criterion is robust with respect to both incomplete taxon sampling and the use of different site substitution matrices. Finally, our results show that the MP criterion is very promising in detecting HGT in chimeric genes, whose evolutionary histories are a mix of vertical and horizontal evolution. Besides the performance analysis of MP, our findings offer new insights into the evolution of 4 biological data sets and new possible explanations of HGT scenarios in their evolutionary history.
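The network MP criterion of Nakhleh and colleagues builds on classic tree parsimony. As background for the tree case only (not the authors' network algorithm), the per-character parsimony score can be sketched with Fitch's small-parsimony algorithm, using nested tuples as a toy tree encoding:

```python
def fitch(node):
    """Fitch small parsimony for one character on a rooted binary tree.
    Leaves are state strings; internal nodes are 2-tuples of subtrees.
    Returns (candidate state set at this node, minimal number of changes)."""
    if isinstance(node, str):
        return {node}, 0
    (s1, c1), (s2, c2) = fitch(node[0]), fitch(node[1])
    inter = s1 & s2
    if inter:                        # children agree on at least one state
        return inter, c1 + c2
    return s1 | s2, c1 + c2 + 1      # otherwise one extra change is needed

# Four taxa with observed states A, C, C, G at one site
states, changes = fitch((("A", "C"), ("C", "G")))  # changes == 2
```

Extending this score to a network amounts to minimizing over the trees the network displays, which is where the computational difficulty studied in the article arises.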

  15. Using Genes as Characters and a Parsimony Analysis to Explore the Phylogenetic Position of Turtles

    Science.gov (United States)

    Lu, Bin; Yang, Weizhao; Dai, Qiang; Fu, Jinzhong

    2013-01-01

    The phylogenetic position of turtles within the vertebrate tree of life remains controversial. Conflicting conclusions from different studies are likely a consequence of systematic error in the tree construction process, rather than random error from small amounts of data. Using genomic data, we evaluate the phylogenetic position of turtles with both conventional concatenated data analysis and a “genes as characters” approach. Two datasets were constructed, one with seven species (human, opossum, zebra finch, chicken, green anole, Chinese pond turtle, and western clawed frog) and 4584 orthologous genes, and the second with four additional species (soft-shelled turtle, Nile crocodile, royal python, and tuatara) but only 1638 genes. Our concatenated data analysis strongly supported turtle as the sister-group to archosaurs (the archosaur hypothesis), similar to several recent genomic data based studies using similar methods. When using genes as characters and gene trees as character-state trees with equal weighting for each gene, however, our parsimony analysis suggested that turtles are possibly sister-group to diapsids, archosaurs, or lepidosaurs. None of these resolutions were strongly supported by bootstraps. Furthermore, our incongruence analysis clearly demonstrated that there is a large amount of inconsistency among genes and most of the conflict relates to the placement of turtles. We conclude that the uncertain placement of turtles is a reflection of the true state of nature. Concatenated data analysis of large and heterogeneous datasets likely suffers from systematic error and over-estimates of confidence as a consequence of a large number of characters. Using genes as characters offers an alternative for phylogenomic analysis. It has potential to reduce systematic error, such as data heterogeneity and long-branch attraction, and it can also avoid problems associated with computation time and model selection. Finally, treating genes as characters

  16. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  17. Kinematic models of extensional structures

    International Nuclear Information System (INIS)

    Groshong, R.H. Jr.

    1990-01-01

This paper discusses kinematic models that can relate faults of different types and different positions within a single dynamic system and thereby offer the potential to explain the disparate seismic activity characteristic of extensional terrains. The major styles are full grabens, half grabens, domino blocks, and glide-block systems. Half grabens, the most likely models for Basin and Range structure, are formed above a master fault of decreasing dip with depth and a hangingwall that deforms as it passes over the curved fault. Second-order normal faults, typically domino style, accommodate the required hangingwall deformation. According to the author, low-angle detachment faults are consistent with the evidence of seismicity only on high-angle faults if the hangingwall of the detachment is broken by multiple half-graben systems.

  18. Testing Models of Psychopathology in Preschool-aged Children Using a Structured Interview-based Assessment

    Science.gov (United States)

    Dougherty, Lea R.; Bufferd, Sara J.; Carlson, Gabrielle A.; Klein, Daniel N.

    2014-01-01

A number of studies have found that broadband internalizing and externalizing factors provide a parsimonious framework for understanding the structure of psychopathology across childhood, adolescence, and adulthood. However, few of these studies have examined psychopathology in young children, and several recent studies have found support for alternative models, including a bi-factor model with common and specific factors. The present study used parents' (typically mothers') reports on a diagnostic interview in a community sample of 3-year-old children (n=541; 53.9% male) to compare the internalizing-externalizing latent factor model with a bi-factor model. The bi-factor model provided a better fit to the data. To test the concurrent validity of this solution, we examined associations between this model and paternal reports and laboratory observations of child temperament. The internalizing factor was associated with low levels of surgency and high levels of fear; the externalizing factor was associated with high levels of surgency and disinhibition and low levels of effortful control; and the common factor was associated with high levels of surgency and negative affect and low levels of effortful control. These results suggest that psychopathology in preschool-aged children may be explained by a single, common factor influencing nearly all disorders and unique internalizing and externalizing factors. These findings indicate that shared variance across internalizing and externalizing domains is substantial and are consistent with recent suggestions that emotion regulation difficulties may be a common vulnerability for a wide array of psychopathology. PMID:24652485

  19. Assessing Credit with Equity : A CEV Model with Jump to Default

    NARCIS (Netherlands)

    Campi, L.; Polbennikov, S.Y.; Sbuelz, A.

    2005-01-01

Unlike in structural and reduced-form models, we use equity as a liquid and observable primitive to analytically value corporate bonds and credit default swaps. Restrictive assumptions on the firm's capital structure are avoided. Default is parsimoniously represented by equity value hitting the zero

  20. Continuous-Time Semi-Markov Models in Health Economic Decision Making : An Illustrative Example in Heart Failure Disease Management

    NARCIS (Netherlands)

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease

  1. Timing of birth: Parsimony favors strategic over dysregulated parturition.

    Science.gov (United States)

    Catalano, Ralph; Goodman, Julia; Margerison-Zilko, Claire; Falconi, April; Gemmill, Alison; Karasek, Deborah; Anderson, Elizabeth

    2016-01-01

The "dysregulated parturition" narrative posits that the human stress response includes a cascade of hormones that "dysregulates" and accelerates parturition but provides questionable utility as a guide to understand or prevent preterm birth. We offer and test a "strategic parturition" narrative that not only predicts the excess preterm births that dysregulated parturition predicts but also makes testable, sex-specific predictions of the effect of stressful environments on the timing of birth among term pregnancies. We use interrupted time-series modeling of cohorts conceived over 101 months to test for lengthening of early term male gestations in a stressed population. We use an event widely reported to have stressed Americans and to have increased the incidence of low birth weight and fetal death across the country: the terrorist attacks of September 2001. We tested the hypothesis that the odds of male infants conceived in December 2000 (i.e., at term in September 2001) being born early as opposed to full term fell below the value expected from those conceived in the 50 prior and 50 following months. We found that term male gestations exposed to the terrorist attacks exhibited 4% lower likelihood of early, as opposed to full or late, term birth. Strategic parturition explains observed data for which the dysregulated parturition narrative offers no prediction: the timing of birth among gestations stressed at term. Our narrative may help explain why findings from studies examining associations between population- and/or individual-level stressors and preterm birth are generally mixed. © 2015 Wiley Periodicals, Inc.

  2. Soil Retaining Structures : Development of models for structural analysis

    NARCIS (Netherlands)

    Bakker, K.J.

    2000-01-01

    The topic of this thesis is the development of models for the structural analysis of soil retaining structures. The soil retaining structures being looked at are; block revetments, flexible retaining walls and bored tunnels in soft soil. Within this context typical structural behavior of these

  3. Innovative Bayesian and Parsimony Phylogeny of Dung Beetles (Coleoptera, Scarabaeidae, Scarabaeinae) Enhanced by Ontology-Based Partitioning of Morphological Characters

    Science.gov (United States)

    Tarasov, Sergei; Génier, François

    2015-01-01

    Scarabaeine dung beetles are the dominant dung feeding group of insects and are widely used as model organisms in conservation, ecology and developmental biology. Due to the conflicts among 13 recently published phylogenies dealing with the higher-level relationships of dung beetles, the phylogeny of this lineage remains largely unresolved. In this study, we conduct rigorous phylogenetic analyses of dung beetles, based on an unprecedented taxon sample (110 taxa) and detailed investigation of morphology (205 characters). We provide the description of morphology and thoroughly illustrate the used characters. Along with parsimony, traditionally used in the analysis of morphological data, we also apply the Bayesian method with a novel approach that uses anatomy ontology for matrix partitioning. This approach allows for heterogeneity in evolutionary rates among characters from different anatomical regions. Anatomy ontology generates a number of parameter-partition schemes which we compare using Bayes factor. We also test the effect of inclusion of autapomorphies in the morphological analysis, which hitherto has not been examined. Generally, schemes with more parameters were favored in the Bayesian comparison suggesting that characters located on different body regions evolve at different rates and that partitioning of the data matrix using anatomy ontology is reasonable; however, trees from the parsimony and all the Bayesian analyses were quite consistent. The hypothesized phylogeny reveals many novel clades and provides additional support for some clades recovered in previous analyses. Our results provide a solid basis for a new classification of dung beetles, in which the taxonomic limits of the tribes Dichotomiini, Deltochilini and Coprini are restricted and many new tribes must be described. Based on the consistency of the phylogeny with biogeography, we speculate that dung beetles may have originated in the Mesozoic contrary to the traditional view pointing to a

  4. Assessment of Genetic Heterogeneity in Structured Plant Populations Using Multivariate Whole-Genome Regression Models.

    Science.gov (United States)

    Lehermeier, Christina; Schön, Chris-Carolin; de Los Campos, Gustavo

    2015-09-01

Plant breeding populations exhibit varying levels of structure and admixture; these features are likely to induce heterogeneity of marker effects across subpopulations. Traditionally, structure has been dealt with as a potential confounder, and various methods exist to "correct" for population stratification. However, these methods induce a mean correction that does not account for heterogeneity of marker effects. The animal breeding literature offers a few recent studies that consider modeling genetic heterogeneity in multibreed data, using multivariate models. However, these methods have received little attention in plant breeding where population structure can have different forms. In this article we address the problem of analyzing data from heterogeneous plant breeding populations, using three approaches: (a) a model that ignores population structure [A-genome-based best linear unbiased prediction (A-GBLUP)], (b) a stratified (i.e., within-group) analysis (W-GBLUP), and (c) a multivariate approach that uses multigroup data and accounts for heterogeneity (MG-GBLUP). The performance of the three models was assessed on three different data sets: a diversity panel of rice (Oryza sativa), a maize (Zea mays L.) half-sib panel, and a wheat (Triticum aestivum L.) data set that originated from plant breeding programs. The estimated genomic correlations between subpopulations varied from null to moderate, depending on the genetic distance between subpopulations and traits. Our assessment of prediction accuracy features cases where ignoring population structure leads to a parsimonious, more powerful model, as well as others where the multivariate and stratified approaches have higher predictive power. In general, the multivariate approach appeared slightly more robust than either the A- or the W-GBLUP. Copyright © 2015 by the Genetics Society of America.
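The A-GBLUP baseline is equivalent to ridge regression on marker covariates, up to how the shrinkage parameter is tied to the variance components. A toy numpy sketch with made-up genotype and phenotype data (not the article's datasets):

```python
import numpy as np

def ridge_gblup(X_train, y_train, X_test, lam):
    """Marker-effect ridge regression; equivalent to GBLUP when lam is
    set from the ratio of residual to marker-effect variance."""
    p = X_train.shape[1]
    beta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(p),
                           X_train.T @ y_train)
    return X_test @ beta

# Toy genotypes (rows: lines, columns: markers) and phenotypes
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([2.0, -1.0, 1.0])
pred = ridge_gblup(X, y, np.array([[2.0, 1.0]]), lam=1e-6)
```

The stratified W-GBLUP simply fits this within each subpopulation, while MG-GBLUP would correlate the marker effects across groups instead of forcing them to be identical or independent.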

  5. DupTree: a program for large-scale phylogenetic analyses using gene tree parsimony.

    Science.gov (United States)

    Wehe, André; Bansal, Mukul S; Burleigh, J Gordon; Eulenstein, Oliver

    2008-07-01

    DupTree is a new software program for inferring rooted species trees from collections of gene trees using the gene tree parsimony approach. The program implements a novel algorithm that significantly improves upon the run time of standard search heuristics for gene tree parsimony, and enables the first truly genome-scale phylogenetic analyses. In addition, DupTree allows users to examine alternate rootings and to weight the reconciliation costs for gene trees. DupTree is an open source project written in C++. DupTree for Mac OS X, Windows, and Linux along with a sample dataset and an on-line manual are available at http://genome.cs.iastate.edu/CBL/DupTree
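DupTree's search heuristics are not reproduced here, but the duplication cost that gene tree parsimony minimizes can be sketched via LCA mapping: a gene-tree node counts as a duplication when it maps to the same species-tree node as one of its children. A toy Python sketch with trees as nested tuples and gene-tree leaves labeled directly by species names (a simplifying assumption):

```python
def leaves(tree):
    """Set of leaf labels under a nested-tuple tree."""
    if isinstance(tree, str):
        return frozenset([tree])
    return leaves(tree[0]) | leaves(tree[1])

def clades(tree, acc=None):
    """All clades (leaf sets) of a tree, collected for LCA lookup."""
    acc = [] if acc is None else acc
    acc.append(leaves(tree))
    if not isinstance(tree, str):
        clades(tree[0], acc)
        clades(tree[1], acc)
    return acc

def make_lca(species_tree):
    all_clades = clades(species_tree)
    def lca(taxa):
        # smallest species-tree clade containing all the taxa
        return min((c for c in all_clades if taxa <= c), key=len)
    return lca

def count_duplications(gene_tree, lca):
    """Returns (species under node, LCA mapping, duplication count)."""
    if isinstance(gene_tree, str):
        s = frozenset([gene_tree])
        return s, lca(s), 0
    s1, m1, d1 = count_duplications(gene_tree[0], lca)
    s2, m2, d2 = count_duplications(gene_tree[1], lca)
    s = s1 | s2
    m = lca(s)
    return s, m, d1 + d2 + (1 if m in (m1, m2) else 0)

lca = make_lca((("a", "b"), "c"))
_, _, dups = count_duplications((("a", "b"), ("a", "b")), lca)  # dups == 1
```

Gene tree parsimony then searches species-tree space for the tree minimizing this cost summed over all input gene trees, which is the optimization DupTree accelerates.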

  6. Parsimonious data

    DEFF Research Database (Denmark)

    Kristensen, Jakob Baek; Albrechtsen, Thomas; Dahl-Nielsen, Emil

    2017-01-01

    This study shows how liking politicians’ public Facebook posts can be used as an accurate measure for predicting present-day voter intention in a multiparty system. We highlight that a few, but selective digital traces produce prediction accuracies that are on par or even greater than most curren...

  7. On the Quirks of Maximum Parsimony and Likelihood on Phylogenetic Networks

    OpenAIRE

    Bryant, Christopher; Fischer, Mareike; Linz, Simone; Semple, Charles

    2015-01-01

    Maximum parsimony is one of the most frequently-discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become more and more apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer that are commonplace in many groups of organisms and result in mosaic patterns of relationships cannot be represented by a single phylogenetic tree. This is why phylogene...

  8. Models and structures: mathematical physics

    International Nuclear Information System (INIS)

    2003-01-01

    This document gathers research activities along 5 main directions. 1) Quantum chaos and dynamical systems. Recent results concern the extension of the exact WKB method that has led to a host of new results on the spectrum and wave functions. Progress have also been made in the description of the wave functions of chaotic quantum systems. Renormalization has been applied to the analysis of dynamical systems. 2) Combinatorial statistical physics. We see the emergence of new techniques applied to various such combinatorial problems, from random walks to random lattices. 3) Integrability: from structures to applications. Techniques of conformal field theory and integrable model systems have been developed. Progress is still made in particular for open systems with boundary conditions, in connection to strings and branes physics. Noticeable links between integrability and exact WKB quantization to 2-dimensional disordered systems have been highlighted. New correlations of eigenvalues and better connections to integrability have been formulated for random matrices. 4) Gravities and string theories. We have developed aspects of 2-dimensional string theory with a particular emphasis on its connection to matrix models as well as non-perturbative properties of M-theory. We have also followed an alternative path known as loop quantum gravity. 5) Quantum field theory. The results obtained lately concern its foundations, in flat or curved spaces, but also applications to second-order phase transitions in statistical systems

  9. Hybrid modelling of soil-structure interaction for embedded structures

    International Nuclear Information System (INIS)

    Gupta, S.; Penzien, J.

    1981-01-01

The basic methods currently being used for the analysis of soil-structure interaction fail to properly model three-dimensional embedded structures with flexible foundations. A hybrid model for the analysis of soil-structure interaction is developed in this investigation which takes advantage of the desirable features of both the finite element and substructure methods and which minimizes their undesirable features. The hybrid model is obtained by partitioning the total soil-structure system into a near-field and a far-field with a smooth hemispherical interface. The near-field consists of the structure and a finite region of soil immediately surrounding its base. The entire near-field may be modelled in three-dimensional form using the finite element method, thus taking advantage of its ability to model irregular geometries and the non-linear soil behavior in the immediate vicinity of the structure. (orig./WL)

  10. The application of a social cognition model in explaining fruit intake in Austrian, Norwegian and Spanish schoolchildren using structural equation modelling

    Directory of Open Access Journals (Sweden)

    Pérez-Rodrigo Carmen

    2007-11-01

Background The aim of this paper was to test the goodness of fit of the Attitude – Social influence – self-Efficacy (ASE) model in explaining schoolchildren's intentions to eat fruit and their actual fruit intake in Austria, Norway and Spain; to assess how well the model could explain the observed variance in intention to eat fruit and in reported fruit intake; and to investigate whether the same model would fit data from all three countries. Methods Samples consisted of schoolchildren from three of the countries participating in the cross-sectional part of the Pro Children project. Sample size varied from 991 in Austria to 1297 in Spain. Mean age ranged from 11.3 to 11.4 years. The initial model was designed using items and constructs from the Pro Children study. Factor analysis was conducted to test the structure of the measures in the model. The Norwegian sample was used to test the latent variable structure, to make a preliminary assessment of model fit, and to modify the model to increase goodness of fit with the data. The original and modified models were then applied to the Austrian and Spanish samples. All model analyses were carried out using structural equation modelling techniques. Results The ASE model fitted the Norwegian and Spanish data well. For Austria, a slightly more complex model was needed. For this reason multi-sample analysis to test equality in factor structure and loadings across countries could not be used. The models explained between 51% and 69% of the variance in intention to eat fruit, and 27% to 38% of the variance in reported fruit intake. Conclusion Structural equation modelling showed that a rather parsimonious model was useful in explaining the variation in fruit intake of 11-year-old schoolchildren in Norway and Spain. For Austria, more modifications were needed to fit the data.

  11. Parametric structural modeling of insect wings

    International Nuclear Information System (INIS)

    Mengesha, T E; Vallance, R R; Barraja, M; Mittal, R

    2009-01-01

    Insects produce thrust and lift forces via coupled fluid-structure interactions that bend and twist their compliant wings during flapping cycles. Insight into this fluid-structure interaction is achieved with numerical modeling techniques such as coupled finite element analysis and computational fluid dynamics, but these methods require accurate and validated structural models of insect wings. Structural models of insect wings depend principally on the shape, dimensions and material properties of the veins and membrane cells. This paper describes a method for parametric modeling of wing geometry using digital images and demonstrates the use of the geometric models in constructing three-dimensional finite element (FE) models and simple reduced-order models. The FE models are more complete and accurate than previously reported models since they accurately represent the topology of the vein network, as well as the shape and dimensions of the veins and membrane cells. The methods are demonstrated by developing a parametric structural model of a cicada forewing.

  12. Structure functions from chiral soliton models

    International Nuclear Information System (INIS)

    Weigel, H.; Reinhardt, H.; Gamberg, L.

    1997-01-01

We study nucleon structure functions within the bosonized Nambu-Jona-Lasinio (NJL) model where the nucleon emerges as a chiral soliton. We discuss the model predictions on the Gottfried sum rule for electron-nucleon scattering. A comparison with a low-scale parametrization shows that the model reproduces the gross features of the empirical structure functions. We also compute the leading twist contributions of the polarized structure functions g_1 and g_2 in this model. We compare the model predictions on these structure functions with data from the E143 experiment by GLAP evolving them from the scale characteristic for the NJL-model to the scale of the data

  13. Analyzing Oil Futures with a Dynamic Nelson-Siegel Model

    DEFF Research Database (Denmark)

    Hansen, Niels Strange; Lunde, Asger

In this paper we are interested in the term structure of futures contracts on oil. The objective is to specify a relatively parsimonious model which explains the data well and performs well in real-time out-of-sample forecasting. The dynamic Nelson-Siegel model is normally used to analyze and forecast interest rates of different maturities. The structure of oil futures resembles the structure of interest rates, and this motivates the use of this model for our purposes. The data set is vast, and the dynamic Nelson-Siegel model allows for a significant dimension reduction by introducing three…

  14. Integrative structure modeling with the Integrative Modeling Platform.

    Science.gov (United States)

    Webb, Benjamin; Viswanath, Shruthi; Bonomi, Massimiliano; Pellarin, Riccardo; Greenberg, Charles H; Saltzberg, Daniel; Sali, Andrej

    2018-01-01

    Building models of a biological system that are consistent with the myriad data available is one of the key challenges in biology. Modeling the structure and dynamics of macromolecular assemblies, for example, can give insights into how biological systems work, evolved, might be controlled, and even designed. Integrative structure modeling casts the building of structural models as a computational optimization problem, for which information about the assembly is encoded into a scoring function that evaluates candidate models. Here, we describe our open source software suite for integrative structure modeling, Integrative Modeling Platform (https://integrativemodeling.org), and demonstrate its use. © 2017 The Protein Society.

  15. Relating structure and dynamics in organisation models

    NARCIS (Netherlands)

    Jonkers, C.M.; Treur, J.

    2002-01-01

    To understand how an organisational structure relates to dynamics is an interesting fundamental challenge in the area of social modelling. Specifications of organisational structure usually have a diagrammatic form that abstracts from more detailed dynamics. Dynamic properties of agent systems,

  16. Modeling and identification in structural dynamics

    OpenAIRE

    Jayakumar, Paramsothy

    1987-01-01

    Analytical modeling of structures subjected to ground motions is an important aspect of fully dynamic earthquake-resistant design. In general, linear models are only sufficient to represent structural responses resulting from earthquake motions of small amplitudes. However, the response of structures during strong ground motions is highly nonlinear and hysteretic. System identification is an effective tool for developing analytical models from experimental data. Testing of full-scale prot...

  17. Measurement bias detection with Kronecker product restricted models for multivariate longitudinal data: an illustration with health-related quality of life data from thirteen measurement occasions

    NARCIS (Netherlands)

    Verdam, Mathilde G. E.; Oort, Frans J.

    2014-01-01

    Application of Kronecker product to construct parsimonious structural equation models for multivariate longitudinal data. A method for the investigation of measurement bias with Kronecker product restricted models. Application of these methods to health-related quality of life data from bone

  18. Composite scores in comparative effectiveness research: counterbalancing parsimony and dimensionality in patient-reported outcomes.

    Science.gov (United States)

    Schwartz, Carolyn E; Patrick, Donald L

    2014-07-01

When planning a comparative effectiveness study comparing disease-modifying treatments, competing demands influence the choice of outcomes. Current practice emphasizes parsimony, although understanding multidimensional treatment impact can help to personalize medical decision-making. We discuss both sides of this 'tug of war': the assumptions, advantages and drawbacks of composite scores and multidimensional outcomes. We describe possible solutions to the multiple-comparison problem, including conceptual hierarchy distinctions, statistical approaches, 'real-world' benchmarks of effectiveness, and subgroup analysis. We conclude that comparative effectiveness research should consider multiple outcome dimensions and compare different approaches that fit the individual context of study objectives.

  19. Modeling, Analysis, and Optimization Issues for Large Space Structures

    Science.gov (United States)

    Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)

    1983-01-01

    Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.

  20. Antibody structural modeling with prediction of immunoglobulin structure (PIGS)

    KAUST Repository

    Marcatili, Paolo; Olimpieri, Pier Paolo; Chailyan, Anna; Tramontano, Anna

    2014-01-01

    of antibodies with a very satisfactory accuracy. The strategy is completely automated and extremely fast, requiring only a few minutes (~10 min on average) to build a structural model of an antibody. It is based on the concept of canonical structures of antibody

  1. Mathematical Modeling: A Structured Process

    Science.gov (United States)

    Anhalt, Cynthia Oropesa; Cortez, Ricardo

    2015-01-01

    Mathematical modeling, in which students use mathematics to explain or interpret physical, social, or scientific phenomena, is an essential component of the high school curriculum. The Common Core State Standards for Mathematics (CCSSM) classify modeling as a K-12 standard for mathematical practice and as a conceptual category for high school…

  2. Relativistic models of nuclear structure

    International Nuclear Information System (INIS)

    Gillet, V.; Kim, E.J.; Cauvin, M.; Kohmura, T.; Ohnaka, S.

    1991-01-01

The introduction of the relativistic field formalism for the description of nuclear structure has improved our understanding of fundamental nuclear mechanisms such as saturation or many-body forces. We discuss some of this progress, both in the semi-classical mean field approximation and in a quantized meson field approach. (author)

  3. Probabilistic models for structured sparsity

    DEFF Research Database (Denmark)

    Andersen, Michael Riis

    sparse solutions to linear inverse problems. In this part, the sparsity promoting prior known as the spike-and-slab prior (Mitchell and Beauchamp, 1988) is generalized to the structured sparsity setting. An expectation propagation algorithm is derived for approximate posterior inference. The proposed...

  4. Modelling the harmonized tertiary Institutions Salary Structure ...

    African Journals Online (AJOL)

This paper analyses the Harmonized Tertiary Institution Salary Structure (HATISS IV) used in Nigeria. The irregularities in the structure are highlighted. A model that assumes a polynomial trend for the zero-step salary and an exponential trend for the incremental rates is suggested for the regularization of the structure.
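
A numerical sketch of the suggested trends: a polynomial for the zero-step salary and an exponentially trending incremental rate. The polynomial degree and all coefficients below are hypothetical, since the abstract only names the trend types:

```python
import numpy as np

def zero_step_salary(grade, coeffs=(50_000, 12_000, 800)):
    """Polynomial trend P(g) = c0 + c1*g + c2*g**2 (assumed quadratic)."""
    c0, c1, c2 = coeffs
    return c0 + c1 * grade + c2 * grade**2

def salary(grade, step, a=0.02, b=0.05):
    """Salary at a given step, with an exponentially trending
    incremental rate r(g) = a * exp(b * g) applied per step."""
    rate = a * np.exp(b * grade)
    return zero_step_salary(grade) * (1.0 + rate) ** step
```

Under this parametrization, `salary(g, 0)` equals the zero-step polynomial, and each step multiplies the salary by `1 + r(g)`.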

  5. Structural Equation Modeling of Multivariate Time Series

    Science.gov (United States)

    du Toit, Stephen H. C.; Browne, Michael W.

    2007-01-01

    The covariance structure of a vector autoregressive process with moving average residuals (VARMA) is derived. It differs from other available expressions for the covariance function of a stationary VARMA process and is compatible with current structural equation methodology. Structural equation modeling programs, such as LISREL, may therefore be…

  6. Structural modeling techniques by finite element method

    International Nuclear Information System (INIS)

    Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong

    1991-01-01

This book covers structural modeling techniques by the finite element method. Chapter 1, Finite Element Idealization: introduction, summary of the finite element method, equilibrium and compatibility in the finite element solution, degrees of freedom, symmetry and anti-symmetry, modeling guidelines, local analysis, example, references. Chapter 2, Static Analysis: structural geometry, finite element models, analysis procedure, modeling guidelines, references. Chapter 3, Dynamic Analysis: models for dynamic analysis, dynamic analysis procedures, modeling guidelines.

  7. Residual Structures in Latent Growth Curve Modeling

    Science.gov (United States)

    Grimm, Kevin J.; Widaman, Keith F.

    2010-01-01

    Several alternatives are available for specifying the residual structure in latent growth curve modeling. Two specifications involve uncorrelated residuals and represent the most commonly used residual structures. The first, building on repeated measures analysis of variance and common specifications in multilevel models, forces residual variances…

  8. A Teaching Model for Truss Structures

    Science.gov (United States)

    Bigoni, Davide; Dal Corso, Francesco; Misseroni, Diego; Tommasini, Mirko

    2012-01-01

    A classroom demonstration model has been designed, machined and successfully tested in different learning environments to facilitate understanding of the mechanics of truss structures, in which struts are subject to purely axial load and deformation. Gaining confidence with these structures is crucial for the development of lattice models, which…

  9. Exploring RNA structure by integrative molecular modelling

    DEFF Research Database (Denmark)

    Masquida, Benoît; Beckert, Bertrand; Jossinet, Fabrice

    2010-01-01

RNA molecular modelling is adequate to rapidly tackle the structure of RNA molecules. With new structured RNAs constituting a central class of cellular regulators discovered every year, the need for swift and reliable modelling methods is more crucial than ever. The pragmatic method based on interactive all-atom molecular modelling relies on the observation that specific structural motifs are recurrently found in RNA sequences. Once identified by a combination of comparative sequence analysis and biochemical data, the motifs composing the secondary structure of a given RNA can be extruded…

  10. Parsimonious wave-equation travel-time inversion for refraction waves

    KAUST Repository

    Fu, Lei

    2017-02-14

We present a parsimonious wave-equation travel-time inversion technique for refraction waves. A dense virtual refraction dataset can be generated from just two reciprocal shot gathers for the sources at the endpoints of the survey line, with N geophones evenly deployed along the line. These two reciprocal shots contain approximately 2N refraction travel times, which can be spawned into O(N²) refraction travel times by an interferometric transformation. Then, these virtual refraction travel times are used with a source wavelet to create N virtual refraction shot gathers, which are the input data for wave-equation travel-time inversion. Numerical results show that the parsimonious wave-equation travel-time tomogram has about the same accuracy as the tomogram computed by standard wave-equation travel-time inversion. The most significant benefit is that a reciprocal survey is far less time-consuming than the standard refraction survey, where a source is excited at each geophone location.
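
The interferometric expansion of two reciprocal gathers into O(N²) virtual travel times can be sketched with a few lines of array arithmetic. The redatuming relation used below, t(i, j) = t_b[i] + t_a[j] − t_ab, is an assumed standard form for head waves in refraction interferometry, not necessarily the exact transform of the paper:

```python
import numpy as np

def virtual_refraction_times(t_a, t_b, t_ab):
    """Expand two reciprocal refraction gathers into an N x N matrix
    of virtual head-wave travel times.

    t_a[i] : travel time from endpoint source A to geophone i
    t_b[i] : travel time from endpoint source B to geophone i
    t_ab   : reciprocal travel time between the two endpoint sources
    """
    t_a = np.asarray(t_a, dtype=float)
    t_b = np.asarray(t_b, dtype=float)
    # Outer sum builds all N*N virtual times at once.
    return t_b[:, None] + t_a[None, :] - t_ab

# Toy check on a single flat refractor: v = 2 km/s, intercept 0.05 s.
n = 5
x = np.linspace(0.0, 1000.0, n)   # geophone positions (m)
v, t0 = 2000.0, 0.05
t_a = t0 + x / v                  # shot at x = 0 m
t_b = t0 + (1000.0 - x) / v       # shot at x = 1000 m
t_ab = t0 + 1000.0 / v
T = virtual_refraction_times(t_a, t_b, t_ab)
# For this model T[i, j] reduces to t0 + (x[j] - x[i]) / v.
```

The 2N observed times thus spawn N² virtual times, which is the dimension blow-up the abstract describes.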

  11. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

A reference guide for applications of SEM using Mplus. Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated, along with recently developed advanced methods such as mixture modeling, model-based power analysis, and sample size estimation for SEM. The statistical modeling program Mplus is also featured and provides researchers with a

  12. The scenario on the origin of translation in the RNA world: in principle of replication parsimony

    Directory of Open Access Journals (Sweden)

    Ma Wentao

    2010-11-01

Background: It is now believed that in the origin of life, proteins should have been "invented" in an RNA world. However, due to the complexity of a possible RNA-based proto-translation system, this evolving process seems quite complicated and the associated scenario remains very blurry. Considering that RNA can bind amino acids with specificity, it has been reasonably supposed that initial peptides might have been synthesized on "RNA templates" containing multiple amino acid binding sites. This "Direct RNA Template" (DRT) mechanism is attractive because it should be the simplest mechanism for RNA to synthesize peptides, and thus very likely to have been adopted initially in the RNA world. How this mechanism could then develop into a proto-translation system is an interesting problem. Presentation of the hypothesis: Here an explanation of this problem is given using the principle of "replication parsimony": genetic information tends to be utilized in a parsimonious way under selection pressure, due to its replication cost (e.g., in the RNA world, nucleotides and ribozymes for RNA replication). Because a DRT would be quite long even for a short peptide, its replication cost would be great. Thus the diversity and the length of functional peptides synthesized by the DRT mechanism would be seriously limited. Adaptors (proto-tRNAs) would arise to allow a DRT's complementary strand (called "C-DRT" here) to direct the synthesis of the same peptide synthesized by the DRT itself. Because the C-DRT is a necessary part of the DRT's replication, fewer turns of the DRT's replication would be needed to synthesize definite copies of the functional peptide, thus saving the replication cost. Acting through adaptors, C-DRTs could transform into much shorter templates (called "proto-mRNAs" here) and substitute for the role of DRTs, thus significantly saving the replication cost.
A proto-rRNA corresponding to the small subunit rRNA would then emerge

  13. An evolving network model with community structure

    International Nuclear Information System (INIS)

    Li Chunguang; Maini, Philip K

    2005-01-01

Many social and biological networks consist of communities: groups of nodes within which connections are dense, but between which connections are sparser. Recently, there has been considerable interest in designing algorithms for detecting community structures in real-world complex networks. In this paper, we propose an evolving network model which exhibits community structure. The network model is based on inner-community and inter-community preferential attachment mechanisms. The degree distributions of this network model are analysed using a mean-field method. Theoretical results and numerical simulations indicate that this network model has community structure and scale-free properties.
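
As a hedged illustration of inner-/inter-community preferential attachment, the toy growth rule below reproduces the qualitative behaviour (dense intra-community links, sparse inter-community links). The parameter names and the seeding of each community with a connected pair are choices of this sketch, not the paper's exact rules:

```python
import random

def grow_community_network(n_communities=3, steps=200, m=2, p_inner=0.9, seed=1):
    """Grow a toy network with community structure.

    Each new node joins a random community and makes m attachment
    attempts: with probability p_inner the target is drawn from its own
    community, otherwise from the other communities; in both cases the
    target is chosen preferentially by degree.
    """
    rng = random.Random(seed)
    community, degree, edges = {}, {}, set()
    members = [[] for _ in range(n_communities)]
    node_id = 0
    # Seed every community with one connected pair of nodes.
    for c in range(n_communities):
        a, b = node_id, node_id + 1
        node_id += 2
        for v in (a, b):
            community[v], degree[v] = c, 1
            members[c].append(v)
        edges.add((a, b))
    for _ in range(steps):
        c = rng.randrange(n_communities)
        new = node_id
        node_id += 1
        community[new], degree[new] = c, 0
        for _ in range(m):
            if rng.random() < p_inner:
                pool = members[c]              # inner-community attachment
            else:
                pool = [v for v in members[0] + members[1] + members[2]
                        if community[v] != c]  # inter-community attachment
            target = rng.choices(pool, weights=[degree[v] for v in pool])[0]
            edge = (min(new, target), max(new, target))
            if edge not in edges:
                edges.add(edge)
                degree[new] += 1
                degree[target] += 1
        members[c].append(new)
    return edges, community

edges, community = grow_community_network()
intra = sum(1 for a, b in edges if community[a] == community[b])
```

With `p_inner = 0.9`, the large majority of edges end up inside communities, which is the defining signature of community structure.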

  14. Network structure exploration via Bayesian nonparametric models

    International Nuclear Information System (INIS)

    Chen, Y; Wang, X L; Xiang, X; Tang, B Z; Bu, J Z

    2015-01-01

Complex networks provide a powerful mathematical representation of complex systems in nature and society. To understand complex networks, it is crucial to explore their internal structures, also called structural regularities. The task of network structure exploration is to determine how many groups there are in a complex network and how to group the nodes of the network. Most existing structure exploration methods need to specify either a group number or a certain type of structure when they are applied to a network. In the real world, however, the group number and the type of structure a network has are usually unknown in advance. To explore structural regularities in complex networks automatically, without any prior knowledge of the group number or the type of structure, we use Bayesian nonparametric theory to extend a probabilistic mixture model that can handle networks with any type of structure but requires a specified group number. We also propose a novel Bayesian nonparametric model, called the Bayesian nonparametric mixture (BNPM) model. Experiments conducted on a large number of networks with different structures show that the BNPM model is able to explore structural regularities in networks automatically with a stable, state-of-the-art performance. (paper)

  15. Structural Modeling Using "Scanning and Mapping" Technique

    Science.gov (United States)

    Amos, Courtney L.; Dash, Gerald S.; Shen, J. Y.; Ferguson, Frederick; Noga, Donald F. (Technical Monitor)

    2000-01-01

Supported by NASA Glenn Center, we are in the process of developing a structural damage diagnostic and monitoring system for rocket engines, which consists of five modules: Structural Modeling, Measurement Data Pre-Processor, Structural System Identification, Damage Detection Criterion, and Computer Visualization. The function of the system is to detect damage as it is incurred by the engine structures. The scientific principle for identifying damage is to utilize the changes in the vibrational properties between the pre-damaged and post-damaged structures. The vibrational properties of the pre-damaged structure can be obtained from an analytic computer model of the structure. Thus, as the first stage of the whole research plan, we currently focus on the first module, Structural Modeling. Three computer software packages are selected and will be integrated for this purpose: PhotoModeler-Pro, AutoCAD-R14, and MSC/NASTRAN. AutoCAD is the most popular PC-CAD system currently available in the market. For our purpose, it serves as an interface to generate structural models of any particular engine parts or assembly, which are then passed to MSC/NASTRAN for extracting structural dynamic properties. Although AutoCAD is a powerful structural modeling tool, the complexity of engine components requires a further improvement in structural modeling techniques. We are working on a so-called "scanning and mapping" technique, which is relatively new. The basic idea is to produce a full and accurate 3D structural model by tracing over multiple overlapping photographs taken from different angles. There is no need to input point positions, angles, distances or axes. Photographs can be taken by any type of camera with different lenses. With the integration of such a modeling technique, the capability of structural modeling will be enhanced. The prototypes of any complex structural components will be produced by PhotoModeler first based on existing similar

  16. Quantitative structure - mesothelioma potency model ...

    Science.gov (United States)

Cancer potencies of mineral and synthetic elongated particle (EP) mixtures, including asbestos fibers, are influenced by changes in fiber dose composition, bioavailability, and biodurability in combination with relevant cytotoxic dose-response relationships. A unique and comprehensive rat intra-pleural (IP) dose characterization data set with a wide variety of EP size, shape, crystallographic, chemical, and bio-durability properties facilitated extensive statistical analyses of 50 rat IP exposure test results for evaluation of alternative dose pleural mesothelioma response models. Utilizing logistic regression, maximum likelihood evaluations of thousands of alternative dose metrics based on hundreds of individual EP dimensional variations within each test sample, four major findings emerged: (1) data for simulations of short-term EP dose changes in vivo (mild acid leaching) provide superior predictions of tumor incidence compared to non-acid-leached data; (2) the sum of the EP surface areas (ΣSA) from these mildly acid-leached samples provides the optimum holistic dose-response model; (3) progressive removal of dose associated with very short and/or thin EPs significantly degrades the resultant ΣEP or ΣSA dose-based predictive model fits, as judged by Akaike's Information Criterion (AIC); and (4) alternative, biologically plausible model adjustments provide evidence for reduced potency of EPs with length/width (aspect) ratios 80 µm. Regar
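
The AIC-based model ranking mentioned in finding (3) is simple to reproduce in outline. The log-likelihood values and metric names below are invented for demonstration; only the formula AIC = 2k − 2 ln(L_max) is standard:

```python
def aic(max_log_likelihood, n_params):
    """Akaike's Information Criterion: AIC = 2k - 2*ln(L_max)."""
    return 2 * n_params - 2 * max_log_likelihood

# Hypothetical maximized log-likelihoods of two logistic dose-response
# fits to the same tumour-incidence data (names and numbers invented).
fits = {"sum_EP": (-41.3, 2), "sum_SA": (-35.8, 2)}
scores = {name: aic(ll, k) for name, (ll, k) in fits.items()}
best = min(scores, key=scores.get)  # lower AIC indicates a better model
```

Since both fits here use the same number of parameters, the comparison reduces to the log-likelihoods; AIC matters when candidate dose metrics differ in complexity.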

  17. Tree-Structured Digital Organisms Model

    Science.gov (United States)

    Suzuki, Teruhiko; Nobesawa, Shiho; Tahara, Ikuo

Tierra and Avida are well-known models of digital organisms. They describe a life process as a sequence of computation codes. A linear sequence model may not be the only way to describe a digital organism, though it is very simple for a computer-based model. Thus we propose a new digital organism model based on a tree structure, which is rather similar to genetic programming. In our model, a life process is a combination of various functions, as life in the real world is. This implies that our model can easily describe the hierarchical structure of life, and that it can simulate evolutionary computation through the mutual interaction of functions. We verified by simulations that our model can be regarded as a digital organism model according to its definitions. Our model even succeeded in creating species such as viruses and parasites.
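
A minimal sketch of the tree-structured idea: each internal node is a function combining its children, so a "life process" is a composition of functions rather than a linear code sequence. The node vocabulary and the mutation rule below are invented for illustration, not taken from the paper:

```python
import random

# Internal nodes apply an operation to their two children.
OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def evaluate(node, x):
    """Recursively evaluate a tree of ('op', left, right) tuples."""
    if node[0] == "x":
        return x
    if node[0] == "const":
        return node[1]
    op, left, right = node
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(node, rng):
    """Point mutation: replace a random subtree with a constant leaf."""
    if node[0] in ("x", "const") or rng.random() < 0.3:
        return ("const", rng.randint(0, 9))
    op, left, right = node
    if rng.random() < 0.5:
        return (op, mutate(left, rng), right)
    return (op, left, mutate(right, rng))

# (x * 2) + 3, then a mutated offspring of it.
org = ("add", ("mul", ("x",), ("const", 2)), ("const", 3))
child = mutate(org, random.Random(0))
```

Because subtrees are self-contained functions, hierarchical structure and mutual interaction of functions fall out of the representation directly.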

  18. Structured statistical models of inductive reasoning.

    Science.gov (United States)

    Kemp, Charles; Tenenbaum, Joshua B

    2009-01-01

Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge and should explain how different kinds of knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. This article presents a Bayesian framework that attempts to meet both goals and describes four applications of the framework: a taxonomic model, a spatial model, a threshold model, and a causal model. Each model makes probabilistic inferences about the extensions of novel properties, but the priors for the four models are defined over different kinds of structures that capture different relationships between the categories in a domain. The framework therefore shows how statistical inference can operate over structured background knowledge, and the authors argue that this interaction between structure and statistics is critical for explaining the power and flexibility of human reasoning.

  19. Modeling protein structures: construction and their applications.

    Science.gov (United States)

    Ring, C S; Cohen, F E

    1993-06-01

    Although no general solution to the protein folding problem exists, the three-dimensional structures of proteins are being successfully predicted when experimentally derived constraints are used in conjunction with heuristic methods. In the case of interleukin-4, mutagenesis data and CD spectroscopy were instrumental in the accurate assignment of secondary structure. In addition, the tertiary structure was highly constrained by six cysteines separated by many residues that formed three disulfide bridges. Although the correct structure was a member of a short list of plausible structures, the "best" structure was the topological enantiomer of the experimentally determined conformation. For many proteases, other experimentally derived structures can be used as templates to identify the secondary structure elements. In a procedure called modeling by homology, the structure of a known protein is used as a scaffold to predict the structure of another related protein. This method has been used to model a serine and a cysteine protease that are important in the schistosome and malarial life cycles, respectively. The model structures were then used to identify putative small molecule enzyme inhibitors computationally. Experiments confirm that some of these nonpeptidic compounds are active at concentrations of less than 10 microM.

  20. A first course in structural equation modeling

    CERN Document Server

    Raykov, Tenko

    2012-01-01

In this book, authors Tenko Raykov and George A. Marcoulides introduce students to the basics of structural equation modeling (SEM) through a conceptual, nonmathematical approach. For ease of understanding, the few mathematical formulas presented are used in a conceptual or illustrative way, rather than a computational one. Featuring examples from EQS, LISREL, and Mplus, A First Course in Structural Equation Modeling is an excellent beginner's guide to learning how to set up input files to fit the most commonly used types of structural equation models with these programs. The basic ideas and methods for conducting SEM are independent of any particular software. Highlights of the Second Edition include: review of latent change (growth) analysis models at an introductory level; coverage of the popular Mplus program; updated examples of LISREL and EQS; and a CD that contains all of the text's LISREL, EQS, and Mplus examples. A First Course in Structural Equation Modeling is intended as an introductory book for students...

  1. Capital Structure: Target Adjustment Model and a Mediation Moderation Model with Capital Structure as Mediator

    OpenAIRE

    Abedmajid, Mohammed

    2015-01-01

This study consists of two models. Model 1 is conducted to check whether there is target adjustment toward an optimal capital structure, in the context of Turkish firms listed on the stock market, over the period 2003-2014. Model 2 captures the interaction between firm size, profitability, market value and capital structure using the moderation mediation model. The results of Model 1 show that there is partial adjustment of the capital structure toward target levels. The results of...

  2. Modeling of soil-water-structure interaction

    DEFF Research Database (Denmark)

    Tang, Tian

    as the developed nonlinear soil displacements and stresses under monotonic and cyclic loading. With the FVM nonlinear coupled soil models as a basis, multiphysics modeling of wave-seabed-structure interaction is carried out. The computations are done in an open source code environment, OpenFOAM, where FVM models...

  3. Multiplicity Control in Structural Equation Modeling

    Science.gov (United States)

    Cribbie, Robert A.

    2007-01-01

Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…

  4. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  5. Intelligent-based Structural Damage Detection Model

    International Nuclear Information System (INIS)

    Lee, Eric Wai Ming; Yu, K.F.

    2010-01-01

This paper presents the application of a novel Artificial Neural Network (ANN) model for the diagnosis of structural damage. The ANN model, denoted as the GRNNFA, is a hybrid model combining the General Regression Neural Network (GRNN) and the Fuzzy ART (FA) model. It not only retains the important features of the GRNN and FA models (i.e. fast and stable network training and incremental growth of network structure) but also facilitates the removal of the noise embedded in the training samples. Structural damage alters the stiffness distribution of the structure, thereby changing the natural frequencies and mode shapes of the system. The measured modal parameter changes due to a particular damage are treated as patterns for that damage. The proposed GRNNFA model was trained to learn those patterns in order to detect the possible damage location of the structure. Simulated data is employed to verify and illustrate the procedures of the proposed ANN-based damage diagnosis methodology. The results of this study have demonstrated the feasibility of applying the GRNNFA model to structural damage diagnosis even when the training samples were noise contaminated.

  6. Intelligent-based Structural Damage Detection Model

    Science.gov (United States)

    Lee, Eric Wai Ming; Yu, Kin Fung

    2010-05-01

This paper presents the application of a novel Artificial Neural Network (ANN) model for the diagnosis of structural damage. The ANN model, denoted as the GRNNFA, is a hybrid model combining the General Regression Neural Network (GRNN) and the Fuzzy ART (FA) model. It not only retains the important features of the GRNN and FA models (i.e. fast and stable network training and incremental growth of network structure) but also facilitates the removal of the noise embedded in the training samples. Structural damage alters the stiffness distribution of the structure, thereby changing the natural frequencies and mode shapes of the system. The measured modal parameter changes due to a particular damage are treated as patterns for that damage. The proposed GRNNFA model was trained to learn those patterns in order to detect the possible damage location of the structure. Simulated data is employed to verify and illustrate the procedures of the proposed ANN-based damage diagnosis methodology. The results of this study have demonstrated the feasibility of applying the GRNNFA model to structural damage diagnosis even when the training samples were noise contaminated.

  7. Structure functions in the chiral bag model

    International Nuclear Information System (INIS)

    Sanjose, V.; Vento, V.; Centro Mixto CSIC/Valencia Univ., Valencia

    1989-01-01

We calculate the structure functions of an isoscalar nuclear target for deep inelastic scattering by leptons in an extended version of the chiral bag model which incorporates the q anti-q structure of the pions in the cloud. Bjorken scaling and Regge behavior are satisfied. The model calculation reproduces the low-x behavior of the data but fails to explain the medium- to large-x behavior. Evolution of the quark structure functions seems inevitable to attempt a connection between the low-energy models and the high-energy behavior of quantum chromodynamics. (orig.)

  8. Structure functions in the chiral bag model

    Energy Technology Data Exchange (ETDEWEB)

    Sanjose, V.; Vento, V.

    1989-07-13

We calculate the structure functions of an isoscalar nuclear target for deep inelastic scattering by leptons in an extended version of the chiral bag model which incorporates the q anti-q structure of the pions in the cloud. Bjorken scaling and Regge behavior are satisfied. The model calculation reproduces the low-x behavior of the data but fails to explain the medium- to large-x behavior. Evolution of the quark structure functions seems inevitable to attempt a connection between the low-energy models and the high-energy behavior of quantum chromodynamics. (orig.).

  9. Structural classification and a binary structure model for superconductors

    Institute of Scientific and Technical Information of China (English)

    Dong Cheng

    2006-01-01

    Based on structural and bonding features, a new classification scheme for superconductors is proposed, in which superconductors are partitioned into two parts: a superconducting active component and a supplementary component. Partially metallic covalent bonding is found to be a common feature in all superconducting active components, and the electron states of the atoms in the active components usually make a dominant contribution to the energy band near the Fermi surface. Possible directions in which to explore new superconductors are discussed based on the structural classification and the binary structure model.

  10. Automated Protein Structure Modeling with SWISS-MODEL Workspace and the Protein Model Portal

    OpenAIRE

    Bordoli, Lorenza; Schwede, Torsten

    2012-01-01

    Comparative protein structure modeling is a computational approach to build three-dimensional structural models for proteins using experimental structures of related protein family members as templates. Regular blind assessments of modeling accuracy have demonstrated that comparative protein structure modeling is currently the most reliable technique to model protein structures. Homology models are often sufficiently accurate to substitute for experimental structures in a wide variety of appl...

  11. MMM: A toolbox for integrative structure modeling.

    Science.gov (United States)

    Jeschke, Gunnar

    2018-01-01

    Structural characterization of proteins and their complexes may require integration of restraints from various experimental techniques. MMM (Multiscale Modeling of Macromolecules) is a Matlab-based open-source modeling toolbox for this purpose, with a particular emphasis on distance distribution restraints obtained from electron paramagnetic resonance experiments on spin-labelled proteins and nucleic acids, and on their combination with atomistic structures of domains or whole protomers, small-angle scattering data, secondary structure information, homology information, and elastic network models. MMM not only integrates various types of restraints, but also various existing modeling tools, by providing a common graphical user interface to them. The types of restraints that can support such modeling and the available model types are illustrated by recent application examples. © 2017 The Protein Society.

  12. Structural modeling for multicell composite rotor blades

    Science.gov (United States)

    Rehfield, Lawrence W.; Atilgan, Ali R.

    1987-01-01

    Composite material systems are currently good candidates for aerospace structures, primarily for the design flexibility they offer, i.e., it is possible to tailor the material and manufacturing approach to the application. A working definition of elastic or structural tailoring is the use of structural concept, fiber orientation, ply stacking sequence, and a blend of materials to achieve specific performance goals. In the design process, choices of materials and dimensions are made which produce specific response characteristics, and which permit the selected goals to be achieved. Common choices for tailoring goals are preventing instabilities or vibration resonances or enhancing damage tolerance. An essential, enabling factor in the design of tailored composite structures is structural modeling that accurately, but simply, characterizes response. The objective of this paper is to present a new multicell beam model for composite rotor blades and to validate predictions based on the new model by comparison with a finite element simulation in three benchmark static load cases.

  13. Observations and Modeling of Atmospheric Radiance Structure

    National Research Council Canada - National Science Library

    Wintersteiner, Peter

    2001-01-01

    The overall purpose of the work that we have undertaken is to provide new capabilities for observing and modeling structured radiance in the atmosphere, particularly the non-LTE regions of the atmosphere...

  14. VISCOELASTIC STRUCTURAL MODEL OF ASPHALT CONCRETE

    Directory of Open Access Journals (Sweden)

    V. Bogomolov

    2016-06-01

    The viscoelastic rheological model of asphalt concrete based on the generalized Kelvin model is offered. The mathematical model of asphalt concrete viscoelastic behavior that can be used for strength and rutting calculations of the upper asphalt concrete layers of non-rigid pavements has been developed. It has been proved that the structural model of Burgers does not fully meet all the requirements for asphalt concrete.

  15. Global model structures for ∗-modules

    DEFF Research Database (Denmark)

    Böhme, Benjamin

    2018-01-01

    We extend Schwede's work on the unstable global homotopy theory of orthogonal spaces and L-spaces to the category of ∗-modules (i.e., unstable S-modules). We prove a theorem which transports model structures and their properties from L-spaces to ∗-modules and show that the resulting global model structure for ∗-modules is monoidally Quillen equivalent to that of orthogonal spaces. As a consequence, there are induced Quillen equivalences between the associated model categories of monoids, which identify equivalent models for the global homotopy theory of A∞-spaces.

  16. More quality measures versus measuring what matters: a call for balance and parsimony.

    Science.gov (United States)

    Meyer, Gregg S; Nelson, Eugene C; Pryor, David B; James, Brent; Swensen, Stephen J; Kaplan, Gary S; Weissberg, Jed I; Bisognano, Maureen; Yates, Gary R; Hunt, Gordon C

    2012-11-01

    External groups requiring measures now include public and private payers, regulators, accreditors and others that certify performance levels for consumers, patients and payers. Although benefits have accrued from the growth in quality measurement, the recent explosion in the number of measures threatens to shift resources from improving quality to covering a plethora of quality-performance metrics that may have a limited impact on the things that patients and payers want and need (i.e., better outcomes, better care, and lower per capita costs). Here we propose a policy that quality measurement should be: balanced, to meet the need of end users to judge quality and cost performance and the need of providers to continuously improve the quality, outcomes and costs of their services; and parsimonious, to measure quality, outcomes and costs with appropriate metrics that are selected based on end-user needs.

  17. Time-Lapse Monitoring of Subsurface Fluid Flow using Parsimonious Seismic Interferometry

    KAUST Repository

    Hanafy, Sherif

    2017-04-21

    A typical small-scale seismic survey (such as 240 shot gathers) takes at least 16 working hours to complete, which is a major obstacle for time-lapse monitoring experiments. This is especially true if the subject that needs to be monitored is rapidly changing. In this work, we discuss how to decrease the recording time from 16 working hours to less than one hour of recording, where the virtual data has the same accuracy as the conventional data. We validate the efficacy of parsimonious seismic interferometry for time-lapse monitoring with field examples, in which we were able to record 30 different data sets within a 2-hour period. The recorded data are then processed to generate 30 snapshots that show the spread of water from the ground surface down to a few meters.

  18. Singular Spectrum Analysis for Astronomical Time Series: Constructing a Parsimonious Hypothesis Test

    Science.gov (United States)

    Greco, G.; Kondrashov, D.; Kobayashi, S.; Ghil, M.; Branchesi, M.; Guidorzi, C.; Stratta, G.; Ciszak, M.; Marino, F.; Ortolan, A.

    We present a data-adaptive spectral method - Monte Carlo Singular Spectrum Analysis (MC-SSA) - and its modification to tackle astrophysical problems. Through numerical simulations we show the ability of MC-SSA to deal with 1/f β power-law noise affected by photon counting statistics. Such a noise process is simulated by a first-order autoregressive AR(1) process corrupted by intrinsic Poisson noise. In doing so, we statistically estimate a basic stochastic variation of the source and the corresponding fluctuations due to the quantum nature of light. In addition, the MC-SSA test retains its effectiveness even when a significant percentage of the signal falls below a certain level of detection, e.g., caused by the instrument sensitivity. The parsimonious approach presented here may be broadly applied, from the search for extrasolar planets to the extraction of low-intensity coherent phenomena possibly hidden in high-energy transients.
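
    The MC-SSA recipe can be sketched end to end: embed the series in a lag-covariance matrix, extract the leading EOF, fit an AR(1) null hypothesis to the data itself, and compare the data eigenvalue against projections of AR(1) surrogate covariances onto that EOF. The toy light curve, window length, surrogate count and one-sided threshold below are illustrative assumptions; the real method uses a full eigendecomposition rather than power iteration:

```python
import math, random

def lag_cov(x, m):
    """m x m Toeplitz lag-covariance matrix of a series (SSA estimate)."""
    n = len(x)
    mu = sum(x) / n
    xc = [v - mu for v in x]
    c = [sum(xc[i] * xc[i + k] for i in range(n - k)) / (n - k) for k in range(m)]
    return [[c[abs(i - j)] for j in range(m)] for i in range(m)]

def leading_pair(a, iters=300):
    """Leading eigenvalue/eigenvector of a symmetric PSD matrix
    by power iteration (sufficient for this sketch)."""
    m = len(a)
    v = [random.random() for _ in range(m)]
    lam = 1.0
    for _ in range(iters):
        w = [sum(a[i][j] * v[j] for j in range(m)) for i in range(m)]
        lam = math.sqrt(sum(wi * wi for wi in w))
        v = [wi / lam for wi in w]
    return lam, v

def quad(v, a):
    """Quadratic form v' A v, i.e. variance of A projected onto v."""
    return sum(v[i] * sum(a[i][j] * v[j] for j in range(len(v)))
               for i in range(len(v)))

random.seed(42)
n, m = 400, 20

# Toy "light curve": a period-20 oscillation buried in AR(1) noise.
noise = [0.0]
for _ in range(n - 1):
    noise.append(0.5 * noise[-1] + random.gauss(0.0, 0.4))
x = [math.sin(2.0 * math.pi * t / 20.0) + noise[t] for t in range(n)]

cd = lag_cov(x, m)
lam1, eof1 = leading_pair(cd)

# Null hypothesis: AR(1) fitted to the data, as in MC-SSA.
phi = cd[0][1] / cd[0][0]
sd = math.sqrt(max(cd[0][0] * (1.0 - phi * phi), 1e-12))
null_stats = []
for _ in range(30):
    s = [0.0]
    for _ in range(n - 1):
        s.append(phi * s[-1] + random.gauss(0.0, sd))
    # Project each surrogate's covariance onto the data's leading EOF:
    null_stats.append(quad(eof1, lag_cov(s, m)))

detected = lam1 > sorted(null_stats)[-2]   # ~95th percentile of 30 surrogates
```

    Projecting the surrogate covariances onto the data EOF, rather than comparing raw eigenvalues, is what keeps the test honest: an AR(1) null fitted to oscillatory data has inflated memory, but little power at the oscillation frequency itself.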

  19. Linear causal modeling with structural equations

    CERN Document Server

    Mulaik, Stanley A

    2009-01-01

    Emphasizing causation as a functional relationship between variables that describe objects, Linear Causal Modeling with Structural Equations integrates a general philosophical theory of causation with structural equation modeling (SEM) that concerns the special case of linear causal relations. In addition to describing how the functional relation concept may be generalized to treat probabilistic causation, the book reviews historical treatments of causation and explores recent developments in experimental psychology on studies of the perception of causation. It looks at how to perceive causal

  20. Relating structure and dynamics in organisation models

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.

    2003-01-01

    To understand how an organisational structure relates to dynamics is an interesting fundamental challenge in the area of social modelling. Specifications of organisational structure usually have a diagrammatic form that abstracts from more detailed dynamics. Dynamic properties of agent systems, on

  1. Modelling point patterns with linear structures

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    2009-01-01

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line shaped structures. We consider simulations of this model and compare with real data.
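
    The sequential construction described above is easy to sketch: with some probability a new point extends a line from an already-placed point, otherwise it starts a fresh structure. All numeric settings below (extension probability, step length, bearing jitter) are invented for illustration:

```python
import math, random

def linear_structure_points(n, p_extend=0.85, step=0.02, seed=3):
    """Place n points sequentially in the unit square; each new point
    either extends a line from a previously placed point (probability
    p_extend) or opens a new structure uniformly at random."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random())]
    headings = [rng.uniform(0.0, 2.0 * math.pi)]
    for _ in range(n - 1):
        if rng.random() < p_extend:
            i = rng.randrange(len(pts))
            h = headings[i] + rng.gauss(0.0, 0.1)   # slight bearing jitter
            nx = pts[i][0] + step * math.cos(h)
            ny = pts[i][1] + step * math.sin(h)
            if 0.0 <= nx <= 1.0 and 0.0 <= ny <= 1.0:
                pts.append((nx, ny))
                headings.append(h)
                continue
        # New structure: a uniform point with a fresh heading.
        pts.append((rng.random(), rng.random()))
        headings.append(rng.uniform(0.0, 2.0 * math.pi))
    return pts

pts = linear_structure_points(300)
```

    Plotting `pts` shows short, roughly straight chains of points scattered over otherwise empty background, the qualitative pattern the abstract describes.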

  3. Structured population models in biology and epidemiology

    CERN Document Server

    Ruan, Shigui

    2008-01-01

    This book consists of six chapters written by leading researchers in mathematical biology. These chapters present recent and important developments in the study of structured population models in biology and epidemiology. Topics include population models structured by age, size, and spatial position; size-structured models for metapopulations, macroparasitic diseases, and prion proliferation; models for transmission of microparasites between host populations living on non-coincident spatial domains; spatiotemporal patterns of disease spread; the method of aggregation of variables in population dynamics; and biofilm models. It is suitable as a textbook for a mathematical biology course or a summer school at the advanced undergraduate and graduate level. It can also serve as a reference book for researchers looking for either interesting and specific problems to work on or useful techniques and discussions of some particular problems.

  4. Structural Identifiability of Dynamic Systems Biology Models.

    Science.gov (United States)

    Villaverde, Alejandro F; Barreiro, Antonio; Papachristodoulou, Antonis

    2016-10-01

    A powerful way of gaining insight into biological systems is by creating a nonlinear differential equation model, which usually contains many unknown parameters. Such a model is called structurally identifiable if it is possible to determine the values of its parameters from measurements of the model outputs. Structural identifiability is a prerequisite for parameter estimation, and should be assessed before exploiting a model. However, this analysis is seldom performed due to the high computational cost involved in the necessary symbolic calculations, which quickly becomes prohibitive as the problem size increases. In this paper we show how to analyse the structural identifiability of a very general class of nonlinear models by extending methods originally developed for studying observability. We present results about models whose identifiability had not been previously determined, report unidentifiabilities that had not been found before, and show how to modify those unidentifiable models to make them identifiable. This method helps prevent problems caused by lack of identifiability analysis, which can compromise the success of tasks such as experiment design, parameter estimation, and model-based optimization. The procedure is called STRIKE-GOLDD (STRuctural Identifiability taKen as Extended-Generalized Observability with Lie Derivatives and Decomposition), and it is implemented in a MATLAB toolbox which is available as open source software. The broad applicability of this approach facilitates the analysis of the increasingly complex models used in systems biology and other areas.
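
    A concrete instance of (un)identifiability may help: in the classic model dx/dt = -(k1 + k2)x with output y = x, only the sum k1 + k2 can be determined from the output, so the individual rate constants are structurally unidentifiable. This sketch (not STRIKE-GOLDD itself, which works symbolically with Lie derivatives) demonstrates the symptom numerically:

```python
def simulate(k1, k2, x0=1.0, dt=0.01, steps=300):
    """Forward-Euler run of dx/dt = -(k1 + k2) * x with output y = x."""
    x, ys = x0, []
    for _ in range(steps):
        ys.append(x)
        x += dt * (-(k1 + k2) * x)
    return ys

# Two parameter sets with the same sum produce (numerically) identical
# outputs, so k1 and k2 are not identifiable individually; a set with a
# different sum produces a clearly distinct output.
ya = simulate(0.3, 0.7)
yb = simulate(0.5, 0.5)
yc = simulate(0.4, 0.8)
same = max(abs(a - b) for a, b in zip(ya, yb))
diff = max(abs(a - c) for a, c in zip(ya, yc))
```

    Modifying the model as the paper suggests, e.g. reparametrizing with k = k1 + k2, restores identifiability without changing the observable behaviour.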

  5. Numerical Modelling of Structures with Uncertainties

    Directory of Open Access Journals (Sweden)

    Kahsin Maciej

    2017-04-01

    The nature of environmental interactions, as well as the large dimensions and complex structure of marine offshore objects, makes designing, building and operating these objects a great challenge. This is the reason why a vast majority of investment cases of this type include structural analysis, performed using scaled laboratory models and complemented by extended computer simulations. The present paper focuses on FEM modelling of the offshore wind turbine supporting structure. The problem is then studied using modal analysis, sensitivity analysis, as well as the design of experiment (DOE) and response surface model (RSM) methods. The results of modal-analysis-based simulations were used for assessing the quality of the FEM model against the data measured during the experimental modal analysis of the scaled laboratory model for different support conditions. The sensitivity analysis, in turn, has provided opportunities for assessing the effect of individual FEM model parameters on the dynamic response of the examined supporting structure. The DOE and RSM methods made it possible to determine the effect of model parameter changes on the supporting structure response.
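
    The DOE/RSM step can be illustrated with a tiny example: evaluate a response on a 3x3 factorial design and fit a quadratic response surface by least squares. The design, the basis, and the synthetic response coefficients below are invented for illustration, not taken from the paper:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def basis(a, b):
    """Full quadratic response-surface basis in two coded factors."""
    return [1.0, a, b, a * a, b * b, a * b]

# 3^2 full-factorial design in coded units, e.g. two FEM parameters
# (say stiffness and mass scaling) each at levels -1, 0, +1:
design = [(a, b) for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)]

# Hypothetical simulator response at each design point:
def response(a, b):
    return 5.0 + 2.0 * a - 3.0 * b + 1.5 * a * a + 0.5 * a * b

rows = [basis(a, b) for a, b in design]
ys = [response(a, b) for a, b in design]

# Normal equations X'X beta = X'y for the least-squares fit:
k = 6
XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
Xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(k)]
beta = solve(XtX, Xty)
```

    With a noise-free quadratic response the fitted coefficients reproduce the generating ones exactly; with a real simulator in place of `response`, the fitted surface becomes the cheap surrogate that the DOE/RSM analysis interrogates.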

  6. Antibody structural modeling with prediction of immunoglobulin structure (PIGS)

    DEFF Research Database (Denmark)

    Marcatili, Paolo; Olimpieri, Pier Paolo; Chailyan, Anna

    2014-01-01

    Antibodies (or immunoglobulins) are crucial for defending organisms from pathogens, but they are also key players in many medical, diagnostic and biotechnological applications. The ability to predict their structure and the specific residues involved in antigen recognition has several useful applications in all of these areas. Over the years, we have developed or collaborated in developing a strategy that enables researchers to predict the 3D structure of antibodies with a very satisfactory accuracy. The strategy is completely automated and extremely fast, requiring only a few minutes (∼10 min on average) to build a structural model of an antibody. It is based on the concept of canonical structures of antibody loops and on our understanding of the way light and heavy chains pack together.

  7. Antibody structural modeling with prediction of immunoglobulin structure (PIGS)

    KAUST Repository

    Marcatili, Paolo

    2014-11-06

    © 2014 Nature America, Inc. All rights reserved. Antibodies (or immunoglobulins) are crucial for defending organisms from pathogens, but they are also key players in many medical, diagnostic and biotechnological applications. The ability to predict their structure and the specific residues involved in antigen recognition has several useful applications in all of these areas. Over the years, we have developed or collaborated in developing a strategy that enables researchers to predict the 3D structure of antibodies with a very satisfactory accuracy. The strategy is completely automated and extremely fast, requiring only a few minutes (~10 min on average) to build a structural model of an antibody. It is based on the concept of canonical structures of antibody loops and on our understanding of the way light and heavy chains pack together.

  8. Intelligent structural optimization: Concept, Model and Methods

    International Nuclear Information System (INIS)

    Lu, Dagang; Wang, Guangyuan; Peng, Zhang

    2002-01-01

    Structural optimization has many characteristics of Soft Design, and so it is necessary to apply the experience of human experts to solving the uncertain and multidisciplinary optimization problems in large-scale and complex engineering systems. With the development of artificial intelligence (AI) and computational intelligence (CI), the theory of structural optimization is now developing in the direction of intelligent optimization. In this paper, a concept of Intelligent Structural Optimization (ISO) is proposed. Then, a design process model of ISO is put forward, in which each design sub-process model is discussed. Finally, the design methods of ISO are presented

  9. Remote sensing approach to structural modelling

    International Nuclear Information System (INIS)

    El Ghawaby, M.A.

    1989-01-01

    Remote sensing techniques are quite dependable tools for investigating geologic problems, especially those related to structural aspects. Landsat imagery provides discrimination between rock units, detection of large-scale structures such as folds and faults, as well as small-scale fabric elements such as foliation and banding. In order to fulfill the aim of a geologic application of remote sensing, some essential survey maps should be prepared from the images prior to the structural interpretation: land-use, land-form, drainage pattern, lithological unit and structural lineament maps. Afterwards, field verification should lead to the interpretation of a comprehensive structural model of the study area to apply to the target problem. To deduce such a model, there are two ways of analysis the interpreter may go through: the direct and the indirect methods. The direct one is needed in cases where the resources or targets are controlled by an obvious or exposed structural element or pattern. The indirect way is necessary for areas where the target is governed by a complicated structural pattern. Some case histories of structural modelling methods applied successfully for the exploration of radioactive minerals, iron deposits and groundwater aquifers in Egypt are presented. Progress in imagery enhancement and in the integration of remote sensing data with other geophysical and geochemical data allows a geologic interpretation to be carried out which is better than that achieved with either of the individual data sets. 9 refs

  10. Impact damages modeling in laminated composite structures

    Directory of Open Access Journals (Sweden)

    Kreculj Dragan D.

    2014-01-01

    Laminated composites have an important application in modern engineering structures. They are characterized by extraordinary properties, such as high strength, high stiffness and light weight. Nevertheless, a serious obstacle to more widespread use of these materials is their sensitivity to impact loads. Impacts cause the initiation and development of certain types of damage. Failures that occur in laminated composite structures can be intralaminar and interlaminar. To date, many simulation models have been developed for impact damage analysis in laminates. Those models can replace real and expensive testing of laminated structures with a certain accuracy. By using specialized software, the damage parameters and distributions can be determined (under certain conditions) for laminate structures. Numerical simulation of impact on composite laminates yields results valid for the analysis of these structures.

  11. On the Use of Structural Equation Models in Marketing Modeling

    NARCIS (Netherlands)

    Steenkamp, J.E.B.M.; Baumgartner, H.

    2000-01-01

    We reflect on the role of structural equation modeling (SEM) in marketing modeling and managerial decision making. We discuss some benefits provided by SEM and alert marketing modelers to several recent developments in SEM in three areas: measurement analysis, analysis of cross-sectional data, and

  12. Emulating a flexible space structure: Modeling

    Science.gov (United States)

    Waites, H. B.; Rice, S. C.; Jones, V. L.

    1988-01-01

    Control Dynamics, in conjunction with Marshall Space Flight Center, has participated in the modeling and testing of Flexible Space Structures. Through the series of configurations tested and the many techniques used for collecting, analyzing, and modeling the data, many valuable insights have been gained and important lessons learned. This paper discusses the background of the Large Space Structure program, Control Dynamics' involvement in testing and modeling of the configurations (especially the Active Control Technique Evaluation for Spacecraft (ACES) configuration), the results from these two processes, and insights gained from this work.

  13. Power mos devices: structures and modelling procedures

    Energy Technology Data Exchange (ETDEWEB)

    Rossel, P.; Charitat, G.; Tranduc, H.; Morancho, F.; Moncoqut

    1997-05-01

    In this survey, the historical evolution of power MOS transistor structures is presented and currently used devices are described. General considerations on current and voltage capabilities are discussed and configurations of popular structures are given. A synthesis of the different modelling approaches proposed in the last three years is then presented, including analytical solutions for basic electrical parameters such as threshold voltage, on-resistance, saturation and quasi-saturation effects, temperature influence and voltage handling capability. The numerical solution of basic semiconductor device equations is then briefly reviewed, along with some typical problems which can be solved this way. A compact circuit modelling method is finally explained, with emphasis on dynamic behavior modelling

  14. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.

  15. Time series modelling of overflow structures

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.

    1997-01-01

    The dynamics of a storage pipe is examined using a grey-box model based on on-line measured data. The grey-box modelling approach uses a combination of physically-based and empirical terms in the model formulation. The model provides an on-line state estimate of the overflows, pumping capacities and available storage capacity in the pipe, as well as predictions of future states. A linear overflow relation is found, differing significantly from the traditional modelling approach. This is due to complicated overflow structures in a hydraulic sense, where the overflow is governed by inertia from the inflow to the overflow structures. The capacity of a pump draining the storage pipe has been estimated for two rain events, revealing that the pump was malfunctioning during the first rain event. The grey-box modelling approach is applicable for automated on-line surveillance and control. (C) 1997 IAWQ.
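
    The physically-based part of such a grey-box model is essentially a volume balance with overflow and pumping terms; a hypothetical sketch follows (capacities, rates and the rain event are invented, the overflow relation is the linear one the abstract reports, and the real model is fitted to on-line data rather than simulated open-loop):

```python
def simulate_pipe(inflow, v_max=100.0, pump_cap=4.0, k_over=0.5, dt=1.0):
    """Volume balance for a storage pipe: dV/dt = inflow - pumping - overflow,
    with a linear overflow relation above the storage capacity v_max."""
    v, spilled = 0.0, 0.0
    trace = []
    for q in inflow:
        pump = min(pump_cap, v / dt + q)       # drain what is available
        over = k_over * max(v - v_max, 0.0)    # linear overflow relation
        v += dt * (q - pump - over)
        v = max(v, 0.0)
        spilled += dt * over
        trace.append((v, over))
    return trace, spilled

# Hypothetical rain event: 30 steps of high inflow, then dry weather.
event = [10.0] * 30 + [0.0] * 40
trace, spilled = simulate_pipe(event)
```

    In a grey-box setting the empirical parameters (here `pump_cap` and `k_over`) are estimated from measurements, which is exactly how the malfunctioning pump in the abstract could be detected: the estimated pumping capacity drops below its nominal value.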

  16. Modeling and control of flexible space structures

    Science.gov (United States)

    Wie, B.; Bryson, A. E., Jr.

    1981-01-01

    The effects of actuator and sensor locations on transfer function zeros are investigated, using uniform bars and beams as generic models of flexible space structures. It is shown how finite element codes may be used directly to calculate transfer function zeros. The impulse response predicted by finite-dimensional models is compared with the exact impulse response predicted by the infinite dimensional models. It is shown that some flexible structures behave as if there were a direct transmission between actuator and sensor (equal numbers of zeros and poles in the transfer function). Finally, natural damping models for a vibrating beam are investigated since natural damping has a strong influence on the appropriate active control logic for a flexible structure.
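
    A classical fact behind the actuator/sensor-placement discussion is that, for a collocated actuator/sensor pair on an undamped structure, the transfer-function zeros are the natural frequencies of the substructure with the measured coordinate pinned, and they interlace the poles. A two-mass sketch (masses and stiffnesses are arbitrary illustrative values):

```python
import math

m1, m2 = 1.0, 2.0   # masses of a two-DOF spring-mass chain
k1, k2 = 3.0, 1.0   # ground-to-m1 and m1-to-m2 spring stiffnesses

# Poles: squared natural frequencies, i.e. eigenvalues of M^-1 K with
# M = diag(m1, m2) and K = [[k1 + k2, -k2], [-k2, k2]].
a = (k1 + k2) / m1
b = -k2 / m1
c = -k2 / m2
d = k2 / m2
tr, det = a + d, a * d - b * c
disc = math.sqrt(tr * tr - 4.0 * det)
w1sq, w2sq = (tr - disc) / 2.0, (tr + disc) / 2.0

# Collocated actuator/sensor on mass 1: the transfer-function zero is the
# natural frequency of the substructure with mass 1 pinned, i.e. mass 2
# vibrating on spring k2 alone.
zsq = k2 / m2
```

    The interlacing `w1sq < zsq < w2sq` is what gives collocated control its robustness: pole-zero pairs alternate along the frequency axis, so the phase never falls below -180 degrees.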

  17. Quadratic Term Structure Models in Discrete Time

    OpenAIRE

    Marco Realdon

    2006-01-01

    This paper extends the results on quadratic term structure models in continuous time to the discrete time setting. The continuous time setting can be seen as a special case of the discrete time one. Recursive closed form solutions for zero coupon bonds are provided even in the presence of multiple correlated underlying factors. Pricing bond options requires simple integration. Model parameters may well be time dependent without scuppering such tractability. Model estimation does not require a r...

  18. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.
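
    The re-sampling idea mentioned above, propagating parameter uncertainty through a forward calculation, can be sketched with plain Monte Carlo on a one-degree-of-freedom model (the parameter distributions below are invented for illustration):

```python
import math, random

def natural_freq(k, m):
    """Natural frequency (rad/s) of a single-DOF oscillator."""
    return math.sqrt(k / m)

random.seed(7)
# Hypothetical parameter uncertainty: stiffness and mass known to ~5 %.
samples = [natural_freq(random.gauss(1000.0, 50.0), random.gauss(2.0, 0.1))
           for _ in range(5000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
sd = math.sqrt(var)
```

    The resulting output distribution, rather than a single nominal prediction, is what gets compared against test data in a validation exercise; the same re-sampling pattern extends directly to non-modal features of nonlinear models.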

  19. Automated protein structure modeling with SWISS-MODEL Workspace and the Protein Model Portal.

    Science.gov (United States)

    Bordoli, Lorenza; Schwede, Torsten

    2012-01-01

    Comparative protein structure modeling is a computational approach to build three-dimensional structural models for proteins using experimental structures of related protein family members as templates. Regular blind assessments of modeling accuracy have demonstrated that comparative protein structure modeling is currently the most reliable technique to model protein structures. Homology models are often sufficiently accurate to substitute for experimental structures in a wide variety of applications. Since the usefulness of a model for specific application is determined by its accuracy, model quality estimation is an essential component of protein structure prediction. Comparative protein modeling has become a routine approach in many areas of life science research since fully automated modeling systems allow also nonexperts to build reliable models. In this chapter, we describe practical approaches for automated protein structure modeling with SWISS-MODEL Workspace and the Protein Model Portal.

  20. On modeling of structured multiphase mixtures

    International Nuclear Information System (INIS)

    Dobran, F.

    1987-01-01

    The usual modeling of multiphase mixtures involves a set of conservation and balance equations of mass, momentum, energy and entropy (the basic set) constructed by an averaging procedure or postulated. The averaged models are constructed by averaging, over space or time segments, the local macroscopic field equations of each phase, whereas the postulated models are usually motivated by the single phase multicomponent mixture models. In both situations, the resulting equations yield superimposed continua models and are closed by the constitutive equations which place restrictions on the possible material response during the motion and phase change. In modeling structured multiphase mixtures, the intrinsic motion of grains or particles is modeled by adjoining to the basic set of field equations the additional balance equations, thereby placing restrictions on the motion of phases only within the imposed extrinsic and intrinsic sources. The use of additional balance equations has been primarily advocated in the postulatory theories of multiphase mixtures, and such equations are usually derived through very special assumptions about the material deformation. Nevertheless, the resulting mixture models can predict a wide variety of complex phenomena such as the Mohr-Coulomb yield criterion in granular media, the Rayleigh bubble equation, wave dispersion and dilatancy. Fundamental to the construction of structured models of multiphase mixtures are the problems pertaining to the existence and number of additional balance equations needed to model the structural characteristics of a mixture. Utilizing a volume averaging procedure, it is possible not only to derive the basic set of field equations discussed above, but also a very general set of additional balance equations for modeling the structural properties of the mixture.

  1. Hypothesis of the Disappearance of the Limits of Improvidence and Parsimony in the Function of Consumption in an Islamic Economy

    Directory of Open Access Journals (Sweden)

    محمد أحمد حسن الأفندي

    2018-04-01

    There is a rich literature on the analysis of consumption behavior from the perspective of Islamic economics. The focus of such literature has been on incorporating the effect of moral values on individuals' consumption behavior. However, studies on consumption have not paid enough heed to the analysis of the ultimate effect of faith values on the track of consumption behavior over time. This desired track of consumption involves certain hypotheses and probabilities. This study suggests a normative statement which includes the gradual disappearance of parsimony and improvidence over time. This disappearance would correct the deviation of the actual consumption of society members from the desired moderate consumption level, so as to bring households' consumption behavior to the desired level consistent with Islamic Sharia. The study emphasizes the need to develop analysis and research in two integrated directions: (i) conducting more empirical studies to examine the consistency of the normative statement with evidence from real situations, and (ii) conducting more analysis to develop a specific measure for the desired consumption levels as well as the limits of parsimony and improvidence. Keywords: Disappearance of improvidence and parsimony limits, Desired moderate consumption level, Actual consumption, Improvidence and parsimony consumption levels, Track of households' consumption behavior.

  2. Design and Modeling of Structural Joints in Precast Concrete Structures

    DEFF Research Database (Denmark)

    Sørensen, Jesper Harrild

    and in the onsite construction speed. The challenges appear in the on-site assembly phase, where structural integrity has to be ensured by in-situ cast connections in narrow zones. These connections are essential for the overall structural behavior and for this reason, strong and ductile connections...... is the orientation of the U-bar loops and the use of a double T-headed rebar in the overlapping area of the Ubars. The investigation covers several independent research topics, which in combination provides a broad knowledge of the behavior of keyed shear connections. As the first topic, the structural behavior...... the loop connection in such a way, that the tensile capacity is governed by yielding of the U-bars and not by a brittle failure of the grout. This is important in order to obtain a ductile response when the connection is loaded in shear. The main focus of the thesis is test and modeling of keyed shear...

  3. The WITCH Model. Structure, Baseline, Solutions.

    Energy Technology Data Exchange (ETDEWEB)

    Bosetti, V.; Massetti, E.; Tavoni, M.

    2007-07-01

    WITCH - World Induced Technical Change Hybrid - is a regionally disaggregated hard link hybrid global model with a neoclassical optimal growth structure (top down) and an energy input detail (bottom up). The model endogenously accounts for technological change, both through learning curves affecting prices of new vintages of capital and through R and D investments. The model features the main economic and environmental policies in each world region as the outcome of a dynamic game. WITCH belongs to the class of Integrated Assessment Models as it possesses a climate module that feeds climate changes back into the economy. In this paper we provide a thorough discussion of the model structure and baseline projections. We report detailed information on the evolution of energy demand, technology and CO2 emissions. Finally, we explicitly quantify the role of free riding in determining the emissions scenarios. (auth)

  4. Modeling Fission Product Sorption in Graphite Structures

    International Nuclear Information System (INIS)

    Szlufarska, Izabela; Morgan, Dane; Allen, Todd

    2013-01-01

    The goal of this project is to determine changes in adsorption and desorption of fission products to/from nuclear-grade graphite in response to a changing chemical environment. First, the project team will employ first-principles calculations and thermodynamic analysis to predict stability of fission products on graphite in the presence of structural defects commonly observed in very high-temperature reactor (VHTR) graphites. Desorption rates will be determined as a function of partial pressure of oxygen and iodine, relative humidity, and temperature. They will then carry out experimental characterization to determine the statistical distribution of structural features. This structural information will yield distributions of binding sites to be used as an input for a sorption model. Sorption isotherms calculated under this project will contribute to understanding of the physical bases of the source terms that are used in higher-level codes that model fission product transport and retention in graphite. The project will include the following tasks: Perform structural characterization of the VHTR graphite to determine crystallographic phases, defect structures and their distribution, volume fraction of coke, and amount of sp2 versus sp3 bonding. This information will be used as guidance for ab initio modeling and as input for sorptivity models; Perform ab initio calculations of binding energies to determine stability of fission products on the different sorption sites present in nuclear graphite microstructures. The project will use density functional theory (DFT) methods to calculate binding energies in vacuum and in oxidizing environments. The team will also calculate stability of iodine complexes with fission products on graphite sorption sites; Model graphite sorption isotherms to quantify concentration of fission products in graphite.
The binding energies will be combined with a Langmuir isotherm statistical model to predict the sorbed concentration of fission products.
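A minimal sketch of how an ab initio binding energy can feed a Langmuir isotherm to give a fractional site occupancy (the equilibrium-constant prefactor and reference pressure below are placeholder assumptions; the project's actual statistical model is not reproduced here):

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def langmuir_coverage(binding_energy_ev, partial_pressure_pa, temperature_k,
                      reference_pressure_pa=1.0e5):
    """Fractional occupancy of one site type from a Langmuir isotherm.

    The equilibrium constant is taken as exp(E_b / kT) relative to a
    reference pressure; the prefactor stands in for the real
    vibrational/entropic contributions.
    """
    k_eq = math.exp(binding_energy_ev / (KB_EV * temperature_k))
    x = k_eq * partial_pressure_pa / reference_pressure_pa
    return x / (1.0 + x)

# Stronger binding at the same pressure and temperature -> higher coverage.
low = langmuir_coverage(0.5, 1.0, 1200.0)
high = langmuir_coverage(1.5, 1.0, 1200.0)
assert 0.0 < low < high < 1.0
```

The sorbed concentration would then be the coverage multiplied by the site density obtained from the structural characterization, summed over the distribution of site types.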

  5. Galactic models with variable spiral structure

    International Nuclear Information System (INIS)

    James, R.A.; Sellwood, J.A.

    1978-01-01

    A series of three-dimensional computer simulations of disc galaxies has been run in which the self-consistent potential of the disc stars is supplemented by that arising from a small uniform Population II sphere. The models show variable spiral structure, which is more pronounced for thin discs. In addition, the thin discs form weak bars. In one case variable spiral structure associated with this bar has been seen. The relaxed discs are cool outside resonance regions. (author)

  6. Outlier Detection in Structural Time Series Models

    DEFF Research Database (Denmark)

    Marczak, Martyna; Proietti, Tommaso

    investigate via Monte Carlo simulations how this approach performs for detecting additive outliers and level shifts in the analysis of nonstationary seasonal time series. The reference model is the basic structural model, featuring a local linear trend, possibly integrated of order two, stochastic seasonality......Structural change affects the estimation of economic signals, like the underlying growth rate or the seasonally adjusted series. An important issue, which has attracted a great deal of attention also in the seasonal adjustment literature, is its detection by an expert procedure. The general......–to–specific approach to the detection of structural change, currently implemented in Autometrics via indicator saturation, has proven to be both practical and effective in the context of stationary dynamic regression models and unit–root autoregressions. By focusing on impulse– and step–indicator saturation, we...

  7. Statistical Analysis and Modelling of Olkiluoto Structures

    International Nuclear Information System (INIS)

    Hellae, P.; Vaittinen, T.; Saksa, P.; Nummela, J.

    2004-11-01

    Posiva Oy is carrying out investigations for the disposal of the spent nuclear fuel at the Olkiluoto site in SW Finland. The investigations have focused on the central part of the island. The layout design of the entire repository requires characterization of notably larger areas and must rely at least at the current stage on borehole information from a rather sparse network and on the geophysical soundings providing information outside and between the holes. In this work, the structural data according to the current version of the Olkiluoto bedrock model is analyzed. The bedrock model relies much on the borehole data although results of the seismic surveys and, for example, pumping tests are used in determining the orientation and continuation of the structures. Especially in the analysis, questions related to the frequency of structures and size of the structures are discussed. The structures observed in the boreholes are mainly dipping gently to the southeast. About 9 % of the sample length belongs to structures. The proportion is higher in the upper parts of the rock. The number of fracture and crushed zones seems not to depend greatly on the depth, whereas the hydraulic features concentrate on the depth range above -100 m. Below level -300 m, the hydraulic conductivity occurs in connection of fractured zones. Especially the hydraulic features, but also fracture and crushed zones often occur in groups. The frequency of the structure (area of structures per total volume) is estimated to be of the order of 1/100m. The size of the local structures was estimated by calculating the intersection of the zone to the nearest borehole where the zone has not been detected. Stochastic models using the Fracman software by Golder Associates were generated based on the bedrock model data complemented with the magnetic ground survey data. The seismic surveys (from boreholes KR5, KR13, KR14, and KR19) were used as alternative input data. The generated models were tested by

  8. Exploring Social Structures in Extended Team Model

    DEFF Research Database (Denmark)

    Zahedi, Mansooreh; Ali Babar, Muhammad

    2013-01-01

    Extended Team Model (ETM) as a type of offshore outsourcing is increasingly becoming a popular mode of Global Software Development (GSD). There is little knowledge about the social structures in ETM and their impact on collaboration. Within a large interdisciplinary project to develop the next...... generation of GSD technologies, we are exploring the role of social structures to support collaboration. This paper reports some details of our research design and initial findings about the mechanisms to support social structures and their impact on collaboration in an ETM....

  9. Parsimonious classification of binary lacunarity data computed from food surface images using kernel principal component analysis and artificial neural networks.

    Science.gov (United States)

    Iqbal, Abdullah; Valous, Nektarios A; Sun, Da-Wen; Allen, Paul

    2011-02-01

    Lacunarity is about quantifying the degree of spatial heterogeneity in the visual texture of imagery through the identification of the relationships between patterns and their spatial configurations in a two-dimensional setting. The computed lacunarity data can designate a mathematical index of spatial heterogeneity, therefore the corresponding feature vectors should possess the necessary inter-class statistical properties that would enable them to be used for pattern recognition purposes. The objective of this study was to construct a supervised parsimonious classification model of binary lacunarity data, computed by Valous et al. (2009) from pork ham slice surface images, with the aid of kernel principal component analysis (KPCA) and artificial neural networks (ANNs), using a portion of informative salient features. At first, the dimension of the initial space (510 features) was reduced by 90% in order to avoid any noise effects in the subsequent classification. Then, using KPCA, the first nineteen kernel principal components (99.04% of total variance) were extracted from the reduced feature space, and were used as input in the ANN. An adaptive feedforward multilayer perceptron (MLP) classifier was employed to obtain a suitable mapping from the input dataset. The correct classification percentages for the training, test and validation sets were 86.7%, 86.7%, and 85.0%, respectively. The results confirm that the classification performance was satisfactory. The binary lacunarity spatial metric captured relevant information that provided a good level of differentiation among pork ham slice images. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
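A pipeline of this shape, kernel PCA feeding a feedforward MLP, can be sketched with scikit-learn (the synthetic data, kernel choice, and network size below are illustrative assumptions, not the study's settings):

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for the reduced lacunarity feature vectors (51 features after
# the 90% reduction of the original 510); two synthetic image classes.
X = np.vstack([rng.normal(0.0, 1.0, (60, 51)),
               rng.normal(0.8, 1.0, (60, 51))])
y = np.repeat([0, 1], 60)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# Nineteen kernel principal components feeding a feedforward MLP, as in the
# abstract; the RBF kernel, gamma, and hidden-layer size are assumptions.
clf = make_pipeline(
    KernelPCA(n_components=19, kernel="rbf", gamma=0.01),
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))
```

On real lacunarity vectors the kernel parameters would need tuning; the point of the sketch is only the KPCA-then-MLP composition.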

  10. Evolving the structure of hidden Markov Models

    DEFF Research Database (Denmark)

    won, K. J.; Prugel-Bennett, A.; Krogh, A.

    2006-01-01

    A genetic algorithm (GA) is proposed for finding the structure of hidden Markov Models (HMMs) used for biological sequence analysis. The GA is designed to preserve biologically meaningful building blocks. The search through the space of HMM structures is combined with optimization of the emission...... and transition probabilities using the classic Baum-Welch algorithm. The system is tested on the problem of finding the promoter and coding region of C. jejuni. The resulting HMM has a superior discrimination ability to a handcrafted model that has been published in the literature....

  11. Principles and practice of structural equation modeling

    CERN Document Server

    Kline, Rex B

    2015-01-01

    Emphasizing concepts and rationale over mathematical minutiae, this is the most widely used, complete, and accessible structural equation modeling (SEM) text. Continuing the tradition of using real data examples from a variety of disciplines, the significantly revised fourth edition incorporates recent developments such as Pearl's graphing theory and the structural causal model (SCM), measurement invariance, and more. Readers gain a comprehensive understanding of all phases of SEM, from data collection and screening to the interpretation and reporting of the results. Learning is enhanced by ex

  12. The efficiency of different search strategies in estimating parsimony jackknife, bootstrap, and Bremer support

    Directory of Open Access Journals (Sweden)

    Müller Kai F

    2005-10-01

    Full Text Available Abstract Background For parsimony analyses, the most common way to estimate confidence is by resampling plans (nonparametric bootstrap, jackknife) and Bremer support (decay indices). The recent literature reveals that parameter settings that are quite commonly employed are not those that are recommended by theoretical considerations and by previous empirical studies. The optimal search strategy to be applied during resampling was previously addressed solely via standard search strategies available in PAUP*. The question of a compromise between search extensiveness and improved support accuracy for Bremer support received even less attention. A set of experiments was conducted on different datasets to find an empirical cut-off point at which increased search extensiveness does not significantly change Bremer support and jackknife or bootstrap proportions any more. Results For the number of replicates needed for accurate estimates of support in resampling plans, a diagram is provided that helps to address the question whether apparently different support values really differ significantly. It is shown that the use of random addition cycles and parsimony ratchet iterations during bootstrapping does not translate into higher support, nor does any extension of the search extensiveness beyond the rather moderate effort of TBR (tree bisection and reconnection branch swapping plus saving one tree per replicate. Instead, in case of very large matrices, saving more than one shortest tree per iteration and using a strict consensus tree of these yields decreased support compared to saving only one tree. This can be interpreted as a small risk of overestimating support but should be more than compensated by other factors that counteract an enhanced type I error. 
With regard to Bremer support, a rule of thumb can be derived stating that not much is gained relative to the surplus computational effort when searches are extended beyond 20 ratchet iterations per
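The bootstrap-proportion machinery discussed above can be illustrated with a deliberately tiny sketch (the majority vote below is a stand-in for a real per-replicate parsimony search such as TBR swapping; all numbers are hypothetical):

```python
import random

random.seed(1)

# Toy alignment columns: 1 = character supports clade (A,B), 0 = conflicts.
# In a real analysis each pseudoreplicate is run through a full tree search;
# here a simple majority vote over the resampled characters stands in for it.
characters = [1] * 14 + [0] * 6

def bootstrap_support(chars, replicates=1000):
    hits = 0
    for _ in range(replicates):
        sample = random.choices(chars, k=len(chars))  # resample with replacement
        if sum(sample) > len(sample) / 2:             # clade "recovered"
            hits += 1
    return hits / replicates

print(bootstrap_support(characters))
```

With 14 of 20 characters supporting the clade, the support converges near 0.95; the number of replicates controls only the precision of that estimate, which is the point the diagram in the paper addresses.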

  13. A Structural Modeling Approach to a Multilevel Random Coefficients Model.

    Science.gov (United States)

    Rovine, Michael J.; Molenaar, Peter C. M.

    2000-01-01

    Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)

  14. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    Science.gov (United States)

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.

  15. Modelling and analysing oriented fibrous structures

    International Nuclear Information System (INIS)

    Rantala, M; Lassas, M; Siltanen, S; Sampo, J; Takalo, J; Timonen, J

    2014-01-01

    A mathematical model for fibrous structures using a direction dependent scaling law is presented. The orientation of fibrous nets (e.g. paper) is analysed with a method based on the curvelet transform. The curvelet-based orientation analysis has been tested successfully on real data from paper samples: the major directions of fibre-fibre orientation can apparently be recovered. Similar results are achieved in tests on data simulated by the new model, allowing a comparison with ground truth.

  16. Modeling accelerator structures and RF components

    International Nuclear Information System (INIS)

    Ko, K.; Ng, C.K.; Herrmannsfeldt, W.B.

    1993-03-01

    Computer modeling has become an integral part of the design and analysis of accelerator structures and RF components. Sophisticated 3D codes, powerful workstations and timely theory support all contributed to this development. We will describe our modeling experience with these resources and discuss their impact on ongoing work at SLAC. Specific examples from R&D on a future linear collider and a proposed e+e- storage ring will be included.

  17. Modelling oil price volatility with structural breaks

    International Nuclear Information System (INIS)

    Salisu, Afees A.; Fasanya, Ismail O.

    2013-01-01

    In this paper, we provide two main innovations: (i) we analyze oil prices of two prominent markets namely West Texas Intermediate (WTI) and Brent using the two recently developed tests by Narayan and Popp (2010) and Liu and Narayan (2010), both of which allow for two structural breaks in the data series; and (ii) the latter method is modified to include both symmetric and asymmetric volatility models. We identify two structural breaks that occur in 1990 and 2008 which coincidentally correspond to the Iraqi/Kuwait conflict and the global financial crisis, respectively. We find evidence of persistence and leverage effects in the oil price volatility. While further extensions can be pursued, the consideration of asymmetric effects as well as structural breaks should not be jettisoned when modelling oil price volatility. - Highlights: ► We analyze oil price volatility using NP (2010) and LN (2010) tests. ► We modify the LN (2010) to account for leverage effects in oil price. ► We find two structural breaks that reflect major global crisis in the oil market. ► We find evidence of persistence and leverage effects in oil price volatility. ► Leverage effects and structural breaks are fundamental in oil price modelling.

  18. A Basic Bivariate Structure of Personality Attributes Evident Across Nine Languages.

    Science.gov (United States)

    Saucier, Gerard; Thalmayer, Amber Gayle; Payne, Doris L; Carlson, Robert; Sanogo, Lamine; Ole-Kotikash, Leonard; Church, A Timothy; Katigbak, Marcia S; Somer, Oya; Szarota, Piotr; Szirmák, Zsofia; Zhou, Xinyue

    2014-02-01

    Here, two studies seek to characterize a parsimonious common-denominator personality structure with optimal cross-cultural replicability. Personality differences are observed in all human populations and cultures, but lexicons for personality attributes contain so many distinctions that parsimony is lacking. Models stipulating the most important attributes have been formulated by experts or by empirical studies drawing on experience in a very limited range of cultures. Factor analyses of personality lexicons of nine languages of diverse provenance (Chinese, Korean, Filipino, Turkish, Greek, Polish, Hungarian, Maasai, and Senoufo) were examined, and their common structure was compared to that of several prominent models in psychology. A parsimonious bivariate model showed evidence of substantial convergence and ubiquity across cultures. Analyses involving key markers of these dimensions in English indicate that they are broad dimensions involving the overlapping content of the interpersonal circumplex, models of communion and agency, and morality/warmth and competence. These "Big Two" dimensions, Social Self-Regulation and Dynamism, provide a common-denominator model involving the two most crucial axes of personality variation, ubiquitous across cultures. The Big Two might serve as an umbrella model serving to link diverse theoretical models and associated research literatures. © 2013 Wiley Periodicals, Inc.

  19. Mechanical Model Development for Composite Structural Supercapacitors

    Science.gov (United States)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Santiago, Diana; Bednarcyk, Brett A.

    2016-01-01

    Novel composite structural supercapacitor concepts have recently been developed as a means both to store electrical charge and to provide modest mechanical load carrying capability. Double-layer composite supercapacitors are often fabricated by impregnating a woven carbon fiber fabric, which serves as the electrodes, with a structural polymer electrolyte. Polypropylene or a glass fabric is often used as the separator material. Recent research has been primarily limited to evaluating these composites experimentally. In this study, mechanical models based on the Multiscale Generalized Method of Cells (MSGMC) were developed and used to calculate the shear and tensile properties and response of two composite structural supercapacitors from the literature. The modeling approach was first validated against traditional composite laminate data. MSGMC models for composite supercapacitors were developed, and accurate elastic shear/tensile properties were obtained. It is envisioned that further development of the models presented in this work will facilitate the design of composite components for aerospace and automotive applications and can be used to screen candidate constituent materials for inclusion in future composite structural supercapacitor concepts.

  20. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines and assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.

  1. Structured Event-B Models and Proofs

    DEFF Research Database (Denmark)

    Hallerstede, Stefan

    2010-01-01

    Event-B does not provide specific support for the modelling of problems that require some structuring, such as, local variables or sequential ordering of events. All variables need to be declared globally and sequential ordering of events can only be achieved by abstract program counters. This ha...

  2. AN EFFICIENT STRUCTURAL REANALYSIS MODEL FOR ...

    African Journals Online (AJOL)

    be required if complete and exact analysis would be carried out. This paper ... qualities even under significantly large design modifications. A numerical example has been presented to show potential capabilities of the proposed model. INTRODUCTION ... equilibrium conditions in the structural system and the subsequent ...

  3. Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion

    KAUST Repository

    Komatitsch, Dimitri

    2016-06-13

    We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.
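The essence of the reordering, replaying buffered forward states during the adjoint sweep instead of time-reversing the dissipative simulation, can be caricatured as follows (a toy scalar scheme, not the authors' solver; for brevity this sketch buffers every step, whereas the paper's contribution is achieving the same result with parsimonious storage):

```python
import numpy as np

nt, n = 8, 4

def forward_step(u, it):
    return 0.9 * u + 0.1 * it   # toy dissipative update (cannot be reversed stably)

def adjoint_step(v, it):
    return 0.9 * v + 0.05 * it  # toy adjoint update

# Forward sweep: fill the memory buffer with forward states.
buffer = []
u = np.zeros(n)
for it in range(nt):
    u = forward_step(u, it)
    buffer.append(u.copy())

# Adjoint sweep: walk the buffer backwards, accumulating a kernel from the
# exact forward values; no time reversal of the dissipative run is needed.
kernel = np.zeros(n)
v = np.zeros(n)
for it in reversed(range(nt)):
    v = adjoint_step(v, it)
    kernel += v * buffer[it]
print(kernel)
```

The total number of time steps is the same as in the naive approach; what the buffering changes is where the forward field comes from during the backward pass.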

  4. An integer programming formulation of the parsimonious loss of heterozygosity problem.

    Science.gov (United States)

    Catanzaro, Daniele; Labbé, Martine; Halldórsson, Bjarni V

    2013-01-01

    A loss of heterozygosity (LOH) event occurs when, by the laws of Mendelian inheritance, an individual should be heterozygote at a given site but, due to a deletion polymorphism, is not. Deletions play an important role in human disease and their detection could provide fundamental insights for the development of new diagnostics and treatments. In this paper, we investigate the parsimonious loss of heterozygosity problem (PLOHP), i.e., the problem of partitioning suspected polymorphisms from a set of individuals into a minimum number of deletion areas. Specifically, we generalize Halldórsson et al.'s work by providing a more general formulation of the PLOHP and by showing how one can incorporate different recombination rates and prior knowledge about the locations of deletions. Moreover, we show that the PLOHP can be formulated as a specific version of the clique partition problem in a particular class of graphs called undirected catch-point interval graphs and we prove its general NP-hardness. Finally, we provide a state-of-the-art integer programming (IP) formulation and strengthening valid inequalities to exactly solve real instances of the PLOHP containing up to 9,000 individuals and 3,000 SNPs. Our results give perspectives on the mathematics of the PLOHP and suggest new directions on the development of future efficient exact solution approaches.
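The authors' strengthened formulation is not reproduced here, but the underlying clique-partition IP can be stated in a standard textbook form (this generic version, with an assumed upper bound C on the number of deletion areas, is illustrative only):

```latex
\begin{align}
\min \sum_{c=1}^{C} z_c \quad \text{subject to} \quad
  & \sum_{c=1}^{C} x_{ic} = 1   && \forall i \in V, \\
  & x_{ic} \le z_c              && \forall i \in V,\; c = 1,\dots,C, \\
  & x_{ic} + x_{jc} \le 1       && \forall \{i,j\} \notin E,\; c = 1,\dots,C, \\
  & x_{ic},\, z_c \in \{0,1\},
\end{align}
```

where V is the set of suspected polymorphisms, E the edge set of the catch-point interval graph, x_{ic} = 1 places vertex i in deletion area c, and z_c marks area c as used. The third constraint forbids two non-adjacent vertices from sharing an area, so every used area induces a clique, and the objective minimizes the number of deletion areas.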

  5. Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion

    KAUST Repository

    Komatitsch, Dimitri; Xie, Zhinan; Bozdağ, Ebru; de Andrade, Elliott Sales; Peter, Daniel; Liu, Qinya; Tromp, Jeroen

    2016-01-01

    We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.

  6. Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion

    Science.gov (United States)

    Komatitsch, Dimitri; Xie, Zhinan; Bozdaǧ, Ebru; Sales de Andrade, Elliott; Peter, Daniel; Liu, Qinya; Tromp, Jeroen

    2016-09-01

    We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.

  7. An objective and parsimonious approach for classifying natural flow regimes at a continental scale

    Science.gov (United States)

    Archfield, S. A.; Kennen, J.; Carlisle, D.; Wolock, D.

    2013-12-01

    Hydroecological stream classification--the process of grouping streams by similar hydrologic responses and, thereby, similar aquatic habitat--has been widely accepted and is often one of the first steps towards developing ecological flow targets. Despite its importance, the last national classification of streamgauges was completed about 20 years ago. A new classification of 1,534 streamgauges in the contiguous United States is presented using a novel and parsimonious approach to understand similarity in ecological streamflow response. This new classification approach uses seven fundamental daily streamflow statistics (FDSS) rather than winnowing down an uncorrelated subset from 200 or more ecologically relevant streamflow statistics (ERSS) commonly used in hydroecological classification studies. The results of this investigation demonstrate that the distributions of 33 tested ERSS are consistently different among the classes derived from the seven FDSS. It is further shown that classification based solely on the 33 ERSS generally does a poorer job in grouping similar streamgauges than the classification based on the seven FDSS. This new classification approach has the additional advantages of overcoming some of the subjectivity associated with the selection of the classification variables and provides a set of robust continental-scale classes of US streamgauges.
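The classification pipeline, a small set of summary statistics per gauge followed by clustering, can be sketched as follows (the seven statistics and the synthetic flow records below are illustrative stand-ins; the paper's exact FDSS definitions and class count are not reproduced):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

def fdss(daily_q):
    """Seven simple daily-flow summary statistics (illustrative choices)."""
    q = np.asarray(daily_q, dtype=float)
    logq = np.log(q + 0.01)
    return np.array([
        logq.mean(),                       # magnitude
        logq.std(),                        # variability
        np.percentile(q, 10),              # low flow
        np.percentile(q, 90),              # high flow
        (np.diff(q) > 0).mean(),           # rise frequency
        np.corrcoef(q[:-1], q[1:])[0, 1],  # day-to-day persistence
        q.max() / q.mean(),                # flashiness
    ])

# Synthetic one-year records for six "gauges": stable vs flashy regimes.
gauges = [rng.gamma(shape=s, scale=10.0, size=365)
          for s in [8, 8, 8, 0.5, 0.5, 0.5]]
X = np.array([fdss(g) for g in gauges])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```

In practice the features would be standardized before clustering and the number of classes chosen by a validity criterion; the sketch only shows the statistics-then-cluster shape of the approach.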

  8. [Hierarchy structuring for mammography technique by interpretive structural modeling method].

    Science.gov (United States)

    Kudo, Nozomi; Kurowarabi, Kunio; Terashita, Takayoshi; Nishimoto, Naoki; Ogasawara, Katsuhiko

    2009-10-20

    Participation in screening mammography is currently desired in Japan because of the increase in breast cancer morbidity. However, the pain and discomfort of mammography is recognized as a significant deterrent for women considering this examination. Thus quick procedures, sufficient experience, and advanced skills are required for radiologic technologists. The aim of this study was to make the key points of the imaging technique explicit and to help technologists understand this complicated procedure. We interviewed 3 technologists who were highly skilled in mammography, and 14 factors were retrieved by using brainstorming and the KJ method. We then applied Interpretive Structural Modeling (ISM) to the factors and developed a hierarchical concept structure. The result showed a six-layer hierarchy whose top node was explanation of the entire procedure of mammography. Male technologists were identified as a negative factor. Factors concerned with explanation were at the upper nodes. We gave attention to X-ray techniques and considerations. The findings will help beginners improve their skills.
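The ISM step itself, building a reachability matrix from pairwise "factor i influences factor j" judgments and peeling off hierarchy levels, can be sketched as follows (the three-factor matrix is hypothetical, not the study's 14 factors):

```python
# Minimal ISM level partitioning: transitive closure of the influence
# matrix, then repeatedly extract the factors that reach no other
# remaining factor (except mutually reachable ones). Top level comes first.
def ism_levels(adj):
    n = len(adj)
    reach = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):                     # Warshall transitive closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    remaining, levels = set(range(n)), []
    while remaining:
        level = {i for i in remaining
                 if all(j not in remaining or not reach[i][j] or reach[j][i]
                        for j in range(n))}
        levels.append(sorted(level))
        remaining -= level
    return levels

# Hypothetical chain: factor 0 influences 1, which influences 2,
# so factor 2 sits at the top of the hierarchy.
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
print(ism_levels(adj))
```

For the chain above the partition comes out as [[2], [1], [0]]: the most-influenced factor forms the top layer, matching the ISM convention in which upstream driving factors sit at the bottom of the digraph.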

  9. Compactness in the Euler-lattice: A parsimonious pitch spelling model

    NARCIS (Netherlands)

    Honingh, A.K.

    2009-01-01

    Compactness and convexity have been shown to represent important principles in music, reflecting a notion of consonance in scales and chords, and have been successfully applied to well-known problems from music research. In this paper, the notion of compactness is applied to the problem of pitch

  10. A parsimonious model to forecast financial distress, based on audit evidence

    Directory of Open Access Journals (Sweden)

    Carlos Piñeiro Sánchez

    2013-01-01

    Full Text Available Este artículo proporciona evidencia de que los informes de auditoría contienen evidencias relevantes para inferir la existencia de disfunciones financieras latentes. A diferencia de trabajos previos, que han estudiado el fallo financiero en grandes empresas cotizadas, fundamentalmente de EE.UU., nuestro trabajo se centra en Pymes españolas sometidas a tensiones financieras latentes. Nuestros resultados indican que la auditoría de las Pymes financieramente desequilibradas posee varias características distintivas: una tasa mayor de rotación de auditores, más informes con salvedades e incumplimientos de los plazos legales para aprobar y registrar las cuentas anuales. Empleamos estas evidencias para construir y verificar un modelo simple, o parsimonioso, capaz de anticipar eficazmente esas disfunciones. Se discuten las implicaciones para la independencia del auditor, la calidad de la información, y el pronóstico del fallo.

  11. A Maximum Parsimony Model to Reconstruct Phylogenetic Network in Honey Bee Evolution

    OpenAIRE

    Usha Chouhan; K. R. Pardasani

    2007-01-01

    Phylogenies, the evolutionary histories of groups of species, are among the most widely used tools throughout the life sciences, as well as objects of research within systematics and evolutionary biology. Every phylogenetic analysis produces trees. These trees represent the evolutionary histories of many groups of organisms, yet some organisms evolve in ways trees cannot capture: bacteria, because of horizontal gene transfer, and plants, because of hybridization. The process of gene transfer in bacteria and hyb...
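For the small-parsimony subproblem — counting the minimum number of character changes on one fixed tree — Fitch's algorithm is the classic dynamic program that parsimony-based network methods like the one in this record build upon. A sketch on a nested-tuple binary tree (the tree and states are invented):

```python
def fitch(tree):
    """Fitch small parsimony: return (state_set, minimum_change_count).

    Leaves are single character states; internal nodes are (left, right) tuples.
    """
    if not isinstance(tree, tuple):
        return {tree}, 0
    left_states, left_cost = fitch(tree[0])
    right_states, right_cost = fitch(tree[1])
    common = left_states & right_states
    if common:                       # children agree: no extra change needed
        return common, left_cost + right_cost
    return left_states | right_states, left_cost + right_cost + 1

# invented four-leaf example over nucleotide states
tree = ((("A", "A"), ("A", "G")), ("G", "G"))
states, changes = fitch(tree)        # minimum number of substitutions on this tree
```

Maximum parsimony then searches over tree (or network) topologies for the one minimizing this count; the search, not the scoring, is the hard part.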

  12. Multiple-lesion track-structure model

    International Nuclear Information System (INIS)

    Wilson, J.W.; Cucinotta, F.A.; Shinn, J.L.

    1992-03-01

    A multilesion cell kinetic model is derived, and radiation kinetic coefficients are related to the Katz track structure model. The repair-related coefficients are determined from the delayed plating experiments of Yang et al. for the C3H10T1/2 cell system. The model agrees well with the x-ray and heavy ion experiments of Yang et al. for the immediate plating, delayed plating, and fractionated exposure protocols employed by Yang. A study is made of the effects of target fragments in energetic proton exposures and of the repair-deficient target-fragment-induced lesions.

  13. Structural Equation Modeling with the Smartpls

    Directory of Open Access Journals (Sweden)

    Christian M. Ringle

    2014-05-01

    Full Text Available The objective of this article is to present a didactic example of structural equation modeling using the software SmartPLS 2.0 M3. The program uses the partial least squares method and seeks to address the following situations frequently observed in marketing research: absence of symmetric distributions of the measured variables, theories still in an early phase or with little "consolidation", formative models, and/or a limited amount of data. The growing use of SmartPLS has demonstrated the robustness of the method and its applicability in the areas being studied.

  14. Modelling the structure of complex networks

    DEFF Research Database (Denmark)

    Herlau, Tue

    networks has been independently studied as mathematical objects in their own right. As such, there has been both an increased demand for statistical methods for complex networks as well as a quickly growing mathematical literature on the subject. In this dissertation we explore aspects of modelling complex....... The next chapters will treat some of the various symmetries, representer theorems and probabilistic structures often deployed in the modelling of complex networks, the construction of sampling methods and various network models. The introductory chapters will serve to provide context for the included written...

  15. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...... in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance....
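The classification idea in this record — one Markov chain per structure class, with an unknown segment assigned to the class whose chain gives it the highest likelihood — can be sketched directly. The alphabet and training fragments below are toys invented for the sketch, not real protein data:

```python
from collections import Counter
import math

AA = "ALVI"                          # toy residue alphabet for the sketch

def transition_logprobs(seqs):
    """First-order transition log-probabilities with add-one smoothing."""
    counts = Counter()
    for s in seqs:
        counts.update(zip(s, s[1:]))
    logp = {}
    for a in AA:
        total = sum(counts[(a, b)] for b in AA) + len(AA)
        for b in AA:
            logp[(a, b)] = math.log((counts[(a, b)] + 1) / total)
    return logp

# one Markov model per secondary-structure class, trained on invented fragments
models = {
    "helix": transition_logprobs(["ALALAL", "LALA"]),
    "sheet": transition_logprobs(["VIVIVI", "IVIV"]),
}

def classify(fragment):
    """Assign the class whose chain gives the fragment the highest log-likelihood."""
    return max(models, key=lambda c: sum(models[c][(a, b)]
                                         for a, b in zip(fragment, fragment[1:])))
```

The directional information the authors mention corresponds to the fact that the transition matrix is read left-to-right along the chain; a reversed model can be trained and combined the same way.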

  16. Design of scaled down structural models

    Science.gov (United States)

    Simitses, George J.

    1994-07-01

    In the aircraft industry, full scale and large component testing is a very necessary, time consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled down models in testing and use of the model test results in order to predict the behavior of the larger system, referred to herein as prototype. This viewgraph presentation provides justifications and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled down model and its prototype. Thus, scaled down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. Establishment of similarity conditions, based on the direct use of the governing equations, is discussed and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of vibrational response of the same rectangular plates. Extensions and future tasks are also described.

  17. Flavor structure of warped extra dimension models

    International Nuclear Information System (INIS)

    Agashe, Kaustubh; Perez, Gilad; Soni, Amarjit

    2005-01-01

    We recently showed that warped extra-dimensional models with bulk custodial symmetry and few-TeV Kaluza-Klein (KK) masses lead to striking signals at B factories. In this paper, using a spurion analysis, we systematically study the flavor structure of models that belong to the above class. In particular we find that the profiles of the zero modes, which are similar in all these models, essentially control the underlying flavor structure. This implies that our results are robust and model independent in this class of models. We discuss in detail the origin of the signals in B physics. We also briefly study other new physics signatures that arise in rare K decays (K→πνν), in rare top decays [t→cγ(Z, gluon)], and the possibility of CP asymmetries in D⁰ decays to CP eigenstates such as K_Sπ⁰ and others. Finally we demonstrate that with light KK masses, ∼3 TeV, the above class of models with anarchic 5D Yukawas has a 'CP problem', since contributions to the neutron electric dipole moment are roughly 20 times larger than the current experimental bound. Using the AdS/CFT correspondence, these extra-dimensional models are dual to a purely 4D strongly coupled conformal Higgs sector, thus enhancing their appeal

  18. Flavor Structure of Warped Extra Dimension Models

    International Nuclear Information System (INIS)

    Agashe, Kaustubh; Perez, Gilad; Soni, Amarjit

    2004-01-01

    We recently showed, in HEP-PH--0406101, that warped extra-dimensional models with bulk custodial symmetry and few-TeV KK masses lead to striking signals at B factories. In this paper, using a spurion analysis, we systematically study the flavor structure of models that belong to the above class. In particular we find that the profiles of the zero modes, which are similar in all these models, essentially control the underlying flavor structure. This implies that our results are robust and model independent in this class of models. We discuss in detail the origin of the signals in B physics. We also briefly study other NP signatures that arise in rare K decays (K → πνν), in rare top decays [t → cγ(Z, gluon)] and the possibility of CP asymmetries in D⁰ decays to CP eigenstates such as K_Sπ⁰ and others. Finally we demonstrate that with light KK masses, ∼3 TeV, the above class of models with anarchic 5D Yukawas has a 'CP problem', since contributions to the neutron electric dipole moment are roughly 20 times larger than the current experimental bound. Using the AdS/CFT correspondence, these extra-dimensional models are dual to a purely 4D strongly coupled conformal Higgs sector, thus enhancing their appeal

  19. A micromagnetic study of domain structure modeling

    International Nuclear Information System (INIS)

    Matsuo, Tetsuji; Mimuro, Naoki; Shimasaki, Masaaki

    2008-01-01

    To develop a mesoscopic model of magnetic-domain behavior, a domain structure model (DSM) was examined and compared with a micromagnetic simulation. The domain structure in this model is given by several domains with uniform magnetization vectors separated by domain walls. The directions of the magnetization vectors and the locations of the domain walls are determined so as to minimize the total magnetic energy of the material. The DSM was modified to improve its ability to represent domain behavior: the domain wall energy is multiplied by a vanishing factor to represent the disappearance of a magnetic domain, and the sequential quadratic programming procedure is divided into two steps to improve the energy minimization process. A comparison with micromagnetic simulation shows that the modified DSM improves the representation accuracy of the magnetization process.

  20. Nonparametric Transfer Function Models

    Science.gov (United States)

    Liu, Jun M.; Chen, Rong; Yao, Qiwei

    2009-01-01

    In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584
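The core idea of this record — a smooth, unknown transfer function estimated nonparametrically from input/output series while the noise is serially correlated — can be sketched with a Nadaraya-Watson kernel estimate. This ignores the joint ARMA estimation the paper performs; the data, the true function `tanh`, and the bandwidth are all illustrative:

```python
import numpy as np

def nw_estimate(x, y, grid, h=0.3):
    """Nadaraya-Watson (Gaussian kernel) regression of y on x at grid points."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(1)

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 400)              # "input" series
e = np.zeros(400)
for t in range(1, 400):                  # AR(1) noise, mimicking the ARMA setting
    e[t] = 0.5 * e[t - 1] + 0.1 * rng.normal()
y = np.tanh(x) + e                       # "output" = smooth transfer + noise

grid = np.array([-1.0, 0.0, 1.0])
fhat = nw_estimate(x, y, grid)           # pointwise estimate of the transfer function
```

In the paper the ARMA parameters of the noise are estimated jointly with the transfer function, which whitens the residuals and improves finite-sample efficiency; the sketch above is the purely nonparametric first step.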

  1. Exploratory structural equation modeling of personality data.

    Science.gov (United States)

    Booth, Tom; Hughes, David J

    2014-06-01

    The current article compares the use of exploratory structural equation modeling (ESEM) as an alternative to confirmatory factor analytic (CFA) models in personality research. We compare model fit, factor distinctiveness, and criterion associations of factors derived from ESEM and CFA models. In Sample 1 (n = 336) participants completed the NEO-FFI, the Trait Emotional Intelligence Questionnaire-Short Form, and the Creative Domains Questionnaire. In Sample 2 (n = 425) participants completed the Big Five Inventory and the depression and anxiety scales of the General Health Questionnaire. ESEM models provided better fit than CFA models, but ESEM solutions did not uniformly meet cutoff criteria for model fit. Factor scores derived from ESEM and CFA models correlated highly (.91 to .99), suggesting the additional factor loadings within the ESEM model add little in defining latent factor content. Lastly, criterion associations of each personality factor in CFA and ESEM models were near identical in both inventories. We provide an example of how ESEM and CFA might be used together in improving personality assessment. © The Author(s) 2014.

  2. Sensitivity of system stability to model structure

    Science.gov (United States)

    Hosack, G.R.; Li, H.W.; Rossignol, P.A.

    2009-01-01

    A community is stable, and resilient, if the levels of all community variables can return to the original steady state following a perturbation. The stability properties of a community depend on its structure, which is the network of direct effects (interactions) among the variables within the community. These direct effects form feedback cycles (loops) that determine community stability. Although feedback cycles have an intuitive interpretation, identifying how they form the feedback properties of a particular community can be intractable. Furthermore, determining the role that any specific direct effect plays in the stability of a system is even more daunting. Such information, however, would identify important direct effects for targeted experimental and management manipulation even in complex communities for which quantitative information is lacking. We therefore provide a method that determines the sensitivity of community stability to model structure, and identifies the relative role of particular direct effects, indirect effects, and feedback cycles in determining stability. Structural sensitivities summarize the degree to which each direct effect contributes to stabilizing feedback or destabilizing feedback or both. Structural sensitivities prove useful in identifying ecologically important feedback cycles within the community structure and for detecting direct effects that have strong, or weak, influences on community stability. The approach may guide the development of management intervention and research design. We demonstrate its value with two theoretical models and two empirical examples of different levels of complexity. © 2009 Elsevier B.V. All rights reserved.
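When the community matrix is quantified, the sensitivity of stability to each direct effect can be computed from the left and right eigenvectors of the leading eigenvalue, dλ/dA[i,j] = uᵢvⱼ/(uᵀv). This is a generic eigenvalue-perturbation sketch, not the qualitative loop-analysis machinery of the paper; the 2-species matrix is invented:

```python
import numpy as np

def stability_sensitivity(A):
    """Sensitivity of the leading eigenvalue to each direct effect A[i, j]."""
    w, V = np.linalg.eig(A)
    k = np.argmax(w.real)                    # leading (least stable) eigenvalue
    lam, v = w[k], V[:, k]
    wl, U = np.linalg.eig(A.T)               # eigenvectors of A^T = left eigenvectors
    u = U[:, np.argmin(np.abs(wl - lam))]
    S = np.outer(u, v) / (u @ v)             # dlambda/dA[i, j] = u_i v_j / (u . v)
    return lam.real, S.real

A = np.array([[-2.0, 1.0],                   # invented community (Jacobian) matrix
              [ 0.5, -1.0]])
lam, S = stability_sensitivity(A)            # lam < 0: community is locally stable

# finite-difference check of the sensitivity to the effect of species 2 on species 1
eps = 1e-6
Ap = A.copy()
Ap[0, 1] += eps
fd = (np.linalg.eigvals(Ap).real.max() - lam) / eps
```

Entries of `S` with large magnitude mark the direct effects whose change most moves the community toward or away from instability, which is the kind of targeting the abstract describes.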

  3. Modelling of Radiolytic Processes in Polystyrene Structures

    International Nuclear Information System (INIS)

    Postolache, C.

    2006-01-01

    The behavior of polystyrene, poly(α-methylstyrene), and poly(β-methylstyrene) structures in ionizing fields was analyzed using computational methods. In this study, the primary radiolytic effect was evaluated using a free-radical mechanism. Molecular structures were built and geometrically optimized using quantum-chemical methods. Binding energies for different quantum states and the distributions of peripheral orbitals were determined. Based on the obtained results, an evaluation model of radiolytic processes in solid-phase polymers was proposed. The suggested model distinguishes the dominant processes by analyzing binding-energy values and the LUMO peripheral orbital distribution. Analysis of the computed binding energies of energetically optimized molecular structures in the ionized state (charge +1, multiplicity 2) reveals a strong similarity among the binding energies obtained for ionized states. The same similarity was also observed for the total binding energies in the neutral state (charge 0, multiplicity 1). The analyzed molecular structures can be associated with the state of an ionized molecule immediately after one-electron capture: the molecule has captured an electron but has not had time to rearrange its atoms into the new quantum state. This suggests that the determining stage of the radiolytic fragmentation act is the intermediate state of the ionized molecule, in accordance with the literature, the time between the excitation act and the fragmentation act being shorter than 10⁻¹⁵ seconds. The proposed model can explain the differences in behavior of polymeric structures in an ionizing radiation field: the preferential scission of main chains in the fragmentation of poly(α-methylstyrene) is explained by the decrease of C-C binding energies in the main chain in the neighborhood of the quaternary carbon

  4. Measuring and modelling the structure of chocolate

    Science.gov (United States)

    Le Révérend, Benjamin J. D.; Fryer, Peter J.; Smart, Ian; Bakalis, Serafim

    2015-01-01

    The cocoa butter present in chocolate exists as six different polymorphs. To achieve the desired crystal form (βV), traditional chocolate manufacturers use relatively slow cooling (chocolate products during processing as well as the crystal structure of cocoa butter throughout the process. A set of ordinary differential equations describes the kinetics of fat crystallisation. The parameters were obtained by fitting the model to a set of DSC curves. The heat transfer equations were coupled to the kinetic model and solved using commercially available CFD software. A method using single crystal XRD was developed using a novel subtraction method to quantify the cocoa butter structure in chocolate directly and results were compared to the ones predicted from the model. The model was proven to predict phase change temperature during processing accurately (±1°C). Furthermore, it was possible to correctly predict phase changes and polymorphous transitions. The good agreement between the model and experimental data on the model geometry allows a better design and control of industrial processes.

  5. Tectonic forward modelling of positive inversion structures

    Energy Technology Data Exchange (ETDEWEB)

    Brandes, C. [Leibniz Univ. Hannover (Germany). Inst. fuer Geologie; Schmidt, C. [Landesamt fuer Bergbau, Energie und Geologie (LBEG), Hannover (Germany)

    2013-08-01

    Positive tectonic inversion structures are common features that have been recognized in many deformed sedimentary basins (Lowell, 1995). They are characterized by a two-phase fault evolution, in which initial normal faulting was followed by reverse faulting along the same fault, accompanied by the development of hanging wall deformation. Analysing the evolution of such inversion structures is important for understanding the tectonics of sedimentary basins and the formation of hydrocarbon traps. We used a 2D tectonic forward modelling approach to simulate the stepwise structural evolution of inversion structures in cross-section. The modelling was performed with the software FaultFold Forward v. 6, which is based on trishear kinematics (Zehnder and Allmendinger, 2000). A key aspect of the study was to derive the controlling factors for the geometry of inversion structures. The simulation results show that the trishear approach is able to reproduce the geometry of tectonic inversion structures in a realistic way. This implies that inversion structures are simply fault-related folds that initiated as extensional fault-propagation folds and were subsequently transformed into compressional fault-propagation folds when the stress field changed. The hanging wall deformation is a consequence of the decrease in slip towards the tip line of the fault. Trishear angle and propagation-to-slip ratio are the key controlling factors for the geometry of the fault-related deformation. We tested trishear angles in the range of 30° to 60° and propagation-to-slip ratios between 1 and 2, in increments of 0.1. Small trishear angles and low propagation-to-slip ratios produced tight folds, whereas large trishear angles and high propagation-to-slip ratios led to more open folds with concentric shapes. This has a direct effect on the size and geometry of potential hydrocarbon traps. The 2D simulations can be extended to a pseudo 3D approach, where a set of parallel cross-sections is used to describe

  6. The Structure of Preschoolers' Emotion Knowledge: Model Equivalence and Validity Using a Structural Equation Modeling Approach

    Science.gov (United States)

    Bassett, Hideko Hamada; Denham, Susanne; Mincic, Melissa; Graling, Kelly

    2012-01-01

    Research Findings: A theory-based 2-factor structure of preschoolers' emotion knowledge (i.e., recognition of emotional expression and understanding of emotion-eliciting situations) was tested using confirmatory factor analysis. Compared to 1- and 3-factor models, the 2-factor model showed a better fit to the data. The model was found to be…

  7. Database structure for plasma modeling programs

    International Nuclear Information System (INIS)

    Dufresne, M.; Silvester, P.P.

    1993-01-01

    Continuum plasma models often use a finite element (FE) formulation. Another approach is simulation models based on particle-in-cell (PIC) formulation. The model equations generally include four nonlinear differential equations specifying the plasma parameters. In simulation a large number of equations must be integrated iteratively to determine the plasma evolution from an initial state. The complexity of the resulting programs is a combination of the physics involved and the numerical method used. The data structure requirements of plasma programs are stated by defining suitable abstract data types. These abstractions are then reduced to data structures and a group of associated algorithms. These are implemented in an object oriented language (C++) as object classes. Base classes encapsulate data management into a group of common functions such as input-output management, instance variable updating and selection of objects by Boolean operations on their instance variables. Operations are thereby isolated from specific element types and uniformity of treatment is guaranteed. Creation of the data structures and associated functions for a particular plasma model is reduced merely to defining the finite element matrices for each equation, or the equations of motion for PIC models. Changes in numerical method or equation alterations are readily accommodated through the mechanism of inheritance, without modification of the data management software. The central data type is an n-relation implemented as a tuple of variable internal structure. Any finite element program may be described in terms of five relational tables: nodes, boundary conditions, sources, material/particle descriptions, and elements. Equivalently, plasma simulation programs may be described using four relational tables: cells, boundary conditions, sources, and particle descriptions

  8. Fast loop modeling for protein structures

    Science.gov (United States)

    Zhang, Jiong; Nguyen, Son; Shang, Yi; Xu, Dong; Kosztin, Ioan

    2015-03-01

    X-ray crystallography is the main method for determining 3D protein structures. In many cases, however, flexible loop regions of proteins cannot be resolved by this approach. This leads to incomplete structures in the protein data bank, preventing further computational study and analysis of these proteins. For instance, all-atom molecular dynamics (MD) simulation studies of structure-function relationship require complete protein structures. To address this shortcoming, we have developed and implemented an efficient computational method for building missing protein loops. The method is database driven and uses deep learning and multi-dimensional scaling algorithms. We have implemented the method as a simple stand-alone program, which can also be used as a plugin in existing molecular modeling software, e.g., VMD. The quality and stability of the generated structures are assessed and tested via energy scoring functions and by equilibrium MD simulations. The proposed method can also be used in template-based protein structure prediction. Work supported by the National Institutes of Health [R01 GM100701]. Computer time was provided by the University of Missouri Bioinformatics Consortium.
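One ingredient this record names — multi-dimensional scaling — converts a matrix of pairwise distances (e.g. between loop atoms) into 3D coordinates by double centering and an eigendecomposition. This is the generic classical-MDS recipe, not the authors' implementation; the four test points are invented:

```python
import numpy as np

def classical_mds(D, dim=3):
    """Embed points in `dim` dimensions from a pairwise-distance matrix D."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# invented 3D "atom" positions; classical MDS should recover their geometry
pts = np.array([[0.0, 0.0, 0.0],
                [1.5, 0.0, 0.0],
                [1.5, 1.5, 0.0],
                [0.0, 0.0, 2.0]])
D = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
X = classical_mds(D)                         # coordinates up to rotation/reflection
D2 = np.linalg.norm(X[:, None] - X[None], axis=-1)
```

For loop building, the distance matrix would come from database fragments and learned restraints rather than from known coordinates, and the embedded loop would then be refined with an energy function.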

  9. Detecting Structural Breaks using Hidden Markov Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

    Testing for structural breaks and identifying their location is essential for econometric modeling. In this paper, a Hidden Markov Model (HMM) approach is used in order to perform these tasks. Breaks are defined as the data points where the underlying Markov Chain switches from one state to another....... The estimation of the HMM is conducted using a variant of the Iterative Conditional Expectation-Generalized Mixture (ICE-GEMI) algorithm proposed by Delignon et al. (1997), that permits analysis of the conditional distributions of economic data and allows for different functional forms across regimes...
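The break-detection idea — a latent Markov chain whose state switches mark the breakpoints — can be sketched with plain Viterbi decoding of a two-state Gaussian HMM. This is not the ICE-GEMI estimator of the paper: means and transition probabilities are fixed by hand rather than estimated, and the series is invented:

```python
import math

def viterbi_breaks(y, means, sigma=1.0, p_stay=0.98):
    """Most likely state path of a Gaussian HMM; a break is any state switch."""
    k = len(means)
    def emit(s, x):                          # Gaussian log-density up to a constant
        return -0.5 * ((x - means[s]) / sigma) ** 2
    trans = [[math.log(p_stay) if i == j else math.log((1 - p_stay) / (k - 1))
              for j in range(k)] for i in range(k)]
    score = [emit(s, y[0]) for s in range(k)]
    back = []
    for x in y[1:]:
        prev = score
        back.append([max(range(k), key=lambda i: prev[i] + trans[i][j])
                     for j in range(k)])
        score = [emit(j, x) + max(prev[i] + trans[i][j] for i in range(k))
                 for j in range(k)]
    path = [max(range(k), key=lambda j: score[j])]
    for bp in reversed(back):                # backtrack the best path
        path.append(bp[path[-1]])
    path.reverse()
    return [t for t in range(1, len(path)) if path[t] != path[t - 1]]

# invented series with a mean shift at index 5
y = [0.1, -0.2, 0.0, 0.2, -0.1, 3.1, 2.9, 3.2, 3.0, 2.8]
breaks = viterbi_breaks(y, means=[0.0, 3.0])
```

In the paper the emission distributions differ per regime and are estimated iteratively; the decoding of break locations from the fitted state path works as above.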

  10. Structural modelling of economic growth: Technological changes

    Directory of Open Access Journals (Sweden)

    Sukharev Oleg

    2016-01-01

    Full Text Available Neoclassical and Keynesian theories of economic growth assume the use of modified Cobb-Douglas functions and other aggregate econometric approaches to modelling growth dynamics. In that case, explanations of economic growth rest on the logic of the mathematical ratios used, often including a priori ideas about changes in aggregated values and factors. The assessment of factor productivity is the fundamental idea of modern theories of economic growth. Nevertheless, structural parameters of the economic system, institutions, and technological changes are practically not considered within the known approaches, even though the latter are reflected in the changing parameters of the production function. At the same time, the ratio of structural elements determines the future value of total factor productivity and strongly influences the rate of economic growth and its mode of innovative dynamics. Putting structural parameters of the economic system into growth models, with the possibility of assessing such modes under conditions of interaction between new and old combinations, is an essential step in the development of the theory of economic growth and development. It allows a policy for stimulating economic growth to be formed from the structural ratios and relations recognized for the economic system in question. In such models it is most convenient to use logistic functions describing the resource change for the old and new combinations within the economic system. The result of the economy's development depends on the starting conditions and on the institutional parameters governing the velocity of resource borrowing in favour of the new combination and the creation of its own resource. The resource is represented in the model through investments into the new and old combinations.
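The logistic-resource idea in this abstract can be sketched as two coupled logistic equations in which the new combination borrows resource from the old one. All rates, the borrowing term, and the initial values below are illustrative assumptions, not parameters from the paper:

```python
def simulate(steps=200, dt=0.1, r_old=0.5, r_new=0.6, K=1.0, borrow=0.05):
    """Euler integration of old/new technological combinations sharing resource."""
    old, new = 0.9, 0.01                     # old combination starts near capacity
    for _ in range(steps):
        flow = borrow * old * new            # resource borrowed from old to new
        old += dt * (r_old * old * (1 - old / K) - flow)
        new += dt * (r_new * new * (1 - new / K) + flow)
    return old, new

old_f, new_f = simulate()                    # the new combination overtakes the old
```

Varying `borrow` (the institutional parameter governing how fast resource shifts to the new combination) changes which combination dominates and how quickly, which is the kind of regime comparison the abstract has in mind.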

  11. Global plastic models for computerized structural analysis

    International Nuclear Information System (INIS)

    Roche, R.L.; Hoffmann, A.

    1977-01-01

    In many types of structures it is possible to use generalized stresses (such as membrane forces, bending moments, torsion moments...) to define a yield surface for a part of the structure. The analysis can then be carried out using Hill's principle and a hardening rule; the whole formulation is called a 'global plastic model'. Two different global models are used in the CEASEMT system for structural analysis, one for shell analysis and the other for piping analysis (in the plastic or creep field). In shell analysis the generalized stresses chosen are the membrane forces and the bending (including torsion) moments. There is only one yield condition per normal to the middle surface, and no integration along the thickness is required. In piping analysis the generalized stresses chosen are the bending moments, the torsional moment, the hoop stress, and the tension stress. There is only one set of stresses per cross-section, and no integration over the cross-section area is needed. The associated strains are the axis curvature, the torsion, and the uniform strains. The definition of the yield surface is the most important item. A practical way is to use a diagonal quadratic function of the stress components, but the coefficients depend on the shape of the pipe element, especially for curved segments. Indications are given on the yield functions used, and some examples of applications in structural analysis are added to the text

  12. Sparse Linear Identifiable Multivariate Modeling

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2011-01-01

    In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully...... and bench-marked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable......

  13. Linking advanced fracture models to structural analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chiesa, Matteo

    2001-07-01

    Shell structures with defects occur in many situations. The defects are usually introduced during the welding process necessary for joining different parts of the structure. Higher utilization of structural materials leads to a need for accurate numerical tools for reliable prediction of structural response. The direct discretization of a cracked shell structure with solid finite elements, in order to perform an integrity assessment of the structure in question, leads to very large problems and makes such analysis infeasible in structural applications. In this study a link between local material models and structural analysis is outlined. An 'ad hoc' element formulation is used in order to connect complex material models to the finite element framework used for structural analysis. An improved elasto-plastic line spring finite element formulation, used in order to take cracks into account, is linked to shell elements, which are further linked to beam elements. In this way one obtains a global model of the shell structure that also accounts for local flexibilities and fractures due to defects. An important advantage of such an approach is a direct fracture mechanics assessment, e.g. via the computed J-integral or CTOD. A recent development in this approach is the notion of two-parameter fracture assessment: the crack tip stress tri-axiality (constraint) is employed in determining the corresponding fracture toughness, giving a much more realistic estimate of the capacity of cracked structures. The present thesis is organized in six research articles and an introductory chapter that reviews important background literature related to this work. Papers I and II address the performance of shell and line spring finite elements as a cost-effective tool for performing the numerical calculations needed for a fracture assessment. In Paper II a failure assessment, based on the testing of a constraint-corrected fracture mechanics specimen under tension, is

  14. Modeling repetitive motions using structured light.

    Science.gov (United States)

    Xu, Yi; Aliaga, Daniel G

    2010-01-01

    Obtaining models of dynamic 3D objects is an important part of content generation for computer graphics. Numerous methods have been extended from static scenarios to model dynamic scenes. If the states or poses of a dynamic object repeat often during a sequence (though not necessarily periodically), we call this a repetitive motion. Many objects, such as toys, machines, and humans, undergo repetitive motions. Our key observation is that when a motion state repeats, we can sample the scene under the same motion state again but with a different set of parameters, thus providing more information about each motion state. This enables robust acquisition of dense 3D information, otherwise difficult to obtain for moving objects, using only simple hardware. After the motion sequence, we group temporally disjoint observations of the same motion state together and produce a smooth space-time reconstruction of the scene. Effectively, the dynamic scene modeling problem is converted into a series of static scene reconstructions, which are easier to tackle. The varying sampling parameters can be, for example, structured-light patterns, illumination directions, or viewpoints, resulting in different modeling techniques. Based on this observation, we present an image-based motion-state framework and demonstrate our paradigm using either a synchronized or an unsynchronized structured-light acquisition method.

  15. Expansion of IFC model with structural sensors

    Directory of Open Access Journals (Sweden)

    Rio, J.

    2013-06-01

    Full Text Available The instrumentation and structural health monitoring (SHM) of buildings is a growing field in the construction industry. The goal of this research work is to explore ways of modeling SHM systems, and the resulting data collected from buildings, in standard information management systems such as Building Information Models (BIM). These models need to be stored in digital databases with structures suitable for the specific building-related information. In this work the Industry Foundation Classes (IFC) data model was used. A case study is presented to assess the applicability of the present IFC standard as a tool to build a three-dimensional digital model of a real instrumented building, as well as some of the structural sensors and their results. The interoperability of the digital model was verified by using different modeling, viewing and analysis software tools. Limitations of the current IFC model were explored and extensions to the sensor classes are proposed.

  16. Meta-analytic structural equation modelling

    CERN Document Server

    Jak, Suzanne

    2015-01-01

    This book explains how to employ MASEM, the combination of meta-analysis (MA) and structural equation modelling (SEM). It shows how, by using MASEM, a single model can be tested to explain the relationships between a set of variables in several studies. The book gives an introduction to MASEM, with a focus on the state-of-the-art approach: the two-stage approach of Cheung and of Cheung & Chan. Both the fixed and the random approach to MASEM are illustrated with two applications to real data. All steps that have to be taken to perform the analyses are discussed extensively. All data and syntax files are available online, so that readers can replicate all analyses. By using SEM for meta-analysis, this book shows how to benefit from all available information from all available studies, even if few or none of the studies report on all relationships that feature in the full model of interest.

  17. Structural model for fluctuations in financial markets

    Science.gov (United States)

    Anand, Kartik; Khedair, Jonathan; Kühn, Reimer

    2018-05-01

    In this paper we provide a comprehensive analysis of a structural model for the dynamics of prices of assets traded in a market which takes the form of an interacting generalization of the geometric Brownian motion model. It is formally equivalent to a model describing the stochastic dynamics of a system of analog neurons, which is expected to exhibit glassy properties and thus many metastable states in a large portion of its parameter space. We perform a generating functional analysis, introducing a slow driving of the dynamics to mimic the effect of slowly varying macroeconomic conditions. Distributions of asset returns over various time separations are evaluated analytically and are found to be fat-tailed in a manner broadly in line with empirical observations. Our model also allows us to identify collective, interaction-mediated properties of pricing distributions and it predicts pricing distributions which are significantly broader than their noninteracting counterparts, if interactions between prices in the model contain a ferromagnetic bias. Using simulations, we are able to substantiate one of the main hypotheses underlying the original modeling, viz., that the phenomenon of volatility clustering can be rationalized in terms of an interplay between the dynamics within metastable states and the dynamics of occasional transitions between them.
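The noninteracting baseline that the above model generalizes, ordinary geometric Brownian motion, can be sketched as follows; the drift, volatility and step-size values are illustrative assumptions, not parameters from the paper:

```python
import math
import random

def simulate_gbm(s0, mu, sigma, dt, n_steps, seed=1):
    """Simulate one path of geometric Brownian motion:
    S_{t+dt} = S_t * exp((mu - sigma^2/2) * dt + sigma * sqrt(dt) * Z),
    with Z ~ N(0, 1)."""
    rng = random.Random(seed)
    path = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma ** 2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

# One year of daily prices with 5% drift and 20% volatility (toy values)
path = simulate_gbm(100.0, 0.05, 0.2, 1.0 / 252, 252)
```

In the interacting generalization studied in the paper, the log-price increments are additionally coupled across assets, which is what produces the fat-tailed return distributions and volatility clustering described above.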

  18. Job durations and the job search model : a two-country, multi-sample analysis

    OpenAIRE

    Bagger, Jesper; Henningsen, Morten

    2008-01-01

    Abstract: This paper assesses whether a parsimonious partial equilibrium job search model with on-the-job search can reproduce observed job durations and transitions to other jobs and to nonemployment. We allow for unobserved heterogeneity across individuals in key structural parameters. Observed heterogeneity and life cycle effects are accounted for by estimating separate models for flow samples of labor market entrants and stock samples of “mature” workers with 10-11 years of...

  19. Structural equation modeling and natural systems

    Science.gov (United States)

    Grace, James B.

    2006-01-01

    This book, first published in 2006, presents an introduction to the methodology of structural equation modeling, illustrates its use, and goes on to argue that it has revolutionary implications for the study of natural systems. A major theme of this book is that we have, up to this point, attempted to study systems primarily using methods (such as the univariate model) that were designed only for considering individual processes. Understanding systems requires the capacity to examine simultaneous influences and responses. Structural equation modeling (SEM) has such capabilities. It also possesses many other traits that add strength to its utility as a means of making scientific progress. In light of the capabilities of SEM, it can be argued that much of ecological theory is currently locked in an immature state that impairs its relevance. It is further argued that the principles of SEM are capable of leading to the development and evaluation of multivariate theories of the sort vitally needed for the conservation of natural systems.

  20. Comparison of perceived value structural models

    Directory of Open Access Journals (Sweden)

    Sunčana Piri Rajh

    2012-07-01

    Full Text Available Perceived value has been considered an important determinant of consumer shopping behavior and studied as such for a long period of time. According to one research stream, perceived value is a variable determined by perceived quality and perceived sacrifice. Another research stream suggests that the perception of value is a result of the consumer risk perception. This implies the presence of two somewhat independent research streams that are integrated by a third research stream – the one suggesting that perceived value is a result of perceived quality and perceived sacrifices while perceived (performance and financial risk mediates the relationship between perceived quality and perceived sacrifices on the one hand, and perceived value on the other. This paper describes the three approaches (models that have been mentioned. The aim of the paper is to determine which of the observed models show the most acceptable level of fit to the empirical data. Using the survey method, research involving three product categories has been conducted on a sample of Croatian consumers. Collected data was analyzed by the structural equation modeling (SEM method. Research has shown an appropriate level of fit of each observed model to the empirical data. However, the model measuring the effect of perceived risk on perceived value indicates the best level of fit, which implies that perceived performance risk and perceived financial risk are the best predictors of perceived value.

  1. Structural Model of psychological risk and protective factors affecting on quality of life in patients with coronary heart disease: A psychocardiology model

    Directory of Open Access Journals (Sweden)

    Zohreh Khayyam Nekouei

    2014-01-01

    Full Text Available Background: Existing research shows that psychological factors may have a very important role in the etiology, continuity and consequences of coronary heart disease. This study casts the psychological risk and protective factors, and their effects in patients with coronary heart disease (CHD), in a structural model. It aims to determine the structural relations between psychological risk and protective factors and quality of life in patients with coronary heart disease. Materials and Methods: The present cross-sectional, correlational study was conducted using structural equation modeling. The study sample included 398 coronary heart disease patients of the university referral hospital, as well as other city health care centers, in Isfahan city. They were selected based on a random sampling method and then completed the following questionnaires: coping with stressful situations (CISS-21), life orientation (LOT-10), general self-efficacy (GSE-10), depression, anxiety and stress (DASS-21), perceived stress (PSS-14), multidimensional social support (MSPSS-12), alexithymia (TAS-20), spiritual intelligence (SQ-23) and quality of life (WHOQOL-26). Results: The results showed that protective and risk factors could affect the quality of life in patients with CHD with factor loadings of 0.35 and −0.60, respectively. Moreover, based on the global fit indices, namely the relative chi-square (CMIN/DF = 3.25), the Comparative Fit Index (CFI = 0.93), the Parsimony Comparative Fit Index (PCFI = 0.68) and the Root Mean Square Error of Approximation (RMSEA = 0.07), together with the details of the model (significance of the relationships), the psychocardiological structural model of the study was confirmed as a well-fitting model. Conclusion: This study was among the first to research the different psychological risk and protective factors of coronary heart disease in the form of a structural model. The results of this study have…
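For reference, the two χ²-based fit indices quoted above have standard textbook definitions (these are the conventional formulas, not quantities derived from this study's data):

```latex
\mathrm{CMIN/DF} = \frac{\chi^2}{df},
\qquad
\mathrm{RMSEA} = \sqrt{\frac{\max\left(\chi^2 - df,\, 0\right)}{df\,(N-1)}}
```

Values of CMIN/DF below about 5 and RMSEA at or below about 0.08 are conventionally read as acceptable fit, which is consistent with the values reported above.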

  2. Structural Model Error and Decision Relevancy

    Science.gov (United States)

    Goldsby, M.; Lusk, G.

    2017-12-01

    The extent to which climate models can underwrite specific climate policies has long been a contentious issue. Skeptics frequently deny that climate models are trustworthy in an attempt to undermine climate action, whereas policy makers often desire information that exceeds the capabilities of extant models. While not skeptics, a group of mathematicians and philosophers [Frigg et al. (2014)] recently argued that even tiny differences between the structure of a complex dynamical model and its target system can lead to dramatic predictive errors, possibly resulting in disastrous consequences when policy decisions are based upon those predictions. They call this result the Hawkmoth effect (HME), and seemingly use it to rebuke right-wing proposals to forgo mitigation in favor of adaptation. However, a vigorous debate has emerged between Frigg et al. on one side and another philosopher-mathematician pair [Winsberg and Goodwin (2016)] on the other. On one hand, Frigg et al. argue that their result shifts the burden to climate scientists to demonstrate that their models do not fall prey to the HME. On the other hand, Winsberg and Goodwin suggest that arguments like those asserted by Frigg et al. can be, if taken seriously, "dangerous": they fail to consider the variety of purposes for which models can be used, and thus too hastily undermine large swaths of climate science. They put the burden back on Frigg et al. to show their result has any effect on climate science. This paper seeks to attenuate this debate by establishing an irenic middle position; we find that there is more agreement between sides than it first seems. We distinguish a "decision standard" from a "burden of proof", which helps clarify the contributions to the debate from both sides. In making this distinction, we argue that scientists bear the burden of assessing the consequences of HME, but that the standard Frigg et al. adopt for decision relevancy is too strict.

  3. Ice films follow structure zone model morphologies

    International Nuclear Information System (INIS)

    Cartwright, Julyan H.E.; Escribano, Bruno; Sainz-Diaz, C. Ignacio

    2010-01-01

    Ice films deposited at temperatures of 6-220 K and at low pressures in situ in a cryo-environmental scanning electron microscope show pronounced morphologies at the mesoscale consistent with the structure zone model of film growth. Water vapour was injected directly inside the chamber at ambient pressures ranging from 10^-4 Pa to 10^2 Pa. Several different substrates were used to exclude the influence of their morphology on the grown films. At the lowest temperatures the ice, which under these conditions is amorphous on the molecular scale, shows the mesoscale morphologies typical of the low-temperature zones of the structure zone model (SZM), including cauliflower, transition, spongelike and matchstick morphologies. Our experiments confirm that the SZM is independent of the chemical nature of the adsorbate, although the intermolecular interactions in water (hydrogen bonds) are different to those in ceramics or metals. At higher temperatures, on the other hand, where the ice is hexagonal crystalline on the molecular scale, it displays a complex palmlike morphology on the mesoscale.

  4. Ice films follow structure zone model morphologies

    Energy Technology Data Exchange (ETDEWEB)

    Cartwright, Julyan H.E. [Instituto Andaluz de Ciencias de la Tierra, CSIC-Universidad de Granada, E-18071 Granada (Spain); Escribano, Bruno, E-mail: bruno.escribano.salazar@gmail.co [Instituto Andaluz de Ciencias de la Tierra, CSIC-Universidad de Granada, E-18071 Granada (Spain); Sainz-Diaz, C. Ignacio [Instituto Andaluz de Ciencias de la Tierra, CSIC-Universidad de Granada, E-18071 Granada (Spain)

    2010-04-02

    Ice films deposited at temperatures of 6-220 K and at low pressures in situ in a cryo-environmental scanning electron microscope show pronounced morphologies at the mesoscale consistent with the structure zone model of film growth. Water vapour was injected directly inside the chamber at ambient pressures ranging from 10^-4 Pa to 10^2 Pa. Several different substrates were used to exclude the influence of their morphology on the grown films. At the lowest temperatures the ice, which under these conditions is amorphous on the molecular scale, shows the mesoscale morphologies typical of the low-temperature zones of the structure zone model (SZM), including cauliflower, transition, spongelike and matchstick morphologies. Our experiments confirm that the SZM is independent of the chemical nature of the adsorbate, although the intermolecular interactions in water (hydrogen bonds) are different to those in ceramics or metals. At higher temperatures, on the other hand, where the ice is hexagonal crystalline on the molecular scale, it displays a complex palmlike morphology on the mesoscale.

  5. Modeling Insurgent Network Structure and Dynamics

    Science.gov (United States)

    Gabbay, Michael; Thirkill-Mackelprang, Ashley

    2010-03-01

    We present a methodology for mapping insurgent network structure based on their public rhetoric. Indicators of cooperative links between insurgent groups at both the leadership and rank-and-file levels are used, such as joint policy statements or joint operations claims. In addition, a targeting policy measure is constructed on the basis of insurgent targeting claims. Network diagrams which integrate these measures of insurgent cooperation and ideology are generated for different periods of the Iraqi and Afghan insurgencies. The network diagrams exhibit meaningful changes which track the evolution of the strategic environment faced by insurgent groups. Correlations between targeting policy and network structure indicate that insurgent targeting claims are aimed at establishing a group identity among the spectrum of rank-and-file insurgency supporters. A dynamical systems model of insurgent alliance formation and factionalism is presented which evolves the relationship between insurgent group dyads as a function of their ideological differences and their current relationships. The ability of the model to qualitatively and quantitatively capture insurgent network dynamics observed in the data is discussed.

  6. The Model of Complex Structure of Quark

    Science.gov (United States)

    Liu, Rongwu

    2017-09-01

    In Quantum Chromodynamics, quark is known as a kind of point-like fundamental particle which carries mass, charge, color, and flavor; strong interaction takes place between quarks by means of exchanging intermediate particles, gluons. An important consequence of this theory is that strong interaction is a kind of short-range force, and it has the features of "asymptotic freedom" and "quark confinement". In order to reveal the nature of strong interaction, the "bag" model of vacuum and the "string" model of string theory were proposed in the context of quantum mechanics, but neither of them can provide a clear interaction mechanism. This article formulates a new mechanism by proposing a model of complex structure of quark, which can be outlined as follows: (1) Quark (as well as electron, etc) is a kind of complex structure; it is composed of fundamental particle (fundamental matter mass and electricity) and fundamental volume field (fundamental matter flavor and color) which exists in the form of limited volume; fundamental particle lies in the center of fundamental volume field and forms the "nucleus" of quark. (2) As static electric force, the color field force between quarks has classical form; it is proportional to the square of the color quantity carried by each color field, and inversely proportional to the area of cross section of overlapping color fields which is along the force direction; it has the properties of overlap, saturation, non-central character, and constancy. (3) Any volume field undergoes deformation when interacting with another volume field; the deformation force follows Hooke's law. (4) The phenomena of "asymptotic freedom" and "quark confinement" are the result of color field force and deformation force.

  7. Aerospace structural design process improvement using systematic evolutionary structural modeling

    Science.gov (United States)

    Taylor, Robert Michael

    2000-10-01

    A multidisciplinary team tasked with an aircraft design problem must understand the problem requirements and metrics to produce a successful design. This understanding entails not only knowledge of what these requirements and metrics are, but also how they interact, which are most important (to the customer as well as to aircraft performance), and who in the organization can provide pertinent knowledge for each. In recent years, product development researchers and organizations have developed and successfully applied a variety of tools such as Quality Function Deployment (QFD) to coordinate multidisciplinary team members. The effectiveness of these methods, however, depends on the quality and fidelity of the information that team members can input. In conceptual aircraft design, structural information is of lower quality compared to aerodynamics or performance because it is based on experience rather than theory. This dissertation shows how advanced structural design tools can be used in a multidisciplinary team setting to improve structural information generation and communication through a systematic evolution of structural detail. When applied to conceptual design, finite element-based structural design tools elevate structural information to the same level as other computationally supported disciplines. This improved ability to generate and communicate structural information enables a design team to better identify and meet structural design requirements, consider producibility issues earlier, and evaluate structural concepts. A design process experiment of a wing structural layout in collaboration with an industrial partner illustrates and validates the approach.

  8. Modelling the Covariance Structure in Marginal Multivariate Count Models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Olivero, J.; Grande-Vega, M.

    2017-01-01

    The main goal of this article is to present a flexible statistical modelling framework to deal with multivariate count data along with longitudinal and repeated measures structures. The covariance structure for each response variable is defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. In order to specify the joint covariance matrix for the multivariate response vector, the generalized Kronecker product is employed. We take into account the count nature of the data by means of the power dispersion function associated with the Poisson… …be used to indicate whether there was statistical evidence of a decline in blue duikers and other species hunted during the study period. Determining whether observed drops in the number of animals hunted are indeed true is crucial to assess whether species depletion effects are taking place in exploited…
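The Kronecker-product construction of a joint covariance can be illustrated with NumPy's `np.kron`; the two small matrices below are invented toy inputs, not the authors' fitted covariances:

```python
import numpy as np

# Toy covariance between two count responses (e.g., two hunted species)
sigma_resp = np.array([[1.0, 0.3],
                       [0.3, 2.0]])

# Toy AR(1)-style covariance across three repeated measures
rho = 0.5
sigma_time = np.array([[1.0, rho, rho ** 2],
                       [rho, 1.0, rho],
                       [rho ** 2, rho, 1.0]])

# Joint covariance of the stacked 6-vector (2 responses x 3 occasions):
# each between-response covariance is scaled by the temporal structure.
joint = np.kron(sigma_resp, sigma_time)
print(joint.shape)  # → (6, 6)
```

Because the Kronecker product of two positive-definite matrices is positive definite, the joint matrix remains a valid covariance while needing only the parameters of its two small factors.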

  9. Structural equation models from paths to networks

    CERN Document Server

    Westland, J Christopher

    2015-01-01

    This compact reference surveys the full range of available structural equation modeling (SEM) methodologies.  It reviews applications in a broad range of disciplines, particularly in the social sciences where many key concepts are not directly observable.  This is the first book to present SEM’s development in its proper historical context–essential to understanding the application, strengths and weaknesses of each particular method.  This book also surveys the emerging path and network approaches that complement and enhance SEM, and that will grow in importance in the near future.  SEM’s ability to accommodate unobservable theory constructs through latent variables is of significant importance to social scientists.  Latent variable theory and application are comprehensively explained, and methods are presented for extending their power, including guidelines for data preparation, sample size calculation, and the special treatment of Likert scale data.  Tables of software, methodologies and fit st...

  10. Modelling of Dampers and Damping in Structures

    DEFF Research Database (Denmark)

    Høgsberg, Jan Riess

    2006-01-01

    …, and thereby the damping, of flexible structures are generally described in terms of the dominant vibration modes. A system reduction technique, where the damped vibration mode is constructed as a linear combination of the undamped mode shape and the mode shape obtained by locking the damper, is applied. This two-component representation leads to a simple solution for the modal damping representing the natural frequency and the associated damping ratio. It appears from numerical examples that this system reduction technique provides very accurate results. Analytical expressions for the optimal tuning and the maximum attainable damping are found by maximizing the expression for the damping ratio. The theory is formulated for linear damper models, but may also be applied for non-linear dampers in terms of equivalent linear parameters for stiffness and damping, respectively. The format of the expressions…

  11. Structured analysis and modeling of complex systems

    Science.gov (United States)

    Strome, David R.; Dalrymple, Mathieu A.

    1992-01-01

    The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.

  12. Virtuous organization: A structural equation modeling approach

    Directory of Open Access Journals (Sweden)

    Majid Zamahani

    2013-02-01

    Full Text Available For years, the idea of virtue was unfavorable among researchers and virtues were traditionally considered as culture-specific, relativistic and they were supposed to be associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Virtue and virtuousness have recently been considered seriously among organizational researchers. The proposed study of this paper examines the relationships between leadership, organizational culture, human resource, structure and processes, care for community and virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on the other components. The data used in this study consist of questionnaire responses from employees at Payame Noor University in Yazd province. A total of 250 questionnaires were sent out and a total of 211 valid responses were received. Our results have revealed that all five variables have positive and significant impacts on virtuous organization. Among the five variables, organizational culture has the most direct impact (0.80) and human resource has the most total impact (0.844) on virtuous organization.

  13. Global plastic models for computerized structural analysis

    International Nuclear Information System (INIS)

    Roche, R.; Hoffmann, A.

    1977-01-01

    Two different global models are used in the CEASEMT system for structural analysis, one for shell analysis and the other for piping analysis (in the plastic or creep field). In shell analysis the generalized stresses chosen are the membrane forces Nsub(ij) and the bending (including torsion) moments Msub(ij). There is only one yield condition for a normal (to the middle surface) and no integration along the thickness is required. In piping analysis, the choice of generalized stresses is: bending moments, torsional moments, hoop stress and tension stress. There is only one set of stresses for a cross section and no integration over the cross section area is needed. The connected strains are axis curvature, torsion and uniform strains. The definition of the yield surface is the most important item. A practical way is to use a diagonal quadratic function of the stress components, but the coefficients depend on the shape of the pipe element, especially for curved segments. Indications are given on the yield functions used. Some examples of applications in structural analysis are added to the text.

  14. 3D-DART: a DNA structure modelling server

    NARCIS (Netherlands)

    van Dijk, M.; Bonvin, A.M.J.J.

    2009-01-01

    There is a growing interest in structural studies of DNA by both experimental and computational approaches. Often, 3D-structural models of DNA are required, for instance, to serve as templates for homology modeling, as starting structures for macro-molecular docking or as scaffold for NMR structure

  15. Stability patterns for a size-structured population model and its stage-structured counterpart

    DEFF Research Database (Denmark)

    Zhang, Lai; Pedersen, Michael; Lin, Zhigui

    2015-01-01

    In this paper we compare a general size-structured population model, where a size-structured consumer feeds upon an unstructured resource, to its simplified stage-structured counterpart in terms of equilibrium stability. Stability of the size-structured model is understood in terms of an equivale… …to the population level.

  16. Patterns and effects of GC3 heterogeneity and parsimony informative sites on the phylogenetic tree of genes.

    Science.gov (United States)

    Ma, Shuai; Wu, Qi; Hu, Yibo; Wei, Fuwen

    2018-05-20

    The explosive growth in genomic data has provided novel insights into the conflicting signals hidden in phylogenetic trees. Although some studies have explored the effects of the GC content and parsimony informative sites (PIS) on the phylogenetic tree, the effect of the heterogeneity of the GC content at the first/second/third codon position at parsimony informative sites (GC1/2/3-PIS) among different species, and the effect of PIS on phylogenetic tree construction, remain largely unexplored. Here, we used two different mammal genomic datasets to explore the patterns of GC1/2/3-PIS heterogeneity and the effect of PIS on the phylogenetic tree of genes: (i) all GC1/2/3-PIS have obvious heterogeneity between different mammals, and the levels of heterogeneity are GC3-PIS > GC2-PIS > GC1-PIS; (ii) the number of PIS is positively correlated with the metrics of "good" gene tree topologies, and excluding the third codon position (C3) decreases the quality of gene trees by removing too many PIS. These results provide novel insights into the heterogeneity pattern of GC1/2/3-PIS in mammals and the relationship between GC3/PIS and gene trees. Additionally, it is necessary to carefully consider whether to exclude C3 to improve the quality of gene trees, especially in the super-tree method.
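As an illustration only (the study's pipeline for identifying parsimony informative sites is not reproduced here), the GC content at third codon positions of an in-frame coding sequence can be computed as:

```python
def gc3(cds: str) -> float:
    """Fraction of G/C bases at third codon positions of an in-frame CDS."""
    thirds = cds.upper()[2::3]  # every third base, starting at position 3
    if not thirds:
        return 0.0
    return sum(base in "GC" for base in thirds) / len(thirds)

# Third positions of ATG GCC ATT GTA are G, C, T, A -> 2 of 4 are G/C
print(gc3("ATGGCCATTGTA"))  # → 0.5
```

GC1 and GC2 follow by changing the slice offset to `0::3` and `1::3`; restricting the input to parsimony informative columns would yield the GC1/2/3-PIS quantities discussed above.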

  17. Fitting ARMA Time Series by Structural Equation Models.

    Science.gov (United States)

    van Buuren, Stef

    1997-01-01

    This paper outlines how the stationary ARMA(p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types.
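For orientation, the ARMA(1,1) special case of the model, x_t = phi*x_{t-1} + e_t + theta*e_{t-1}, can be simulated directly; this generic sketch illustrates the recursion itself, not the structural equation specification the paper develops:

```python
import random

def simulate_arma11(phi, theta, n, seed=42):
    """Simulate x_t = phi * x_{t-1} + e_t + theta * e_{t-1}, e_t ~ N(0, 1)."""
    rng = random.Random(seed)
    series = []
    x_prev, e_prev = 0.0, 0.0
    for _ in range(n):
        e = rng.gauss(0.0, 1.0)
        x = phi * x_prev + e + theta * e_prev
        series.append(x)
        x_prev, e_prev = x, e
    return series

# Stationary because |phi| < 1
series = simulate_arma11(0.7, 0.3, 500)
```

Casting this recursion as a structural equation model amounts to treating the shocks e_t as latent variables with fixed loadings given by phi and theta, which is what allows standard SEM software to produce maximum likelihood estimates.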

  18. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    Science.gov (United States)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged-prediction effect of stand-alone data-driven models. The multigene ingredient tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP, and the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared to the efficiency results of stand-alone GP, MGGP and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution, which is of noteworthy practical importance. In addition, the approach allows the user to bring human insight into the problem, examine the evolved models and pick the best-performing programs out for further analysis.
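The moving-average pre-processing step can be sketched as a simple trailing filter; the window length below is an illustrative assumption, since the paper's exact filter configuration is not reproduced here:

```python
def moving_average(series, window=3):
    """Trailing moving average: each output is the mean of the last `window`
    observations available so far (shorter at the start of the record)."""
    smoothed = []
    for i in range(len(series)):
        start = max(0, i - window + 1)
        chunk = series[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# Smoothing a short daily-streamflow-like record
print(moving_average([2.0, 4.0, 6.0, 8.0], window=3))  # → [2.0, 3.0, 4.0, 6.0]
```

A trailing (rather than centered) window is the causal choice for prediction, since it uses only past observations at each time step.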

  19. Modelling structural systems for transient response analysis

    International Nuclear Information System (INIS)

    Melosh, R.J.

    1975-01-01

    This paper introduces, and reports the success of, a direct means of determining the time periods in which a structural system behaves as a linear system. Numerical results are based on post-fracture transient analyses of simplified nuclear piping systems. Knowledge of the linear response ranges will lead to improved analysis-test correlation and more efficient analyses. It permits direct use of data from physical tests in analysis, and simplification of the analytical model and the interpretation of its behavior. The paper presents a procedure for deducing linearity from transient responses. Given the forcing functions and responses of discrete points of the system at various times, the process produces evidence of linearity and quantifies an adequate set of equations of motion. Results of using the process with linear and nonlinear analyses of damped piping systems illustrate its success. Results cover the application to data from mathematical system responses. The process is successful with mathematical models. In loading ranges in which all modes are excited, eight-digit accuracy of prediction is obtained from the deduced equations of motion. Small changes (less than 0.01%) in the norm of the transfer matrices are produced by manipulation errors for linear systems, yielding evidence that nonlinearity is easily distinguished. Significant changes (greater than 5%) coincide with relatively large norms of the equilibrium correction vector in nonlinear analyses. The paper shows that deducing linearity and, when admissible, quantifying linear equations of motion from transient response data for piping systems can be achieved with accuracy comparable to that of the response data.

  20. Homogenization models for 2-D grid structures

    Science.gov (United States)

    Banks, H. T.; Cioranescu, D.; Rebnord, D. A.

    1992-01-01

    In the past several years, we have pursued efforts related to the development of accurate models for the dynamics of flexible structures made of composite materials. Rather than viewing periodicity and sparseness as obstacles to be overcome, we exploit them to our advantage. We consider a variational problem on a domain that has large, periodically distributed holes. Using homogenization techniques we show that the solution to this problem is in some topology 'close' to the solution of a similar problem that holds on a much simpler domain. We study the behavior of the solution of the variational problem as the holes increase in number, but decrease in size in such a way that the total amount of material remains constant. The result is an equation that is in general more complex, but with a domain that is simply connected rather than perforated. We study the limit of the solution as the amount of material goes to zero. This second limit will, in most cases, retrieve much of the simplicity that was lost in the first limit without sacrificing the simplicity of the domain. Finally, we show that these results can be applied to the case of a vibrating Love-Kirchhoff plate with Kelvin-Voigt damping. We rely heavily on earlier results of (Du), (CS) for the static, undamped Love-Kirchhoff equation. Our efforts here result in a modification of those results to include both time dependence and Kelvin-Voigt damping.

  1. Visualization of RNA structure models within the Integrative Genomics Viewer.

    Science.gov (United States)

    Busan, Steven; Weeks, Kevin M

    2017-07-01

    Analyses of the interrelationships between RNA structure and function are increasingly important components of genomic studies. The SHAPE-MaP strategy enables accurate RNA structure probing and realistic structure modeling of kilobase-length noncoding RNAs and mRNAs. Existing tools for visualizing RNA structure models are not suitable for efficient analysis of long, structurally heterogeneous RNAs. In addition, structure models are often advantageously interpreted in the context of other experimental data and gene annotation information, for which few tools currently exist. We have developed a module within the widely used and well supported open-source Integrative Genomics Viewer (IGV) that allows visualization of SHAPE and other chemical probing data, including raw reactivities, data-driven structural entropies, and data-constrained base-pair secondary structure models, in context with linear genomic data tracks. We illustrate the usefulness of visualizing RNA structure in the IGV by exploring structure models for a large viral RNA genome, comparing bacterial mRNA structure in cells with its structure under cell- and protein-free conditions, and comparing a noncoding RNA structure modeled using SHAPE data with a base-pairing model inferred through sequence covariation analysis. © 2017 Busan and Weeks; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  2. Empirical Analysis of Farm Credit Risk under the Structure Model

    Science.gov (United States)

    Yan, Yan

    2009-01-01

    The study measures farm credit risk by using farm records collected by Farm Business Farm Management (FBFM) during the period 1995-2004. The study addresses the following questions: (1) whether a farm's financial position is fully described by the structure model, (2) what the determinants of farm capital structure are under the structure model, (3)…

  3. Vibration modeling of structural fuzzy with continuous boundary

    DEFF Research Database (Denmark)

    Friis, Lars; Ohlrich, Mogens

    2008-01-01

    a multitude of different sprung masses each strongly resisting any motion of the main structure (master) at their base antiresonance. The “theory of structural fuzzy” is intended for modeling such high damping. In the present article the theory of fuzzy structures is briefly outlined and a method of modeling...

  4. Carbody structural lightweighting based on implicit parameterized model

    Science.gov (United States)

    Chen, Xin; Ma, Fangwu; Wang, Dengfeng; Xie, Chen

    2014-05-01

    Most recent research on carbody lightweighting has focused on substitute materials and new processing technologies rather than structures. However, new materials and processing techniques inevitably lead to higher costs. Also, material substitution and processing lightweighting have to be realized through the body's structural profiles and locations. In the huge conventional workload of lightweight optimization, model modifications involve heavy manual work, which always leads to a large number of iterative calculations. As a new technique in carbody lightweighting, implicit parameterization is used in this paper to optimize the carbody structure and improve the material utilization rate. Implicit parameterized structural modeling enables automatic modification and rapid multidisciplinary design optimization (MDO) of the carbody structure, which is impossible with the traditional, non-parameterized structural finite element method (FEM). The SFE parameterized structural model is built in accordance with the car structural FE model at the concept development stage, and it is validated against structural performance data. The validated SFE parameterized structural model can then be used to rapidly and automatically generate FE models and evaluate different groups of design variables in the integrated MDO loop. The lightweighting result for the body-in-white (BIW) after the optimization rounds reveals that the implicit parameterized model makes automatic MDO feasible and can significantly improve the computational efficiency of carbody structural lightweighting. This paper proposes an integrated method of implicit parameterized modeling and MDO, which has obvious practical advantages and industrial significance for carbody structural lightweighting design.

  5. Modelling road accidents: An approach using structural time series

    Science.gov (United States)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 to 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals of each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
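    The AIC-based selection step described in the record can be sketched directly: compute AIC = 2k - 2·ln(L) for each candidate specification and keep the smallest. The candidate names and log-likelihoods below are illustrative placeholders, not the values from the Malaysian road-accident study.

```python
# Sketch of AIC-based model selection among structural time series
# specifications. k = number of estimated parameters, loglik = maximized
# log-likelihood; both are illustrative assumptions here.
candidates = {
    "local level":            {"k": 2,  "loglik": -512.3},
    "local level + seasonal": {"k": 13, "loglik": -474.9},
    "local linear trend":     {"k": 3,  "loglik": -509.8},
}

def aic(k, loglik):
    return 2 * k - 2 * loglik

scores = {name: aic(m["k"], m["loglik"]) for name, m in candidates.items()}
best = min(scores, key=scores.get)
print(best, scores[best])
```

    AIC penalizes the extra parameters of the seasonal component, so it is only preferred when the seasonal terms buy a sufficiently large likelihood improvement, as they do in this toy example.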

  6. Model Checking Structured Infinite Markov Chains

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid

    2008-01-01

    In the past, probabilistic model checking has mostly been restricted to finite-state models. This thesis explores the possibilities of model checking with continuous stochastic logic (CSL) on infinite-state Markov chains. We present an in-depth treatment of model checking algorithms for two special

  7. Conservation of concrete structures according to fib Model Code 2010

    NARCIS (Netherlands)

    Matthews, S.; Bigaj-Van Vliet, A.; Ueda, T.

    2013-01-01

    Conservation of concrete structures forms an essential part of the fib Model Code for Concrete Structures 2010 (fib Model Code 2010). In particular, Chapter 9 of fib Model Code 2010 addresses issues concerning conservation strategies and tactics, conservation management, condition surveys, condition

  8. Modeling the Structure and Complexity of Engineering Routine Design Problems

    NARCIS (Netherlands)

    Jauregui Becker, Juan Manuel; Wits, Wessel Willems; van Houten, Frederikus J.A.M.

    2011-01-01

    This paper proposes a model to structure routine design problems, as well as a model of their design complexity. The idea is that a proper model of the structure of such problems enables understanding their complexity, and likewise, a proper understanding of their complexity enables the development

  9. Stability and the structure of continuous-time economic models

    NARCIS (Netherlands)

    Nieuwenhuis, H.J.; Schoonbeek, L.

    In this paper we investigate the relationship between the stability of macroeconomic, or macroeconometric, continuous-time models and the structure of the matrices appearing in these models. In particular, we concentrate on dominant-diagonal structures. We derive general stability results for models

  10. Integrated corporate structure life cycle management modeling and organization

    OpenAIRE

    Naumenko, M.; Morozova, L.

    2011-01-01

    An integrated business structure is presented as a complementary pool of its participants' skills. A methodical approach to modeling the life cycle of an integrated business structure is proposed. Recommendations for correlating the life cycle stages of the participating enterprises are submitted.

  11. Quantifying and modeling soil structure dynamics

    Science.gov (United States)

    Characterization of soil structure has been a topic of scientific discussions ever since soil structure has been recognized as an important factor affecting soil physical, mechanical, chemical, and biological processes. Beyond semi-quantitative soil morphology classes, it is a challenge to describe ...

  12. Latent Growth and Dynamic Structural Equation Models.

    Science.gov (United States)

    Grimm, Kevin J; Ram, Nilam

    2018-05-07

    Latent growth models make up a class of methods to study within-person change: how it progresses, how it differs across individuals, what its determinants are, and what its consequences are. Latent growth methods have been applied in many domains to examine average and differential responses to interventions and treatments. In this review, we introduce the growth modeling approach to studying change by presenting different models of change and interpretations of their model parameters. We then apply these methods to examine sex differences in the development of binge drinking behavior through adolescence and into adulthood. Advances in growth modeling methods are then discussed, including inherently nonlinear growth models, derivative specification of growth models, and latent change score models to study stochastic change processes. We conclude with relevant design issues of longitudinal studies and considerations for the analysis of longitudinal data.

  13. A spatial structural derivative model for ultraslow diffusion

    Directory of Open Access Journals (Sweden)

    Xu Wei

    2017-01-01

    This study investigates ultraslow diffusion using a spatial structural derivative, in which the exponential function e^x is selected as the structural function to construct the local structural derivative diffusion equation model. The analytical solution of the diffusion equation has the form of a biexponential distribution. Its corresponding mean squared displacement is numerically calculated, and increases more slowly than the logarithmic function of time. The local structural derivative diffusion equation with the structural function e^x in space is thus an alternative physical and mathematical model for characterizing a kind of ultraslow diffusion.

  14. Correlated binomial models and correlation structures

    International Nuclear Information System (INIS)

    Hisakado, Masato; Kitsukawa, Kenji; Mori, Shintaro

    2006-01-01

    We discuss a general method to construct correlated binomial distributions by imposing several consistent relations on the joint probability function. We obtain self-consistency relations for the conditional correlations and conditional probabilities. The beta-binomial distribution is derived by a strong symmetric assumption on the conditional correlations. Our derivation clarifies the 'correlation' structure of the beta-binomial distribution. It is also possible to study the correlation structures of other probability distributions of exchangeable (homogeneous) correlated Bernoulli random variables. We study some distribution functions and discuss their behaviour in terms of their correlation structures.
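    The beta-binomial distribution mentioned in the record arises by mixing exchangeable Bernoulli variables with a Beta(a, b) prior; the pairwise correlation of the Bernoulli variables is then 1/(a + b + 1). The sketch below builds the pmf from this construction with illustrative parameters.

```python
import math

# Sketch: beta-binomial pmf for the sum of n exchangeable correlated
# Bernoulli variables with Beta(a, b) mixing. Pairwise Bernoulli
# correlation is rho = 1 / (a + b + 1). Parameters are illustrative.
def beta_binomial_pmf(k, n, a, b):
    """P(K = k) for K ~ BetaBinomial(n, a, b) = C(n,k) B(k+a, n-k+b) / B(a, b)."""
    return (math.comb(n, k)
            * math.exp(math.lgamma(k + a) + math.lgamma(n - k + b)
                       - math.lgamma(n + a + b)
                       + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)))

n, a, b = 10, 2.0, 3.0
pmf = [beta_binomial_pmf(k, n, a, b) for k in range(n + 1)]
mean = sum(k * p for k, p in enumerate(pmf))
rho = 1.0 / (a + b + 1.0)
print(sum(pmf), mean, rho)
```

    The mean n·a/(a+b) and the positive pairwise correlation make the distribution noticeably fatter-tailed than the binomial with the same mean, which is exactly the "correlation structure" effect the record discusses.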

  15. Dependent defaults and losses with factor copula models

    Directory of Open Access Journals (Sweden)

    Ackerer Damien

    2017-12-01

    We present a class of flexible and tractable static factor models for the term structure of joint default probabilities, the factor copula models. These high-dimensional models remain parsimonious with pair-copula constructions, and nest many standard models as special cases. The loss distribution of a portfolio of contingent claims can be exactly and efficiently computed when individual losses are discretely supported on a finite grid. Numerical examples study the key features affecting the loss distribution and multi-name credit derivatives prices. An empirical exercise illustrates the flexibility of our approach by fitting credit index tranche prices.
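    The "exact loss distribution on a finite grid" idea in the record can be sketched as follows: when individual losses are independent (e.g. conditionally on the factors) and supported on a common grid, the portfolio loss pmf is the discrete convolution of the individual pmfs. The three toy loss pmfs below are illustrative assumptions, not calibrated to any credit index.

```python
import numpy as np

# Sketch: exact portfolio loss pmf by convolving independent individual
# loss pmfs on a common grid. Entry i of each pmf is P(loss = i grid units).
def portfolio_loss_pmf(pmfs):
    dist = np.array([1.0])           # pmf of zero loss before adding anyone
    for p in pmfs:
        dist = np.convolve(dist, p)  # add one independent loss
    return dist

pmfs = [
    np.array([0.95, 0.00, 0.05]),            # loses 2 units w.p. 5%
    np.array([0.90, 0.10]),                  # loses 1 unit w.p. 10%
    np.array([0.98, 0.00, 0.00, 0.02]),      # loses 3 units w.p. 2%
]
loss = portfolio_loss_pmf(pmfs)
print(loss)
```

    In a factor copula model this convolution is performed conditionally on each factor value and the results are averaged over the factor distribution, which is what keeps the computation exact rather than simulation-based.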

  16. Equivalent models in covariance structure analysis

    NARCIS (Netherlands)

    LUIJBEN, TCW

    1991-01-01

    Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank

  17. Random-Effects Models for Meta-Analytic Structural Equation Modeling: Review, Issues, and Illustrations

    Science.gov (United States)

    Cheung, Mike W.-L.; Cheung, Shu Fai

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…

  18. Combinatorial structures to modeling simple games and applications

    Science.gov (United States)

    Molinero, Xavier

    2017-09-01

    We connect three different topics: combinatorial structures, game theory and chemistry. In particular, we establish the basis for representing some simple games, defined as influence games, and molecules, defined from atoms, by using combinatorial structures. First, we characterize simple games as influence games using influence graphs. This lets us model simple games as combinatorial structures (from the viewpoint of structures or graphs). Second, we formally define molecules as combinations of atoms. This lets us model molecules as combinatorial structures (from the viewpoint of combinations). It remains open to generate such combinatorial structures using specific techniques such as genetic algorithms, (meta-)heuristic algorithms and parallel programming, among others.

  19. Accurate protein structure modeling using sparse NMR data and homologous structure information.

    Science.gov (United States)

    Thompson, James M; Sgourakis, Nikolaos G; Liu, Gaohua; Rossi, Paolo; Tang, Yuefeng; Mills, Jeffrey L; Szyperski, Thomas; Montelione, Gaetano T; Baker, David

    2012-06-19

    While information from homologous structures plays a central role in X-ray structure determination by molecular replacement, such information is rarely used in NMR structure determination because it can be incorrect, both locally and globally, when evolutionary relationships are inferred incorrectly or there has been considerable evolutionary structural divergence. Here we describe a method that allows robust modeling of protein structures of up to 225 residues by combining ¹HN, ¹³C, and ¹⁵N backbone and ¹³Cβ chemical shift data, distance restraints derived from homologous structures, and a physically realistic all-atom energy function. Accurate models are distinguished from inaccurate models generated using incorrect sequence alignments by requiring that (i) the all-atom energies of models generated using the restraints are lower than those of models generated in unrestrained calculations and (ii) the low-energy structures converge to within 2.0 Å backbone rmsd over 75% of the protein. Benchmark calculations on known structures and blind targets show that the method can accurately model protein structures, even with very remote homology information, to a backbone rmsd of 1.2-1.9 Å relative to the conventionally determined NMR ensembles and of 0.9-1.6 Å relative to X-ray structures for well-defined regions of the protein structures. This approach facilitates the accurate modeling of protein structures using backbone chemical shift data without the need for side-chain resonance assignments and extensive analysis of NOESY cross-peak assignments.
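    The convergence criterion in the record relies on backbone rmsd after optimal superposition. A minimal sketch of that computation is the Kabsch algorithm below; the coordinates are illustrative, and a real pipeline would operate on backbone atoms extracted from PDB models.

```python
import numpy as np

# Sketch: rmsd between two N x 3 coordinate sets after optimal rigid-body
# superposition (Kabsch algorithm). Coordinates are illustrative.
def kabsch_rmsd(P, Q):
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))
    D = np.diag([1.0, 1.0, d])       # guard against improper rotations
    R = V @ D @ Wt                   # optimal rotation mapping P onto Q
    diff = P @ R - Q
    return np.sqrt((diff ** 2).sum() / len(P))

P = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 1]])
theta = 0.5
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1.0]])
Q = P @ Rz.T + np.array([5.0, -2.0, 1.0])  # rotated + translated copy of P
print(kabsch_rmsd(P, Q))
```

    Since Q is a rigidly transformed copy of P, the rmsd is zero up to floating-point error; two genuinely different models would give a positive value to compare against the 2.0 Å convergence threshold.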

  20. Predictive modeling of pedestal structure in KSTAR using EPED model

    Energy Technology Data Exchange (ETDEWEB)

    Han, Hyunsun; Kim, J. Y. [National Fusion Research Institute, Daejeon 305-806 (Korea, Republic of); Kwon, Ohjin [Department of Physics, Daegu University, Gyeongbuk 712-714 (Korea, Republic of)

    2013-10-15

    A predictive calculation is given for the structure of edge pedestal in the H-mode plasma of the KSTAR (Korea Superconducting Tokamak Advanced Research) device using the EPED model. Particularly, the dependence of pedestal width and height on various plasma parameters is studied in detail. The two codes, ELITE and HELENA, are utilized for the stability analysis of the peeling-ballooning and kinetic ballooning modes, respectively. Summarizing the main results, the pedestal slope and height have a strong dependence on plasma current, rapidly increasing with it, while the pedestal width is almost independent of it. The plasma density or collisionality gives initially a mild stabilization, increasing the pedestal slope and height, but above some threshold value its effect turns to a destabilization, reducing the pedestal width and height. Among several plasma shape parameters, the triangularity gives the most dominant effect, rapidly increasing the pedestal width and height, while the effect of elongation and squareness appears to be relatively weak. Implication of these edge results, particularly in relation to the global plasma performance, is discussed.

  1. Variable Fidelity Aeroelastic Toolkit - Structural Model, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation is a methodology to incorporate variable fidelity structural models into steady and unsteady aeroelastic and aeroservoelastic analyses in...

  2. Port pricing : principles, structure and models

    OpenAIRE

    Meersman, Hilde; Strandenes, Siri Pettersen; Van de Voorde, Eddy

    2014-01-01

    Price level and price transparency are input to shippers’ choice of supply chain and transport mode. In this paper, we analyse current port pricing structures in the light of the pricing literature and consider opportunities for improvement. We present a detailed overview of pricing criteria, who sets prices and who ultimately foots the bill for port-of-call charges, cargo-handling fees and congestion charges. Current port pricing practice is based on a rather linear structure and fails to in...

  3. Structured inverse modeling in parabolic diffusion processes

    OpenAIRE

    Schulz, Volker; Siebenborn, Martin; Welker, Kathrin

    2014-01-01

    Often, the unknown diffusivity in diffusive processes is structured by piecewise constant patches. This paper is devoted to efficient methods for the determination of such structured diffusion parameters by exploiting shape calculus. A novel shape gradient is derived in parabolic processes. Furthermore quasi-Newton techniques are used in order to accelerate shape gradient based iterations in shape space. Numerical investigations support the theoretical results.

  4. Generalized structured component analysis a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

    Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  5. Structural Adjustment Policy Experiments: The Use of Philippine CGE Models

    OpenAIRE

    Cororaton, Caesar B.

    1994-01-01

    This paper reviews the general structure of the following computable general equilibrium (CGE) models: the APEX model, Habito’s second version of the PhilCGE model, Cororaton’s CGE model and Bautista’s first CGE model. These models are chosen as they represent the range of recently constructed CGE models of the Philippine economy. They also represent two schools of thought in CGE modeling: the well defined neoclassical, Walrasian, general equilibrium school where the market-clearing variable...

  6. Large-scale runoff generation – parsimonious parameterisation using high-resolution topography

    OpenAIRE

    L. Gong; S. Halldin; C.-Y. Xu

    2010-01-01

    World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms...

  7. An elastic-plastic contact model for line contact structures

    Science.gov (United States)

    Zhu, Haibin; Zhao, Yingtao; He, Zhifeng; Zhang, Ruinan; Ma, Shaopeng

    2018-06-01

    Although numerical simulation tools are now very powerful, the development of analytical models remains very important for predicting the mechanical behaviour of line contact structures, both for a deep understanding of contact problems and for engineering applications. For the line contact structures widely used in the engineering field, few analytical models are available for predicting the mechanical behaviour when the structures deform plastically, since classical Hertz theory is then no longer valid. Thus, the present study proposes an elastic-plastic model for line contact structures based on an understanding of the yield mechanism. A mathematical expression describing the global relationship between load history and contact width evolution of line contact structures was obtained. The proposed model was verified through an actual line contact test and a corresponding numerical simulation. The results confirmed that this model can be used to accurately predict the elastic-plastic mechanical behaviour of a line contact structure.
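    For reference, the elastic baseline that the record's elastic-plastic model extends is classical Hertz line contact: half-width b = sqrt(4·w·R / (π·E*)) and peak pressure p0 = 2·w / (π·b), with w the load per unit length and E* the effective modulus. The material and load values below are illustrative (steel cylinder on a steel plane), not from the paper.

```python
import math

# Sketch: classical (purely elastic) Hertz line contact, the baseline that
# breaks down once the structure yields. Inputs are illustrative.
def hertz_line_contact(w, R, E1, nu1, E2, nu2):
    """Return contact half-width b [m] and peak pressure p0 [Pa].

    w  : load per unit length [N/m]
    R  : effective cylinder radius [m]
    """
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    b = math.sqrt(4.0 * w * R / (math.pi * E_star))
    p0 = 2.0 * w / (math.pi * b)
    return b, p0

# 100 kN/m line load, 20 mm radius, steel on steel (E = 210 GPa, nu = 0.3).
b, p0 = hertz_line_contact(w=1.0e5, R=0.02, E1=210e9, nu1=0.3, E2=210e9, nu2=0.3)
print(b, p0)
```

    Once p0 approaches the material's yield strength the elastic solution overestimates the pressure and underestimates the contact width, which is the regime the paper's elastic-plastic model addresses.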

  8. Modeling Delamination of Interfacial Corner Cracks in Multilayered Structures

    DEFF Research Database (Denmark)

    Veluri, Badrinath (Badri); Jensen, Henrik Myhre

    2013-01-01

    Multilayered electronic components, typically of heterogeneous materials, delaminate under thermal and mechanical loading. A phenomenological model focused on modeling the shape of such interface cracks close to corners in layered interconnect structures for calculating the critical stress...

  9. The Structured Intuitive Model for Product Line Economics (SIMPLE)

    National Research Council Canada - National Science Library

    Clements, Paul C; McGregor, John D; Cohen, Sholom G

    2005-01-01

    This report presents the Structured Intuitive Model of Product Line Economics (SIMPLE), a general-purpose business model that supports the estimation of the costs and benefits in a product line development organization...

  10. Configurational Model for Conductivity of Stabilized Fluorite Structure Oxides

    DEFF Research Database (Denmark)

    Poulsen, Finn Willy

    1981-01-01

    The formalism developed here furnishes means by which ionic configurations, solid solution limits, and conductivity mechanisms in doped fluorite structures can be described. The present model differs markedly from previous models but qualitatively reproduces reality. The analysis reported...

  11. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    OpenAIRE

    S. Mori; K. Kitsukawa; M. Hisakado

    2006-01-01

    We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution h...

  12. A tutorial on fundamental model structures for railway timetable optimization

    DEFF Research Database (Denmark)

    Harrod, Steven

    2012-01-01

    This guide explains the role of railway timetables relative to all other railway scheduling activities, and then presents four fundamental timetable formulations suitable for optimization. Timetabling models may be classified according to whether they explicitly model the track structure...

  13. Physical Modelling of Geotechnical Structures in Ports and Offshore

    Directory of Open Access Journals (Sweden)

    Bałachowski Lech

    2017-04-01

    The physical modelling of subsoil behaviour and soil-structure interaction is essential for the proper design of offshore structures and port infrastructure. A brief introduction to such modelling of geoengineering problems is presented, and some methods and experimental devices are described. The relationships between modelling scales are given. Some examples of penetration testing results in a centrifuge and a calibration chamber are presented. Prospects for physical modelling in geotechnics are also described.
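    The "relationships between modelling scales" mentioned in the record follow standard centrifuge scaling laws: a 1/N-scale model spun at N g reproduces prototype stresses one-to-one, while lengths scale by N and diffusion-controlled times by N². The sketch below applies these standard relationships to illustrative numbers; the helper function and values are assumptions for illustration.

```python
# Sketch: standard centrifuge scaling laws. A 1/N-scale model tested at
# N g maps to the prototype with length x N, stress x 1, and
# diffusion-controlled time x N^2. Numbers are illustrative.
def centrifuge_to_prototype(N, model_length_m, model_diffusion_time_s):
    return {
        "prototype_length_m": model_length_m * N,
        "prototype_diffusion_time_s": model_diffusion_time_s * N**2,
        "stress_scale": 1.0,  # stresses identical in model and prototype
    }

# A 0.4 m model consolidating for 1 hour at 50 g ...
proto = centrifuge_to_prototype(N=50, model_length_m=0.4, model_diffusion_time_s=3600.0)
print(proto)
```

    The N² time scaling is what makes centrifuge testing attractive for consolidation problems: hours in the centrifuge represent years at prototype scale.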

  14. Model Servqual Dengan Pendekatan Structural Equation Modeling (Studi Pada Mahasiswa Sistem Informasi)

    OpenAIRE

    Nurfaizal, Yusmedi

    2015-01-01

    This study is entitled "SERVQUAL MODEL WITH A STRUCTURAL EQUATION MODELING APPROACH (A Study of Information Systems Students)". Its aim is to examine the Servqual model using a structural equation modeling approach among information systems students. The researcher decided to take a sample of 100 respondents. SEM analysis was used to test the model. The results show that tangibility, reliability, responsiveness, assurance and empathy have an influence...

  15. Structure ignition assessment model (SIAM)

    Science.gov (United States)

    Jack D. Cohen

    1995-01-01

    Major wildland/urban interface fire losses, principally residences, continue to occur. Although the problem is not new, the specific mechanisms are not well known on how structures ignite in association with wildland fires. In response to the need for a better understanding of wildland/urban interface ignition mechanisms and a method of assessing the ignition risk,...

  16. New rheological model for concrete structural analysis

    International Nuclear Information System (INIS)

    Chern, J.C.

    1984-01-01

    Long-time deformation is of interest in estimating stresses of the prestressed concrete reactor vessel, in predicting cracking due to shrinkage or thermal dilatation, and in the design of leak-tight structures. Many interacting influences exist among creep, shrinkage and cracking of concrete. An interaction which researchers have long observed is that, under simultaneous drying and loading, the deformation of a concrete structure under the combined effect is larger than the sum of the shrinkage deformation of the structure at no load and the deformation of the sealed structure. The excess deformation, the difference between observed test data and conventional analysis, is known as the Pickett effect. A constitutive relation explaining the Pickett effect and other similar superposition problems, which includes creep, shrinkage (or thermal dilatation), cracking and aging, was developed with an efficient time-step numerical algorithm. The total deformation in the analysis is the sum of the strains due to elastic deformation and creep, cracking, and shrinkage with thermal dilatation. Instead of a sudden stress reduction to zero after the attainment of the strength limit, the gradual strain-softening of concrete (a gradual decline of stress at increasing strain) is considered.

  17. CHEMICAL STRUCTURES AND THEORETICAL MODELS OF ...

    African Journals Online (AJOL)


    structure of the flames was computed by a simulation code with three ... When all intermediate species were eluted from the Porapak column, the molecular sieve ... This compression greatly enhances the detection limit which .... reduced, to reproduce the sampling conditions, a marked reduction in the thermocouple signal.

  18. Modelling verb selection within argument structure constructions

    NARCIS (Netherlands)

    Matusevych, Yevgen; Alishahi, Afra; Backus, Albert

    2017-01-01

    This article looks into the nature of cognitive associations between verbs and argument structure constructions (ASCs). Existing research has shown that distributional and semantic factors affect speakers' choice of verbs in ASCs. A formal account of this theory has been proposed by Ellis,

  19. Predicting nucleic acid binding interfaces from structural models of proteins.

    Science.gov (United States)

    Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael

    2012-02-01

    The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to a relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from surfaces of protein structural models, obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acid binding interface on structural models of proteins. Employing a combined patch approach, we show that patches extracted from an ensemble of models better predict the real nucleic acid binding interfaces compared with patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will be further applicable for predicting other functional surfaces of proteins with unknown structure. Copyright © 2011 Wiley Periodicals, Inc.

  20. Development of the tube bundle structure for fluid-structure interaction analysis model - Intermediate Report -

    International Nuclear Information System (INIS)

    Yoon, Kyung Ho; Kim, Jae Yong; Lee, Kang Hee; Lee, Young Ho; Kim, Hyung Kyu

    2009-07-01

    Tube bundle structures within a boiler or heat exchanger are subject to fluid-structure, thermal-structure, and fluid-thermal-structure coupled boundary conditions. Under these complicated boundary conditions, fluid-structure interaction (FSI) occurs when fluid flow causes deformation of the structure; this deformation, in turn, changes the boundary conditions for the fluid flow. Conventionally, structural analyses have been executed by dividing the problem into separate fluid and structural disciplines and analyzing each independently. However, the fluid dynamic forces affect the behavior of the structure, and the vibration of the structure feeds back into the fluid. In the FSI analysis model, the fluid and structure models were created separately, the FSI boundary condition was defined, and both were analyzed simultaneously in one domain. The analysis results were compared with experimental results to validate the analysis model. A flow-induced vibration test was executed with a single-rod configuration, and the vibration amplitudes of a fuel rod were measured with a laser vibrometer system in the x- and y-directions. The analysis results did not closely match the test data, but the trend was very similar to the test result. In the FSI coupled analysis, the turbulence model was critical to the accuracy of the analysis model, so the analysis model requires further study

  1. PROBLEMS OF MATHEMATICAL MODELING OF THE ENTERPRISES ORGANIZATIONAL STRUCTURE

    Directory of Open Access Journals (Sweden)

    N. V. Andrianov

    2006-01-01

    Full Text Available The analysis of the mathematical models which can be used at optimization of the control system of the enterprise organizational structure is presented. The new approach to the mathematical modeling of the enterprise organizational structure, based on using of temporary characteristics of the control blocks working, is formulated

  2. A sequential model for the structure of health care utilization.

    NARCIS (Netherlands)

    Herrmann, W.J.; Haarmann, A.; Baerheim, A.

    2017-01-01

    Traditional measurement models of health care utilization are not able to represent the complex structure of health care utilization. In this qualitative study, we, therefore, developed a new model to represent the health care utilization structure. In Norway and Germany, we conducted episodic

  3. Structural model of dodecameric heat-shock protein Hsp21

    DEFF Research Database (Denmark)

    Rutsdottir, Gudrun; Härmark, Johan; Weide, Yoran

    2017-01-01

    for investigating structure-function relationships of Hsp21 and understanding these sequence variations, we developed a structural model of Hsp21 based on homology modeling, cryo-EM, cross-linking mass spectrometry, NMR, and small-angle X-ray scattering. Our data suggest a dodecameric arrangement of two trimer...

  4. A Structural Equation Modeling Analysis of Influences on Juvenile Delinquency

    Science.gov (United States)

    Barrett, David E.; Katsiyannis, Antonis; Zhang, Dalun; Zhang, Dake

    2014-01-01

    This study examined influences on delinquency and recidivism using structural equation modeling. The sample comprised 199,204 individuals: 99,602 youth whose cases had been processed by the South Carolina Department of Juvenile Justice and a matched control group of 99,602 youth without juvenile records. Structural equation modeling for the…

  5. A Paper Model of DNA Structure and Replication.

    Science.gov (United States)

    Sigismondi, Linda A.

    1989-01-01

    A paper model which is designed to give students a hands-on experience during lecture and blackboard instruction on DNA structure is provided. A list of materials, paper patterns, and procedures for using the models to teach DNA structure and replication are given. (CW)

  6. Icosahedral symmetry described by an incommensurately modulated crystal structure model

    DEFF Research Database (Denmark)

    Wolny, Janusz; Lebech, Bente

    1986-01-01

    A crystal structure model of an incommensurately modulated structure is presented. Although six different reciprocal vectors are used to describe the model, all calculations are done in three dimensions, making calculation of the real-space structure trivial. Using this model, it is shown that both...... the positions of the Bragg reflections and information about the relative intensities of these reflections are in full accordance with the diffraction patterns reported for microcrystals of the rapidly quenched Al86Mn14 alloy. It is also shown that at least the local structure possesses full icosahedral...

  7. Modeling of the atomic and electronic structures of interfaces

    International Nuclear Information System (INIS)

    Sutton, A.P.

    1988-01-01

    Recent tight binding and Car-Parrinello simulations of grain boundaries in semiconductors are reviewed. A critique is given of some models of embrittlement that are based on electronic structure considerations. The structural unit model of grain boundary structure is critically assessed using some results for mixed tilt and twist grain boundaries. A new method of characterizing interfacial structure in terms of bond angle distribution functions is described. A new formulation of thermodynamic properties of interfaces is presented which focusses on the local atomic environment. Effective, temperature dependent N-body atomic interactions are derived for studying grain boundary structure at elevated temperature

  8. Distributed Prognostics Based on Structural Model Decomposition

    Data.gov (United States)

    National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...

  9. Mathematical modeling and optimization of complex structures

    CERN Document Server

    Repin, Sergey; Tuovinen, Tero

    2016-01-01

    This volume contains selected papers in three closely related areas: mathematical modeling in mechanics, numerical analysis, and optimization methods. The papers are based upon talks presented  on the International Conference for Mathematical Modeling and Optimization in Mechanics, held in Jyväskylä, Finland, March 6-7, 2014 dedicated to Prof. N. Banichuk on the occasion of his 70th birthday. The articles are written by well-known scientists working in computational mechanics and in optimization of complicated technical models. Also, the volume contains papers discussing the historical development, the state of the art, new ideas, and open problems arising in  modern continuum mechanics and applied optimization problems. Several papers are concerned with mathematical problems in numerical analysis, which are also closely related to important mechanical models. The main topics treated include:  * Computer simulation methods in mechanics, physics, and biology;  * Variational problems and methods; minimiz...

  10. Structured Mathematical Modeling of Industrial Boiler

    Directory of Open Access Journals (Sweden)

    Abdullah Nur Aziz

    2014-04-01

    Full Text Available As a major utility system in industry, boilers consume a large portion of the total energy and costs. Significant reduction of boiler operation costs can be gained through improvements in efficiency. In accomplishing such a goal, an adequate dynamic model that comprehensively reflects boiler characteristics is required. This paper outlines the idea of developing a mathematical model of a water-tube industrial boiler based on first principles, guided by the bond graph method in its derivation. The model describes the temperature dynamics of the boiler subsystems such as the economizer, steam drum, desuperheater, and superheater. The mathematical model was examined using industrial boiler performance test data. It can be used to build a boiler simulator or help operators run a boiler effectively.
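    A minimal sketch of the kind of first-principles temperature dynamics such a model contains (a hedged illustration, not the paper's bond-graph model; all parameter values are hypothetical): a lumped energy balance for one subsystem, integrated with explicit Euler.

```python
# dT/dt = (m_dot * cp * (T_in - T) + Q) / (M * cp)
# i.e. heat carried in by the feedwater plus heat absorbed from the flue gas,
# divided by the thermal capacity of the water held in the subsystem.
m_dot, cp, M = 2.0, 4186.0, 500.0   # flow [kg/s], heat capacity [J/(kg K)], held mass [kg]
Q = 1.0e6                           # heat absorbed from flue gas [W]
T_in, T = 100.0, 100.0              # inlet and initial temperature [deg C]

dt = 1.0                            # time step [s]; time constant is M/m_dot = 250 s
for _ in range(36000):              # simulate 10 h to approach steady state
    dTdt = (m_dot * cp * (T_in - T) + Q) / (M * cp)
    T += dt * dTdt

# steady state: T -> T_in + Q / (m_dot * cp)
print(round(T, 1))
```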

  11. Structured Mathematical Modeling of Industrial Boiler

    OpenAIRE

    Aziz, Abdullah Nur; Nazaruddin, Yul Yunazwin; Siregar, Parsaulian; Bindar, Yazid

    2014-01-01

    As a major utility system in industry, boilers consume a large portion of the total energy and costs. Significant reduction of boiler cost operation can be gained through improvements in efficiency. In accomplishing such a goal, an adequate dynamic model that comprehensively reflects boiler characteristics is required. This paper outlines the idea of developing a mathematical model of a water-tube industrial boiler based on first principles guided by the bond graph method in its derivation. T...

  12. Discretization model for nonlinear dynamic analysis of three dimensional structures

    International Nuclear Information System (INIS)

    Hayashi, Y.

    1982-12-01

    A discretization model for nonlinear dynamic analysis of three dimensional structures is presented. The discretization is achieved through a three dimensional spring-mass system and the dynamic response obtained by direct integration of the equations of motion using central diferences. First the viability of the model is verified through the analysis of homogeneous linear structures and then its performance in the analysis of structures subjected to impulsive or impact loads, taking into account both geometrical and physical nonlinearities is evaluated. (Author) [pt
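    The integration scheme the abstract describes (a spring-mass discretization advanced by central differences) can be sketched for a single degree of freedom; the mass, stiffness, and time step here are illustrative, not from the report:

```python
import numpy as np

m, k = 1.0, 4.0 * np.pi**2          # mass, stiffness -> natural period T = 1 s
dt = 0.001                          # time step (explicit scheme: needs dt < T/pi)
n_steps = 1000                      # integrate over exactly one period

u = np.zeros(n_steps + 1)           # displacement history
u[0] = 1.0                          # initial displacement, zero initial velocity
# central differences need a starter step: u[1] from a Taylor expansion
a0 = -k * u[0] / m
u[1] = u[0] + 0.5 * dt**2 * a0

for i in range(1, n_steps):
    a = -k * u[i] / m               # restoring force divided by mass
    u[i + 1] = 2.0 * u[i] - u[i - 1] + dt**2 * a

# after one natural period the mass should return close to u = 1
print(round(u[-1], 3))
```

For the nonlinear case the report targets, the same loop applies with the restoring force evaluated from the current (geometrically and physically nonlinear) state instead of `-k * u[i]`.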

  13. MODELING OF OPTIMUM COMPANY MANAGEMENT STRUCTURE

    Directory of Open Access Journals (Sweden)

    E. V. Shchemeleva

    2007-01-01

    Full Text Available The paper considers one direction for solving the problem of optimizing the managerial staff of entrepreneurial structures on the basis of a matrix method of flow theory. The method's toolset makes it possible to reduce the number of managers, senior staff, and employees of an organization by redistributing the time required for performing specific managerial and administrative functions within structural divisions. In this regard, an important point is preservation of the total duration of an administrative cycle. The effect of the optimization is a reasonable reduction of the organization's spending on labor payment, which is of current importance for the present situation in the Republic of Belarus. Besides, the solution of the specified problem contributes to receiving indirect economic benefits. The suggested method was examined by the author on a concrete example.

  14. Molecular Models of Genetic and Organismic Structures

    CERN Document Server

    Baianu, I C

    2004-01-01

    In recent studies we showed that the earlier relational theories of organismic sets (Rashevsky, 1967), Metabolic-Replication (M,R)-systems (Rosen, 1958) and molecular sets (Bartholomay, 1968) share a joint foundation that can be studied within a unified categorical framework of functional organismic structures (Baianu, 1980). This is possible because all relational theories have a biomolecular basis; that is, complex structures such as genomes, cells, organs and biological organisms are mathematically represented in terms of biomolecular properties and entities that are often implicit in their representation axioms. The definition of organismic sets, for example, requires that certain essential quantities be determined from experiment: these are specified by special sets of values of general observables that are derived from physicochemical measurements (Baianu, 1970; Baianu, 1980; Baianu et al., 2004a). Such observables are context-dependent and lead directly to natural transformations in categories and Topoi, that are...

  15. Exploratory Topology Modelling of Form-Active Hybrid Structures

    DEFF Research Database (Denmark)

    Holden Deleuran, Anders; Pauly, Mark; Tamke, Martin

    2016-01-01

    The development of novel form-active hybrid structures (FAHS) is impeded by a lack of modelling tools that allow for exploratory topology modelling of shaped assemblies. We present a flexible and real-time computational design modelling pipeline developed for the exploratory modelling of FAHS...... that enables designers and engineers to iteratively construct and manipulate form-active hybrid assembly topology on the fly. The pipeline implements Kangaroo2's projection-based methods for modelling hybrid structures consisting of slender beams and cable networks. A selection of design modelling sketches...

  16. Model Reduction in Dynamic Finite Element Analysis of Lightweight Structures

    DEFF Research Database (Denmark)

    Flodén, Ola; Persson, Kent; Sjöström, Anders

    2012-01-01

    models may be created by assembling models of floor and wall structures into large models of complete buildings. When assembling the floor and wall models, the number of degrees of freedom quickly increases to exceed the limits of computer capacity, at least in a reasonable amount of computational time...... Hz. Three different methods of model reduction were investigated: Guyan reduction, component mode synthesis, and a third approach in which a new finite element model was created with structural elements. Eigenvalue and steady-state analyses were performed in order to compare the errors...

  17. Constitutive Models for Design of Sustainable Concrete Structures

    Science.gov (United States)

    Brozovsky, J.; Cajka, R.; Koktan, J.

    2018-04-01

    The paper deals with numerical models of reinforced concrete which are expected to be useful to enhance design of sustainable reinforced concrete structures. That is, the models which can deliver higher precision of results than the linear elastic models but which are still feasible for engineering practice. Such models can be based on an elastic-plastic material. The paper discusses properties of such models. A material model based of the Chen criteria and the Ohtani hardening model for concrete was selected for further development. There is also given a comparison of behaviour of such model with behaviour of a more complex smeared crack model which is based on principles of fracture mechanics.

  18. Algebraic fermion models and nuclear structure physics

    International Nuclear Information System (INIS)

    Troltenier, Dirk; Blokhin, Andrey; Draayer, Jerry P.; Rompf, Dirk; Hirsch, Jorge G.

    1996-01-01

    Recent experimental and theoretical developments are generating renewed interest in the nuclear SU(3) shell model, and this extends to the symplectic model, with its Sp(6,R) symmetry, which is a natural multi-ħω extension of the SU(3) theory. First and foremost, an understanding of how the dynamics of a quantum rotor is embedded in the shell model has established it as the model of choice for describing strongly deformed systems. Second, the symplectic model extension of the 0ħω theory can be used to probe additional degrees of freedom, like core polarization and vorticity modes that play a key role in providing a full description of quadrupole collectivity. Third, the discovery and understanding of pseudo-spin has allowed for an extension of the theory from light (A≤40) to heavy (A≥100) nuclei. Fourth, a user-friendly computer code for calculating reduced matrix elements of operators that couple SU(3) representations is now available. And finally, since the theory is designed to cope with deformation in a natural way, microscopic features of deformed systems can be probed; for example, the theory is now being employed to study double beta decay and thereby serves to probe the validity of the standard model of particles and their interactions. A subset of these topics will be considered in this course--examples cited include: a consideration of the origin of pseudo-spin symmetry; a SU(3)-based interpretation of the coupled-rotor model; early results of double beta decay studies; and some recent developments on the pseudo-SU(3) theory. Nothing will be said about other fermion-based theories; students are referred to reviews in the literature for reports on developments in these related areas.

  19. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    Science.gov (United States)

    Mori, Shintaro; Kitsukawa, Kenji; Hisakado, Masato

    2008-11-01

    We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution has singular correlation structures, reflecting the credit market implications. We point out two possible origins of the singular behavior.
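    The Beta binomial distribution model mentioned in the abstract can be sketched as follows (a hedged illustration, not the authors' calibration to iTraxx-CJ data; the portfolio size, default probability, and default correlation are invented for the example). Defaults are exchangeable Bernoulli variables whose common default probability is mixed over a Beta(a, b) distribution:

```python
from math import comb, exp, lgamma

def log_beta(x, y):
    # log of the Beta function B(x, y)
    return lgamma(x) + lgamma(y) - lgamma(x + y)

def beta_binom_pmf(k, n, a, b):
    # P(K = k) = C(n, k) * B(k + a, n - k + b) / B(a, b)
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

n, p, rho = 50, 0.1, 0.2   # names in the portfolio, default prob, default correlation
# moment matching: E[X_i] = a/(a+b) = p and corr(X_i, X_j) = 1/(a+b+1) = rho
s = 1.0 / rho - 1.0
a, b = p * s, (1.0 - p) * s

pmf = [beta_binom_pmf(k, n, a, b) for k in range(n + 1)]
expected_defaults = sum(k * q for k, q in enumerate(pmf))
print(round(sum(pmf), 6), round(expected_defaults, 6))   # total mass 1, mean n*p
```

The singular correlation structures the paper discusses show up here as the fat right tail of `pmf` relative to an independent binomial with the same mean.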

  20. Development of the tube bundle structure for fluid-structure interaction analysis model

    International Nuclear Information System (INIS)

    Yoon, Kyung Ho; Kim, Jae Yong

    2010-02-01

    Tube bundle structures within a boiler or heat exchanger are subject to fluid-structure, thermal-structure, and fluid-thermal-structure coupled boundary conditions. Under these complicated boundary conditions, fluid-structure interaction (FSI) occurs when fluid flow causes deformation of the structure; this deformation, in turn, changes the boundary conditions for the fluid flow. Conventionally, the fluid and structural disciplines have been analyzed independently of each other. However, the fluid dynamic forces affect the behavior of the structure, and the vibration of the structure feeds back into the fluid. In the FSI analysis model, the fluid and structure models were created separately, the FSI boundary condition was defined, and both were analyzed simultaneously in one domain. The analysis results were compared with experimental results to validate the analysis model. A flow-induced vibration test was executed with a single-rod configuration, and the vibration amplitudes of a fuel rod were measured with a laser vibrometer system in the x- and y-directions. The analysis results did not closely match the test data, but the trend was very similar to the test result. In the FSI coupled analysis, the turbulence model was critical to the accuracy of the analysis model, so the analysis model requires further study

  1. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty: STRUCTURAL UNCERTAINTY DIAGNOSTICS

    Energy Technology Data Exchange (ETDEWEB)

    Moges, Edom [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Demissie, Yonas [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Li, Hong-Yi [Hydrology Group, Pacific Northwest National Laboratory, Richland Washington USA

    2016-04-01

    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied for two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are witnessed through diagnostic measures. In contrast, the diagnostics and aggregated performance measures show that the French Broad catchment has a homogeneous response, making the single model adequate to capture it.

  2. Blast Testing and Modelling of Composite Structures

    DEFF Research Database (Denmark)

    Giversen, Søren

    The motivation for this work is based on a desire for finding lightweight alternatives to high strength steel as the material to use for armouring in military vehicles. With the use of high strength steel, an increase in the level of armouring has a significant impact on the vehicle weight......, affecting for example the manoeuvrability and top speed negatively, which ultimately affects the safety of the personnel in the vehicle. Strong and light materials, such as fibre reinforced composites, could therefore act as substitutes for the high strength steel, and minimize the impact on the vehicle...... work this set-up should be improved such that the modelled pressure can be validated. For tests performed with a 250 g charge load, comparisons with model data showed poor agreement. This was found to be due to improper design of the modelled laminate panels, where the layer interface delamination...

  3. Measurement Model Specification Error in LISREL Structural Equation Models.

    Science.gov (United States)

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  4. The Structure of Models of Peano Arithmetic

    CERN Document Server

    Kossak, Roman

    2006-01-01

    Aimed at graduate students and research logicians and mathematicians, this much-awaited text covers over forty years of work on relative classification theory for non-standard models of arithmetic. With graded exercises at the end of each chapter, the book covers basic isomorphism invariants: families of types realized in a model, lattices of elementary substructures and automorphism groups. Many results involve applications of the powerful technique of minimal types due to Haim Gaifman, and some of the results are classical but have never been published in a book form before.

  5. Track structure model of cell damage in space flight

    Science.gov (United States)

    Katz, Robert; Cucinotta, Francis A.; Wilson, John W.; Shinn, Judy L.; Ngo, Duc M.

    1992-01-01

    The phenomenological track-structure model of cell damage is discussed. A description of the application of the track-structure model with the NASA Langley transport code for laboratory and space radiation is given. Comparisons to experimental results for cell survival during exposure to monoenergetic, heavy-ion beams are made. The model is also applied to predict cell damage rates and relative biological effectiveness for deep-space exposures.

  6. Acoustic Modeling of Lightweight Structures: A Literature Review

    Science.gov (United States)

    Yang, Shasha; Shen, Cheng

    2017-10-01

    This paper gives an overview of acoustic modeling for three kinds of typical lightweight structures: the double-leaf plate system, stiffened single (or double) plates, and porous materials. Classical models are cited to provide a theoretical framework for modeling the acoustic properties of lightweight structures, and important research advances by our research group and other authors are introduced to describe the current state of the art. Finally, remaining problems and future research directions are briefly summarized.

  7. Structural Acoustic Physics Based Modeling of Curved Composite Shells

    Science.gov (United States)

    2017-09-19

    NUWC-NPT Technical Report 12,236, 19 September 2017. Structural Acoustic Physics-Based Modeling of Curved Composite Shells, Rachel E. Hesse... The objective of this study was to use physics-based modeling (PBM) to investigate wave propagation through curved shells that are subjected to acoustic excitation.

  8. Modeling Equity for Alternative Water Rate Structures

    Science.gov (United States)

    Griffin, R.; Mjelde, J.

    2011-12-01

    The rising popularity of increasing block rates for urban water runs counter to mainstream economic recommendations, yet decision makers in rate design forums are attracted to the notion of higher prices for larger users. Among economists, it is widely appreciated that uniform rates have stronger efficiency properties than increasing block rates, especially when volumetric prices incorporate intrinsic water value. Yet, except for regions where water market purchases have forced urban authorities to include water value in water rates, economic arguments have weakly penetrated policy. In this presentation, recent evidence will be reviewed regarding long term trends in urban rate structures while observing economic principles pertaining to these choices. The main objective is to investigate the equity of increasing block rates as contrasted to uniform rates for a representative city. Using data from four Texas cities, household water demand is established as a function of marginal price, income, weather, number of residents, and property characteristics. Two alternative rate proposals are designed on the basis of recent experiences for both water and wastewater rates. After specifying a reasonable number (~200) of diverse households populating the city and parameterizing each household's characteristics, every household's consumption selections are simulated for twelve months. This procedure is repeated for both rate systems. Monthly water and wastewater bills are also computed for each household. Most importantly, while balancing the budget of the city utility we compute the effect of switching rate structures on the welfares of households of differing types. Some of the empirical findings are as follows. Under conditions of absent water scarcity, households of opposing characters such as low versus high income do not have strong preferences regarding rate structure selection. This changes as water scarcity rises and as water's opportunity costs are allowed to
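    The uniform-versus-increasing-block comparison at the center of the study can be illustrated with a small bill calculator (a hedged sketch; the fixed charge, tier widths, and prices below are hypothetical, not taken from the four Texas cities):

```python
def uniform_bill(use_kgal, price=4.0, fixed=10.0):
    """Monthly bill with one volumetric price for all units consumed."""
    return fixed + price * use_kgal

def block_bill(use_kgal, fixed=10.0,
               tiers=((6, 2.5), (12, 4.5), (float("inf"), 7.0))):
    """Monthly bill where successive blocks of use are priced at rising rates.
    tiers: (upper bound of block in kgal, price per kgal within the block)."""
    bill, lower = fixed, 0.0
    for upper, price in tiers:
        if use_kgal > lower:
            bill += price * (min(use_kgal, upper) - lower)
        lower = upper
    return bill

# a low-use household pays less under blocks; a high-use household pays more
print(uniform_bill(5), block_bill(5))     # -> 30.0 22.5
print(uniform_bill(20), block_bill(20))   # -> 90.0 108.0
```

Repeating this calculation across simulated households of differing types is what lets the study attach welfare changes to a switch in rate structure.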

  9. Modeling 3-D solar wind structure

    Czech Academy of Sciences Publication Activity Database

    Odstrčil, Dušan

    2003-01-01

    Roč. 32, č. 4 (2003), s. 497-506 ISSN 0273-1177 R&D Projects: GA AV ČR IAA3003003; GA AV ČR IBS1003006 Institutional research plan: CEZ:AV0Z1003909 Keywords : solar wind * modeling Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 0.483, year: 2003

  10. Hadron structure in the ladder model

    International Nuclear Information System (INIS)

    Soper, D.E.

    1979-01-01

    The (flavor non-singlet) Green's function to find a far-off-shell quark in a hadron is obtained in the renormalization group improved ladder model for QCD in the space-like axial gauge. Particular attention is paid to the role of the singularity in the gluon propagator. 4 figures

  11. Structured Statistical Models of Inductive Reasoning

    Science.gov (United States)

    Kemp, Charles; Tenenbaum, Joshua B.

    2009-01-01

    Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge and should explain how different kinds of knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. This article presents a Bayesian framework that attempts to meet…

  12. Testing structural stability in macroeconometric models

    NARCIS (Netherlands)

    Boldea, O.; Hall, A.R.; Hashimzade, N.; Thornton, M.A.

    2013-01-01

    Since the earliest days of macroeconometric analysis, researchers have been concerned about the appropriateness of the assumption that model parameters remain constant over long periods of time; for example see Tinbergen (1939). This concern is also central to the so-called Lucas (1976) critique

  13. Meteorite Unit Models for Structural Properties

    Science.gov (United States)

    Agrawal, Parul; Carlozzi, Alexander A.; Karajeh, Zaid S.; Bryson, Kathryn L.

    2017-10-01

    To assess the threat posed by an asteroid entering Earth’s atmosphere, one must predict if, when, and how it fragments during entry. A comprehensive understanding of asteroid material properties is needed to achieve this objective. At present, meteorites found on Earth are the only objects from entering asteroids that can be used as representative material and tested in a laboratory. Because of their complex composition, it is challenging and expensive to obtain reliable material properties through laboratory tests for a family of meteorites. To circumvent this challenge, meteorite unit models are developed to determine effective material properties, including Young’s modulus, compressive and tensile strengths, and Poisson’s ratio, which in turn help deduce the properties of asteroids. The meteorite unit model is a representative volume that accounts for diverse minerals, porosity, cracks, and matrix composition. The Young’s modulus and Poisson’s ratio of the meteorite units are calculated by performing several hundred Monte Carlo simulations that randomly distribute the various phases inside these units. Once these values are obtained, cracks are introduced into the units. The size, orientation, and distribution of cracks are derived from CT scans and visual scans of various meteorites. Subsequently, simulations are performed to obtain stress-strain relations, strength, and effective modulus values in the presence of these cracks. Meteorite unit models are presented for H, L, and LL ordinary chondrites, as well as for terrestrial basalt. In the case of the latter, data from the simulations are compared with experimental data to validate the methodology. These meteorite unit models will subsequently be used in fragmentation modeling of full-scale asteroids.
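    The Monte Carlo averaging step can be sketched as follows (a hedged illustration, not the authors' code: a Voigt-Reuss-Hill average of a two-phase mineral/matrix mix stands in for the full simulations, and every property value below is an assumption):

```python
import random

E_MINERAL, E_MATRIX = 140.0, 30.0         # phase moduli [GPa], hypothetical

def unit_modulus(rng):
    """Effective Young's modulus of one randomly composed meteorite unit."""
    f_min = rng.uniform(0.4, 0.8)         # mineral fraction of the solid phase
    phi = rng.uniform(0.0, 0.15)          # porosity
    voigt = f_min * E_MINERAL + (1.0 - f_min) * E_MATRIX            # uniform-strain bound
    reuss = 1.0 / (f_min / E_MINERAL + (1.0 - f_min) / E_MATRIX)    # uniform-stress bound
    hill = 0.5 * (voigt + reuss)          # Voigt-Reuss-Hill average of the skeleton
    return hill * (1.0 - phi)             # simple linear porosity knockdown

rng = random.Random(42)
samples = [unit_modulus(rng) for _ in range(1000)]
mean_E = sum(samples) / len(samples)
print(round(mean_E, 1))                   # effective modulus estimate [GPa]
```

The spread of `samples`, not just the mean, is what such a study propagates into the fragmentation model.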

  14. Origin and spread of the 1278insTATC mutation causing Tay-Sachs disease in Ashkenazi Jews: genetic drift as a robust and parsimonious hypothesis.

    Science.gov (United States)

    Frisch, Amos; Colombo, Roberto; Michaelovsky, Elena; Karpati, Mazal; Goldman, Boleslaw; Peleg, Leah

    2004-03-01

    The 1278insTATC is the most prevalent beta-hexosaminidase A ( HEXA) gene mutation causing Tay-Sachs disease (TSD), one of the four lysosomal storage diseases (LSDs) occurring at elevated frequencies among Ashkenazi Jews (AJs). To investigate the genetic history of this mutation in the AJ population, a conserved haplotype (D15S981:175-D15S131:240-D15S1050:284-D15S197:144-D15S188:418) was identified in 1278insTATC chromosomes from 55 unrelated AJ individuals (15 homozygotes and 40 heterozygotes for the TSD mutation), suggesting the occurrence of a common founder. When two methods were used for analysis of linkage disequilibrium (LD) between flanking polymorphic markers and the disease locus and for the study of the decay of LD over time, the estimated age of the insertion was found to be 40+/-12 generations (95% confidence interval: 30-50 generations), so that the most recent common ancestor of the mutation-bearing chromosomes would date to the 8th-9th century. This corresponds with the demographic expansion of AJs in central Europe, following the founding of the Ashkenaz settlement in the early Middle Ages. The results are consistent with the geographic distribution of the main TSD mutation, 1278insTATC being more common in central Europe, and with the coalescent times of mutations causing two other LSDs, Gaucher disease and mucolipidosis type IV. Evidence for the absence of a determinant positive selection (heterozygote advantage) over the mutation is provided by a comparison between the estimated age of 1278insTATC and the probability of the current AJ frequency of the mutant allele as a function of its age, calculated by use of a branching-process model. Therefore, the founder effect in a rapidly expanding population arising from a bottleneck provides a robust parsimonious hypothesis explaining the spread of 1278insTATC-linked TSD in AJ individuals.
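
    The linkage-disequilibrium dating idea the abstract relies on can be sketched with a standard decay relation. The inputs below are hypothetical, chosen only so the result lands near the 40 generations reported in the abstract.

```python
import math

# Back-of-the-envelope sketch of LD-decay dating: if a fraction q of
# present-day disease chromosomes still carries the ancestral marker allele,
# and the marker recombines with the disease locus at rate theta per
# generation, the decay q = (1 - theta)**g gives the age g in generations.
# q = 0.67 and theta = 0.01 are assumed numbers, not the study's estimates.
def mutation_age_generations(q, theta):
    """Generations since the founder, from the decay q = (1 - theta)**g."""
    return math.log(q) / math.log(1.0 - theta)

g = mutation_age_generations(q=0.67, theta=0.01)
print(f"estimated age: {g:.0f} generations")  # ~40, i.e. ~1000 years at 25 y/gen
```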

  15. Modelling interstellar structures around Vela X-1

    Science.gov (United States)

    Gvaramadze, V. V.; Alexashov, D. B.; Katushkina, O. A.; Kniazev, A. Y.

    2018-03-01

    We report the discovery of filamentary structures stretched behind the bow-shock-producing high-mass X-ray binary Vela X-1 using the SuperCOSMOS H-alpha Survey and present the results of optical spectroscopy of the bow shock carried out with the Southern African Large Telescope. The geometry of the detected structures suggests that Vela X-1 has encountered a wedge-like layer of enhanced density on its way and that the shocked material of the layer partially outlines a wake downstream of Vela X-1. To substantiate this suggestion, we carried out 3D magnetohydrodynamic simulations of interaction between Vela X-1 and the layer for three limiting cases. Namely, we run simulations in which (i) the stellar wind and the interstellar medium (ISM) were treated as pure hydrodynamic flows, (ii) a homogeneous magnetic field was added to the ISM, while the stellar wind was assumed to be unmagnetized, and (iii) the stellar wind was assumed to possess a helical magnetic field, while there was no magnetic field in the ISM. We found that although the first two simulations can provide a rough agreement with the observations, only the third one allowed us to reproduce not only the wake behind Vela X-1, but also the general geometry of the bow shock ahead of it.

  16. Large-scale runoff generation - parsimonious parameterisation using high-resolution topography

    Science.gov (United States)

    Gong, L.; Halldin, S.; Xu, C.-Y.

    2011-08-01

    World water resources have primarily been analysed by global-scale hydrological models in recent decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. The recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so that baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics.
The TRG algorithm is driven by the
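
    The storage-distribution step described in this abstract can be sketched as follows. This is a hypothetical toy, not the TRG source code: the grid cells, class count and scale value are made up; it only shows the topographic index ln(a/tanβ) being binned into classes whose storage capacities are controlled by a single parameter.

```python
import math

# Hypothetical toy, not the TRG source code: compute the topographic index
# ln(a / tan(beta)) for a handful of made-up grid cells, bin the index range
# into classes, and give each class a storage capacity controlled by a single
# scale parameter, echoing the abstract's "scaled by one parameter".
def topographic_index(upslope_area, slope_rad):
    return math.log(upslope_area / math.tan(slope_rad))

def storage_by_class(indices, n_classes=5, scale=100.0):
    """Storage capacity (mm) per index class; 'scale' is the one free parameter."""
    lo, hi = min(indices), max(indices)
    mids = [lo + (k + 0.5) * (hi - lo) / n_classes for k in range(n_classes)]
    # higher topographic index -> wetter cell -> smaller storage deficit
    return [scale * (1.0 - (m - lo) / (hi - lo)) for m in mids]

# (upslope contributing area per unit contour length, local slope in radians)
cells = [(500.0, 0.05), (1200.0, 0.02), (80.0, 0.20), (2000.0, 0.01), (300.0, 0.10)]
idx = [topographic_index(a, s) for a, s in cells]
caps = storage_by_class(idx)
print([round(c, 1) for c in caps])
```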

  17. Large-scale runoff generation – parsimonious parameterisation using high-resolution topography

    Directory of Open Access Journals (Sweden)

    L. Gong

    2011-08-01

    Full Text Available World water resources have primarily been analysed by global-scale hydrological models in recent decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. The recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so that baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics.
The TRG algorithm

  18. MEMO Organisation Modelling Language (1): Focus on organisational structure

    OpenAIRE

    Frank, Ulrich

    2011-01-01

    Organisation models are at the core of enterprise models, since they represent key aspects of a company's action system. Within MEMO, the Organisation Modelling Language (OrgML) supports the construction of organisation models. These can be divided into two main abstractions: a static abstraction focuses on the structure of an organisation, reflecting the division of labour with respect to static responsibilities, and a dynamic abstraction focuses on models of business processes. ...

  19. Quality assessment of protein model-structures based on structural and functional similarities.

    Science.gov (United States)

    Konopka, Bogumil M; Nebel, Jean-Christophe; Kotulska, Malgorzata

    2012-09-21

    Experimental determination of protein 3D structures is expensive, time-consuming and sometimes impossible. The gap between the number of protein structures deposited in the World Wide Protein Data Bank and the number of sequenced proteins constantly broadens. Computational modeling is deemed to be one of the ways to deal with this problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model a structure from a sequence or from partial structural information, e.g. contact maps. Consequently, biologists can automatically generate a putative 3D structure model of any protein. However, the main issue then becomes evaluation of the model quality, which is one of the most important challenges of structural biology. GOBA (Gene Ontology-Based Assessment) is a novel Protein Model Quality Assessment Program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high-quality model is expected to be structurally similar to proteins functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using the standardized, hierarchical description of proteins provided by the Gene Ontology, combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures. One is a single-model quality assessment method; the other is its modification, which provides a relative measure of model quality. Exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. The validation shows that the method is able to discriminate between good and bad model-structures. The best of the tested GOBA scores achieved mean Pearson correlations of 0.74 and 0.8 to the observed quality of models in our CASP8- and CASP9-based validation sets. 
GOBA also obtained the best result for two targets of CASP8, and
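
    The validation step this abstract describes, correlating a quality score with observed model quality, can be sketched directly. Both score lists below are made-up numbers, not CASP data.

```python
import math

# Minimal sketch of score validation: the Pearson correlation between a
# quality-assessment score and the observed quality of model-structures.
# The two lists are hypothetical, illustrative values only.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

predicted = [0.90, 0.40, 0.70, 0.20, 0.80, 0.50]  # hypothetical GOBA-like scores
observed = [0.85, 0.50, 0.65, 0.30, 0.75, 0.45]   # hypothetical observed quality
print(f"Pearson correlation: {pearson(predicted, observed):.2f}")
```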

  20. Testing the Structure of Hydrological Models using Genetic Programming

    Science.gov (United States)

    Selle, B.; Muttil, N.

    2009-04-01

    Genetic programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that genetic programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, genetic programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface-irrigated pasture to different soil types, water table depths and water ponding times during surface irrigation. Using genetic programming, a simple model of deep percolation was consistently evolved in multiple model runs. This simple and interpretable model confirmed the dominant process contributing to deep percolation that was represented in a previously published conceptual model. Thus, this study shows that genetic programming can be used to evaluate the structure of hydrological models and to gain insight into the dominant processes in hydrological systems.
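
    The structure-search idea in this abstract can be illustrated with a drastically simplified stand-in for genetic programming: instead of evolving expression trees, the sketch scores a fixed menu of candidate model structures against synthetic data. All variables and coefficients are made up, not the lysimeter data.

```python
import random

# Drastically simplified stand-in for genetic programming (no tree evolution):
# score a small menu of alternative model structures against data generated by
# a hypothetical percolation rule, deep = 0.4 * ponding_time. The point is only
# that a structure search can recover a simple dominant process.
random.seed(1)
ponding = [random.uniform(0.0, 10.0) for _ in range(50)]   # ponding time
depth = [random.uniform(0.5, 3.0) for _ in range(50)]      # water table depth
deep_perc = [0.4 * p for p in ponding]                     # "observed" response

candidates = {
    "a * ponding":         lambda a, p, d: a * p,
    "a * depth":           lambda a, p, d: a * d,
    "a * ponding * depth": lambda a, p, d: a * p * d,
}

def mse(model, a):
    return sum((model(a, p, d) - y) ** 2
               for p, d, y in zip(ponding, depth, deep_perc)) / len(ponding)

best = None
for name, model in candidates.items():
    # crude one-dimensional parameter search for each candidate structure
    a_fit = min((k / 100.0 for k in range(1, 101)), key=lambda a: mse(model, a))
    score = mse(model, a_fit)
    if best is None or score < best[2]:
        best = (name, a_fit, score)

print(f"selected structure: {best[0]} with a = {best[1]}")
```

The search correctly selects the single dominant driver, mirroring how the study's evolved model confirmed one dominant percolation process.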

  1. Nonlinear structural mechanics theory, dynamical phenomena and modeling

    CERN Document Server

    Lacarbonara, Walter

    2013-01-01

    Nonlinear Structural Mechanics: Theory, Dynamical Phenomena and Modeling offers a concise, coherent presentation of the theoretical framework of nonlinear structural mechanics, computational methods, applications, parametric investigations of nonlinear phenomena and their mechanical interpretation towards design. The theoretical and computational tools that enable the formulation, solution, and interpretation of nonlinear structures are presented in a systematic fashion so as to gradually attain an increasing level of complexity of structural behaviors, under the prevailing assumptions on the geometry of deformation, the constitutive aspects and the loading scenarios. Readers will find a treatment of the foundations of nonlinear structural mechanics towards advanced reduced models, unified with modern computational tools in the framework of the prominent nonlinear structural dynamic phenomena while tackling both the mathematical and applied sciences. Nonlinear Structural Mechanics: Theory, Dynamical Phenomena...

  2. Modelling charge storage in Euclid CCD structures

    International Nuclear Information System (INIS)

    Clarke, A S; Holland, A; Hall, D J; Burt, D

    2012-01-01

    The primary aim of ESA's proposed Euclid mission is to observe the distribution of galaxies and galaxy clusters, enabling the mapping of the dark architecture of the universe [1]. This requires a high performance detector, designed to endure a harsh radiation environment. The e2v CCD204 image sensor was redesigned for use on the Euclid mission [2]. The resulting e2v CCD273 has a narrower serial register electrode and transfer channel compared to its predecessor, causing a reduction in the size of charge packets stored, thus reducing the number of traps encountered by the signal electrons during charge transfer and improving the serial Charge Transfer Efficiency (CTE) under irradiation [3]. The proposed Euclid CCD has been modelled using the Silvaco TCAD software [4], to test preliminary calculations for the Full Well Capacity (FWC) and the channel potential of the device and provide indications of the volume occupied by varying signals. These results are essential for the realisation of the mission objectives and for radiation damage studies, with the aim of producing empirically derived formulae to approximate signal-volume characteristics in the devices. These formulae will be used in the radiation damage (charge trapping) models. The Silvaco simulations have been tested against real devices to compare the experimental measurements to those predicted in the models. Using these results, the implications of this study on the Euclid mission can be investigated in more detail.

  3. A structural model of technology acceptance

    Directory of Open Access Journals (Sweden)

    Etienne Erasmus

    2015-04-01

    Research purpose: The aim of this study was to test the technology acceptance model within a South African SAP® Enterprise Resource Planning user environment. Motivation for the study: No study could be traced in which the technology acceptance model has been evaluated in the South African context. Research approach, design and method: A cross-sectional survey design was used. The 23-item Technology Acceptance Model Questionnaire was deployed amongst SAP® Enterprise Resource Planning users (N = 241). Main findings: The results confirmed significant paths from perceived usefulness of the information system to attitudes towards and behavioural intentions to use it. Furthermore, behavioural intention to use the system predicted actual use thereof. Perceived ease of use indirectly affected attitudes towards and behavioural intentions to use via perceived usefulness of the information system. Practical/managerial implications: Practitioners should build user confidence by ensuring the ease of use of a new system, providing relevant education, training and guidance and reiterating its usefulness and future added value to the user’s job and career. Contribution/value-add: This study contributes to scientific knowledge regarding the influence of individuals’ perceptions of information system usage on their attitudes, behavioural intentions and actual use of such a system.
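
    The mediated path this abstract reports, ease of use acting on intention via usefulness, can be sketched with simulated data. The path coefficients and noise levels below are assumed purely for the simulation, not the study's SAP survey results.

```python
import random

# Simulated illustration of a mediated path: perceived ease of use ->
# perceived usefulness -> behavioural intention. Coefficients 0.5 and 0.7
# are assumed; the indirect effect is the product of the two OLS slopes.
rng = random.Random(7)
n = 5000
ease = [rng.gauss(0, 1) for _ in range(n)]
useful = [0.5 * e + rng.gauss(0, 0.5) for e in ease]
intention = [0.7 * u + rng.gauss(0, 0.5) for u in useful]

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

a = slope(ease, useful)       # ease -> usefulness path
b = slope(useful, intention)  # usefulness -> intention path
print(f"estimated indirect effect: {a * b:.2f} (true simulated value 0.35)")
```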

  4. Constructive modelling of structural turbulence: computational experiment

    Energy Technology Data Exchange (ETDEWEB)

    Belotserkovskii, O M; Oparin, A M; Troshkin, O V [Institute for Computer Aided Design, Russian Academy of Sciences, Vtoraya Brestskaya st., 19/18, Moscow, 123056 (Russian Federation); Chechetkin, V M [Keldysh Institute for Applied Mathematics, Russian Academy of Sciences, Miusskaya sq., 4, Moscow, 125047 (Russian Federation)], E-mail: o.bel@icad.org.ru, E-mail: a.oparin@icad.org.ru, E-mail: troshkin@icad.org.ru, E-mail: chech@gin@keldysh.ru

    2008-12-15

    Constructively, the analysis of the phenomenon of turbulence must, and can, be performed through direct numerical simulation of the mechanics supposed to be inherent in secondary flows. This mechanics reveals itself through such instances as large vortices, structural instabilities, vortex cascades and the principal modes discussed in this paper. Like fragments of a puzzle, they speak of a motion ordered with its own nuts and bolts, however chaotic it appears at first sight. This opens an opportunity for a multi-oriented approach, of which a prime ideology seems to be a rational combination of grid, spectral and statistical methods. An attempt is made to bring together the above instances and produce an alternative point of view on the phenomenon in question, based on the main laws of conservation.

  5. A parsimonious characterization of change in global age-specific and total fertility rates

    Science.gov (United States)

    2018-01-01

    This study aims to understand trends in global fertility from 1950 to 2010 through the analysis of age-specific fertility rates. This approach incorporates both the overall level, as when the total fertility rate is modeled, and different patterns of age-specific fertility to examine the relationship between changes in age-specific fertility and fertility decline. Singular value decomposition is used to capture the variation in age-specific fertility curves while reducing the number of dimensions, allowing curves to be described nearly fully with three parameters. Regional patterns and trends over time are evident in parameter values, suggesting this method provides a useful tool for considering fertility decline globally. The second and third parameters were analyzed using model-based clustering to examine patterns of age-specific fertility over time and place; four clusters were obtained. A country’s demographic transition can be traced through time by membership in the different clusters, and regional patterns in the trajectories through time and with fertility decline are identified. PMID:29377899
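
    The decomposition step this abstract describes can be sketched with synthetic data. The bell-shaped three-parameter curve family below is an assumption made only to generate plausible-looking schedules; it is not the study's UN fertility series.

```python
import numpy as np

# Sketch with synthetic data: stack age-specific fertility curves into a
# matrix and apply singular value decomposition, so that each curve is
# summarized by a few component weights, as the abstract describes.
rng = np.random.default_rng(0)
ages = np.arange(15, 50)  # single-year ages 15-49

def fertility_curve(level, mean_age, spread):
    """Hypothetical bell-shaped age-specific fertility schedule."""
    curve = np.exp(-0.5 * ((ages - mean_age) / spread) ** 2)
    return level * curve / curve.sum()

curves = np.stack([
    fertility_curve(level=rng.uniform(1.5, 6.0),
                    mean_age=rng.uniform(24.0, 32.0),
                    spread=rng.uniform(4.0, 8.0))
    for _ in range(200)
])

U, s, Vt = np.linalg.svd(curves, full_matrices=False)
explained = float((s[:3] ** 2).sum() / (s ** 2).sum())
print(f"share of variation captured by 3 components: {explained:.3f}")
```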

  6. A Structural Equation Approach to Models with Spatial Dependence

    NARCIS (Netherlands)

    Oud, Johan H. L.; Folmer, Henk

    We introduce the class of structural equation models (SEMs) and corresponding estimation procedures into a spatial dependence framework. SEM allows both latent and observed variables within one and the same (causal) model. Compared with models with observed variables only, this feature makes it

  7. A structural equation approach to models with spatial dependence

    NARCIS (Netherlands)

    Oud, J.H.L.; Folmer, H.

    2008-01-01

    We introduce the class of structural equation models (SEMs) and corresponding estimation procedures into a spatial dependence framework. SEM allows both latent and observed variables within one and the same (causal) model. Compared with models with observed variables only, this feature makes it

  9. De novo structural modeling and computational sequence analysis ...

    African Journals Online (AJOL)

    Different bioinformatics tools and machine learning techniques were used for protein structural classification. De novo protein modeling was performed using the I-TASSER server. The final model obtained was assessed with PROCHECK and DFIRE2, which confirmed that the final model is reliable. Until complete biochemical ...

  10. Simple models of the thermal structure of the Venusian ionosphere

    International Nuclear Information System (INIS)

    Whitten, R.C.; Knudsen, W.C.

    1980-01-01

    Analytical and numerical models of plasma temperatures in the Venusian ionosphere are proposed. The magnitudes of plasma thermal parameters are calculated using thermal-structure data obtained by the Pioneer Venus Orbiter. The simple models are found to be in good agreement with the more detailed models of thermal balance. Daytime and nighttime temperature data along with corresponding temperature profiles are provided

  11. Strained spiral vortex model for turbulent fine structure

    Science.gov (United States)

    Lundgren, T. S.

    1982-01-01

    A model for the intermittent fine structure of high Reynolds number turbulence is proposed. The model consists of slender axially strained spiral vortex solutions of the Navier-Stokes equation. The tightening of the spiral turns by the differential rotation of the induced swirling velocity produces a cascade of velocity fluctuations to smaller scale. The Kolmogorov energy spectrum is a result of this model.

  12. A generative, probabilistic model of local protein structure

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Mardia, Kanti V.; Taylor, Charles C.

    2008-01-01

    Despite significant progress in recent years, protein structure prediction maintains its status as one of the prime unsolved problems in computational biology. One of the key remaining challenges is an efficient probabilistic exploration of the structural space that correctly reflects the relative ... conformational stabilities. Here, we present a fully probabilistic, continuous model of local protein structure in atomic detail. The generative model makes efficient conformational sampling possible and provides a framework for the rigorous analysis of local sequence-structure correlations in the native state ...
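
    The core idea of continuous angular sampling can be sketched as follows. This is a hedged illustration only, not the paper's actual dynamic Bayesian network: the alpha-helix-like means and the concentration value are rough assumed numbers, not fitted parameters.

```python
import math
import random

# Hedged sketch: local backbone conformations sampled by drawing
# dihedral-angle pairs (phi, psi) from a directional distribution,
# here the von Mises via random.vonmisesvariate.
def sample_phi_psi(n, rng=None):
    rng = rng or random.Random(0)
    phi_mu, psi_mu = math.radians(-60.0), math.radians(-45.0)  # assumed helix-like means
    kappa = 8.0  # concentration: larger values cluster tighter around the mean
    return [(rng.vonmisesvariate(phi_mu, kappa),
             rng.vonmisesvariate(psi_mu, kappa)) for _ in range(n)]

samples = sample_phi_psi(1000)
# circular mean of phi, robust to angle wrap-around
mean_phi = math.degrees(math.atan2(
    sum(math.sin(p) for p, _ in samples) / len(samples),
    sum(math.cos(p) for p, _ in samples) / len(samples)))
print(f"circular mean phi: {mean_phi:.1f} degrees (near the assumed -60)")
```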

  13. Calibrated and Interactive Modelling of Form-Active Hybrid Structures

    DEFF Research Database (Denmark)

    Quinn, Gregory; Holden Deleuran, Anders; Piker, Daniel

    2016-01-01

    Form-active hybrid structures (FAHS) couple two or more different structural elements of low self-weight and low or negligible bending stiffness (such as slender beams, cables and membranes) into one structural assembly of high global stiffness. They offer high load-bearing capacity ... software packages which introduce interruptions and data exchange issues in the modelling pipeline. The mechanical precision, stability and open software architecture of Kangaroo has facilitated the development of proof-of-concept modelling pipelines which tackle this challenge and enable powerful ... materially-informed sketching. Making use of a projection-based dynamic relaxation solver for structural analysis, explorative design has proven to be highly effective ...

  14. Deep inelastic structure functions in the chiral bag model

    International Nuclear Information System (INIS)

    Sanjose, V.; Vento, V.; Centro Mixto CSIC/Valencia Univ., Valencia

    1989-01-01

    We calculate the structure functions for deep inelastic scattering on baryons in the cavity approximation to the chiral bag model. The behavior of these structure functions is analyzed in the Bjorken limit. We conclude that scaling is satisfied, but not Regge behavior. A trivial extension as a parton model can be achieved by introducing the structure function for the pion in a convolution picture. In this extended version of the model not only scaling but also Regge behavior is satisfied. Conclusions are drawn from the comparison of our results with experimental data. (orig.)

  15. Deep inelastic structure functions in the chiral bag model

    Energy Technology Data Exchange (ETDEWEB)

    Sanjose, V. (Valencia Univ. (Spain). Dept. de Didactica de las Ciencias Experimentales); Vento, V. (Valencia Univ. (Spain). Dept. de Fisica Teorica; Centro Mixto CSIC/Valencia Univ., Valencia (Spain). Inst. de Fisica Corpuscular)

    1989-10-02

    We calculate the structure functions for deep inelastic scattering on baryons in the cavity approximation to the chiral bag model. The behavior of these structure functions is analyzed in the Bjorken limit. We conclude that scaling is satisfied, but not Regge behavior. A trivial extension as a parton model can be achieved by introducing the structure function for the pion in a convolution picture. In this extended version of the model not only scaling but also Regge behavior is satisfied. Conclusions are drawn from the comparison of our results with experimental data. (orig.).

  16. Meta-analysis a structural equation modeling approach

    CERN Document Server

    Cheung, Mike W-L

    2015-01-01

    Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo

  17. Community gardening: a parsimonious path to individual, community, and environmental resilience.

    Science.gov (United States)

    Okvat, Heather A; Zautra, Alex J

    2011-06-01

    The goal of this paper is to introduce community gardening as a promising method of furthering well-being and resilience on multiple levels: individual, social group, and natural environment. We examine empirical evidence for the benefits of gardening, and we advocate the development and testing of social ecological models of community resilience through examination of the impact of community gardens, especially in urban areas. The definition of community is extended beyond human social ties to include connections with other species and the earth itself, what Berry (1988) has called an Earth community. We discuss the potential contribution of an extensive network of community gardens to easing the global climate change crisis and address the role of community psychologists in community gardening research and policy-oriented action.

  18. Development and modeling of self-deployable structures

    Science.gov (United States)

    Neogi, Depankar

    Deployable space structures are prefabricated structures which can be transformed from a closed, compact configuration to a predetermined expanded form in which they are stable and can bear loads. The present research effort investigates a new family of deployable structures, called Self-Deployable Structures (SDS). Unlike other deployable structures, which have rigid members, the SDS members are flexible while the connecting joints are rigid. The joints store the predefined geometry of the deployed structure in the collapsed state. The SDS is stress-free in both deployed and collapsed configurations and results in a self-standing structure which acquires its structural properties after a chemical reaction. Reliability of deployment is one of the most important features of the SDS, since it does not rely on mechanisms that can lock during deployment. The unit building block of these structures is the self-deployable structural element (SDSE). Several SDSE members can be linked to generate a complex building block such as a triangular or a tetrahedral structure. Different SDSE and SDS concepts are investigated in this research, and the performance of SDSs is experimentally and theoretically explored. Triangular and tetrahedral prototype SDSs have been developed and presented. Theoretical efforts include modeling the behavior of 2-dimensional SDSs. Using this design tool, engineers can study the effects of different packing configurations and deployment sequences, and perform optimization on the collapsed state of a structure with different external constraints. The model also predicts whether any lockup or entanglement occurs during deployment.

  19. Flavor structure of E6 GUT models

    International Nuclear Information System (INIS)

    Kawase, Hidetoshi; Maekawa, Nobuhiro

    2010-01-01

    In E6 grand unified theory with SU(2)_H family symmetry, the spontaneous CP violation can solve the supersymmetric CP problem. The scenario predicts V_ub ~ O(λ^4) instead of O(λ^3), which is the naively expected value, because of cancellation at the leading order. Since the experimental value of V_ub is O(λ^4), it is important to consider the reason and the conditions for the cancellation. In this paper, we provide a simple reason for the cancellation and show that in some E6 models, such cancellation requires that the vacuum expectation value (VEV) of the adjoint Higgs does not break U(1)_{B-L}. Note that the direction of the VEV plays an important role in solving the doublet-triplet splitting problem by the Dimopoulos-Wilczek mechanism. In this E6 model, the direction of the adjoint Higgs VEV can be measured experimentally by measuring the size of V_ub ~ O(λ^4). (author)

  20. Three Dimensional Response Spectrum Soil Structure Modeling Versus Conceptual Understanding To Illustrate Seismic Response Of Structures

    International Nuclear Information System (INIS)

    Touqan, Abdul Razzaq

    2008-01-01

    Present methods of analysis and mathematical modeling contain so many assumptions separating them from reality that they represent a defect in design, making it difficult to analyze the reasons for failure. Three-dimensional (3D) modeling is far superior to 1D or 2D modeling, static analysis deviates from the true nature of earthquake load, which is "a dynamic punch", and conflicting assumptions exist between structural engineers (who assume flexible structures on rigid block foundations) and geotechnical engineers (who assume flexible foundations supporting rigid structures). Thus a 3D dynamic soil-structure interaction model is a step that removes many of these assumptions and thus comes closer to reality. However, such a model cannot be analyzed analytically; we need to anatomize and analogize it. The paper presents a conceptual (analogical) 1D model for soil-structure interaction and clarifies it by comparing its outcome with 3D dynamic soil-structure finite element analyses of two structures. The aim is to focus on how to calculate the period of the structure and to investigate the effect of stiffness variation on soil-structure interaction.
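
    The period calculation this abstract focuses on can be sketched conceptually: a one-degree-of-freedom system has T = 2π√(m/k), and a soil spring acting in series with the structural stiffness lengthens the period. All masses and stiffnesses below are hypothetical round numbers.

```python
import math

# Conceptual sketch: fundamental period of a single-degree-of-freedom
# structure, with and without a soil spring in series. Values are assumed.
def period(mass, stiffness):
    return 2.0 * math.pi * math.sqrt(mass / stiffness)

m = 200e3        # kg, assumed structure mass
k_struct = 8e7   # N/m, assumed fixed-base lateral stiffness
k_soil = 2e8     # N/m, assumed translational soil spring

k_series = 1.0 / (1.0 / k_struct + 1.0 / k_soil)  # springs in series
t_fixed = period(m, k_struct)
t_flex = period(m, k_series)
print(f"fixed-base T = {t_fixed:.3f} s, with soil flexibility T = {t_flex:.3f} s")
```

Soil flexibility always lengthens the period, which is one reason the fixed-base assumption criticized in the abstract can misestimate seismic response.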

  1. A resource for benchmarking the usefulness of protein structure models.

    KAUST Repository

    Carbajo, Daniel

    2012-08-02

    BACKGROUND: Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. RESULTS: This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess the specific threshold of accuracy required to perform the task effectively. CONCLUSIONS: The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. IMPLEMENTATION, AVAILABILITY AND REQUIREMENTS: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. 
Any restrictions to use by

  2. A resource for benchmarking the usefulness of protein structure models.

    Science.gov (United States)

    Carbajo, Daniel; Tramontano, Anna

    2012-08-02

    Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on the knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess the specific threshold of accuracy required to perform the task effectively. The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. IMPLEMENTATION, AVAILABILITY AND REQUIREMENTS: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any restrictions to use by non-academics: No.

  3. A resource for benchmarking the usefulness of protein structure models.

    KAUST Repository

    Carbajo, Daniel; Tramontano, Anna

    2012-01-01

    BACKGROUND: Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on the knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. RESULTS: This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess the specific threshold of accuracy required to perform the task effectively. CONCLUSIONS: The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. IMPLEMENTATION, AVAILABILITY AND REQUIREMENTS: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any restrictions to use by non-academics: No.

  4. A Bayesian Approach for Structural Learning with Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Cen Li

    2002-01-01

    Full Text Available Hidden Markov Models (HMMs) have proved to be a successful modeling paradigm for dynamic and spatial processes in many domains, such as speech recognition, genomics, and general sequence alignment. Typically, in these applications, the model structures are predefined by domain experts. Therefore, the HMM learning problem focuses on learning the parameter values of the model to fit the given data sequences. However, when one considers other domains, such as economics and physiology, a model structure capturing the system's dynamic behavior is not available. In order to successfully apply the HMM methodology in these domains, it is important that a mechanism be available for automatically deriving the model structure from the data. This paper presents an HMM learning procedure that simultaneously learns the model structure and the maximum-likelihood parameter values of an HMM from data. The HMM model structures are derived based on the Bayesian model selection methodology. In addition, we introduce a new initialization procedure for HMM parameter value estimation based on the K-means clustering method. Experimental results with artificially generated data show the effectiveness of the approach.
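The K-means initialization step mentioned above can be sketched in a few lines. This is a hedged illustration, not the paper's code: a plain 1-D Lloyd's algorithm whose cluster means would seed the emission means of the HMM states; the observation values and the helper name `kmeans_1d` are invented for the example.

```python
# Sketch: K-means initialization of HMM emission means (hypothetical helper,
# not the paper's implementation). 1-D Lloyd's algorithm over scalar data.

def kmeans_1d(values, k, iters=20):
    """Cluster scalar observations and return sorted cluster means."""
    # Spread initial centers evenly across the data range.
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        # Assignment step: each value joins its nearest center.
        groups = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda c: abs(v - centers[c]))
            groups[j].append(v)
        # Update step: recompute centers (keep the old one for empty groups).
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return sorted(centers)

# Observations drawn around two well-separated regimes.
obs = [0.1, 0.2, 0.0, 0.15, 5.0, 5.2, 4.9, 5.1]
init_means = kmeans_1d(obs, k=2)
print(init_means)  # one center near 0.11, one near 5.05
```

In an actual HMM fit these means would only be starting points; Baum-Welch or Bayesian model selection would refine them.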

  5. Generalized Swept Mid-structure for Polygonal Models

    KAUST Repository

    Martin, Tobias; Chen, Guoning; Musuvathy, Suraj; Cohen, Elaine; Hansen, Charles

    2012-01-01

    We introduce a novel mid-structure called the generalized swept mid-structure (GSM) of a closed polygonal shape, and a framework to compute it. The GSM contains both curve and surface elements and has consistent sheet-by-sheet topology, versus triangle-by-triangle topology produced by other mid-structure methods. To obtain this structure, a harmonic function, defined on the volume that is enclosed by the surface, is used to decompose the volume into a set of slices. A technique for computing the 1D mid-structures of these slices is introduced. The mid-structures of adjacent slices are then iteratively matched through a boundary similarity computation and triangulated to form the GSM. This structure respects the topology of the input surface model and is a hybrid mid-structure representation. The construction and topology of the GSM allow for local and global simplification, used in further applications such as parameterization, volumetric mesh generation, and medical applications.

  6. Generalized Swept Mid-structure for Polygonal Models

    KAUST Repository

    Martin, Tobias

    2012-05-01

    We introduce a novel mid-structure called the generalized swept mid-structure (GSM) of a closed polygonal shape, and a framework to compute it. The GSM contains both curve and surface elements and has consistent sheet-by-sheet topology, versus triangle-by-triangle topology produced by other mid-structure methods. To obtain this structure, a harmonic function, defined on the volume that is enclosed by the surface, is used to decompose the volume into a set of slices. A technique for computing the 1D mid-structures of these slices is introduced. The mid-structures of adjacent slices are then iteratively matched through a boundary similarity computation and triangulated to form the GSM. This structure respects the topology of the input surface model and is a hybrid mid-structure representation. The construction and topology of the GSM allow for local and global simplification, used in further applications such as parameterization, volumetric mesh generation, and medical applications.

  7. Testing the structure of a hydrological model using Genetic Programming

    Science.gov (United States)

    Selle, Benny; Muttil, Nitin

    2011-01-01

    Genetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that Genetic Programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, Genetic Programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface irrigated pasture to different soil types, watertable depths and water ponding times during surface irrigation. Using Genetic Programming, a simple model of deep percolation was recurrently evolved in multiple Genetic Programming runs. This simple and interpretable model supported the dominant process contributing to deep percolation represented in a conceptual model that was published earlier. Thus, this study shows that Genetic Programming can be used to evaluate the structure of hydrological models and to gain insight about the dominant processes in hydrological systems.
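The idea of scoring alternative model structures against response data can be shown in miniature. This sketch is not Genetic Programming (no evolution of expression trees) and uses entirely invented candidate forms and synthetic data: it simply fits a one-term scale factor for each candidate structure by least squares and keeps the best-fitting one.

```python
# Miniature illustration of structure testing (hypothetical, far simpler than
# GP): score alternative one-term model structures y ≈ a*f(x) and keep the
# structure with the smallest squared error. Candidates and data are made up.
import math

candidates = {
    "linear":    lambda x: x,
    "quadratic": lambda x: x * x,
    "sqrt":      lambda x: math.sqrt(x),
}

# Synthetic "observations" generated from the quadratic structure (y = 2 x^2).
xs = [0.5, 1.0, 1.5, 2.0, 2.5]
ys = [2.0 * x * x for x in xs]

def fit_and_score(f):
    """Least-squares scale a for y ≈ a*f(x), and the resulting SSE."""
    num = sum(y * f(x) for x, y in zip(xs, ys))
    den = sum(f(x) ** 2 for x in xs)
    a = num / den
    sse = sum((y - a * f(x)) ** 2 for x, y in zip(xs, ys))
    return a, sse

best = min(candidates, key=lambda name: fit_and_score(candidates[name])[1])
print(best)  # "quadratic" — the generating structure is recovered
```

GP generalizes this by searching a much larger space of expression trees instead of a fixed candidate list.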

  8. MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods

    Science.gov (United States)

    Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir

    2011-01-01

    Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier to use for both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353
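The maximum parsimony criterion mentioned in this record's title can be illustrated with the textbook Fitch small-parsimony algorithm: count the minimum number of substitutions needed to explain one alignment column on a fixed tree. This is a generic sketch, not MEGA5 code; the tree shape and leaf states are hypothetical.

```python
# Sketch of Fitch's small-parsimony algorithm for one alignment column on a
# fixed binary tree (tree and states are illustrative, not MEGA5 internals).

def fitch(node):
    """Return (state_set, substitutions) for the subtree rooted at node."""
    if isinstance(node, str):               # leaf: observed nucleotide
        return {node}, 0
    lset, lcost = fitch(node[0])
    rset, rcost = fitch(node[1])
    inter = lset & rset
    if inter:                               # non-empty intersection: no change
        return inter, lcost + rcost
    return lset | rset, lcost + rcost + 1   # union: one substitution implied

# Tree ((A,G),(A,A)): one site observed across four taxa.
tree = (("A", "G"), ("A", "A"))
states, score = fitch(tree)
print(score)  # 1 — a single substitution explains this column
```

Summing this score over all columns and minimizing over tree topologies gives the maximum parsimony tree.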

  9. Structural and Molecular Modeling Features of P2X Receptors

    Directory of Open Access Journals (Sweden)

    Luiz Anastacio Alves

    2014-03-01

    Full Text Available Currently, adenosine 5'-triphosphate (ATP) is recognized as the extracellular messenger that acts through P2 receptors. P2 receptors are divided into two subtypes: P2Y metabotropic receptors and P2X ionotropic receptors, both of which are found in virtually all mammalian cell types studied. Due to the difficulty in studying membrane protein structures by X-ray crystallography or NMR techniques, there is little information about these structures available in the literature. Two structures of the P2X4 receptor in truncated form have been solved by crystallography. Molecular modeling has proven to be an excellent tool for studying ionotropic receptors. Recently, modeling studies carried out on P2X receptors have advanced our knowledge of the P2X receptor structure-function relationships. This review presents a brief history of ion channel structural studies and shows how modeling approaches can be used to address relevant questions about P2X receptors.

  10. LYRA, a webserver for lymphocyte receptor structural modeling

    DEFF Research Database (Denmark)

    Klausen, Michael Schantz; Anderson, Mads Valdemar; Jespersen, Martin Closter

    2015-01-01

    The accurate structural modeling of B- and T-cell receptors is fundamental to gain a detailed insight into the mechanisms underlying immunity and to developing new drugs and therapies. The LYRA (LYmphocyte Receptor Automated modeling) web server (http://www.cbs.dtu.dk/services/LYRA/) implements a complete and automated method for building B- and T-cell receptor structural models starting from their amino acid sequence alone. The webserver is freely available and easy to use for non-specialists. Upon submission, LYRA automatically generates alignments using ad hoc profiles, predicts the structural class of each hypervariable loop, selects the best templates in an automatic fashion, and provides within minutes a complete 3D model that can be downloaded or inspected online. Experienced users can manually select or exclude template structures according to case-specific information.

  11. Generalized Extreme Value model with Cyclic Covariate Structure ...

    Indian Academy of Sciences (India)


    enhances the estimation of the return period; however, its application is ...... Cohn T A and Lins H F 2005 Nature's style: Naturally trendy; GEOPHYSICAL ..... Final non-stationary GEV models with covariate structures shortlisted based on.

  12. Modeling of Triangular Lattice Space Structures with Curved Battens

    Science.gov (United States)

    Chen, Tzikang; Wang, John T.

    2005-01-01

    Techniques for simulating an assembly process of lattice structures with curved battens were developed. The shape of the curved battens, the tension in the diagonals, and the compression in the battens were predicted for the assembled model. To be able to perform the assembly simulation, a cable-pulley element was implemented, and geometrically nonlinear finite element analyses were performed. Three types of finite element models were created from assembled lattice structures for studying the effects of design and modeling variations on the load carrying capability. Discrepancies in the predictions from these models were discussed. The effects of diagonal constraint failure were also studied.

  13. Modeling Temporal Evolution and Multiscale Structure in Networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2013-01-01

    Many real-world networks exhibit both temporal evolution and multiscale structure. We propose a model for temporally correlated multifurcating hierarchies in complex networks which jointly capture both effects. We use the Gibbs fragmentation tree as prior over multifurcating trees and a change-point model to account for the temporal evolution of each vertex. We demonstrate that our model is able to infer time-varying multiscale structure in synthetic as well as three real world time-evolving complex networks. Our modeling of the temporal evolution of hierarchies brings new insights...

  14. Continuous-time model of structural balance.

    Science.gov (United States)

    Marvel, Seth A; Kleinberg, Jon; Kleinberg, Robert D; Strogatz, Steven H

    2011-02-01

    It is not uncommon for certain social networks to divide into two opposing camps in response to stress. This happens, for example, in networks of political parties during winner-takes-all elections, in networks of companies competing to establish technical standards, and in networks of nations faced with mounting threats of war. A simple model for these two-sided separations is the dynamical system dX/dt = X^2, where X is a matrix of the friendliness or unfriendliness between pairs of nodes in the network. Previous simulations suggested that only two types of behavior were possible for this system: Either all relationships become friendly or two hostile factions emerge. Here we prove that for generic initial conditions, these are indeed the only possible outcomes. Our analysis yields a closed-form expression for faction membership as a function of the initial conditions and implies that the initial amount of friendliness in large social networks (started from random initial conditions) determines whether they will end up in intractable conflict or global harmony.
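The dynamics dX/dt = X^2 can be explored numerically. A hedged sketch: forward Euler on a small symmetric friendliness matrix whose sign pattern encodes two factions ({1,2} versus {3}); the matrix values, step size, and step count are illustrative, and integration stops well before the finite-time blow-up the exact dynamics exhibits.

```python
# Hedged numerical sketch of dX/dt = X^2: forward Euler on a 3-node symmetric
# friendliness matrix with a two-faction sign pattern ({1,2} vs {3}).
# All numbers are illustrative, not from the paper.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

X = [[0.0,  1.0, -1.0],
     [1.0,  0.0, -1.0],
     [-1.0, -1.0, 0.0]]

dt = 0.01
for _ in range(50):                     # stop well before finite-time blow-up
    X2 = matmul(X, X)
    X = [[X[i][j] + dt * X2[i][j] for j in range(3)] for i in range(3)]

# The two-faction sign pattern persists: 1-2 stay friendly, 3 stays hostile.
print(X[0][1] > 0, X[0][2] < 0, X[1][2] < 0)
```

This is consistent with the paper's claim that, generically, relationships either all become friendly or split into two hostile factions.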

  15. Functional Coverage of the Human Genome by Existing Structures, Structural Genomics Targets, and Homology Models.

    Directory of Open Access Journals (Sweden)

    2005-08-01

    Full Text Available The bias in protein structure and function space resulting from experimental limitations and targeting of particular functional classes of proteins by structural biologists has long been recognized, but never continuously quantified. Using the Enzyme Commission and the Gene Ontology classifications as a reference frame, and integrating structure data from the Protein Data Bank (PDB), target sequences from the structural genomics projects, structure homology derived from the SUPERFAMILY database, and genome annotations from Ensembl and NCBI, we provide a quantified view, both at the domain and whole-protein levels, of the current and projected coverage of protein structure and function space relative to the human genome. Protein structures currently provide at least one domain that covers 37% of the functional classes identified in the genome; whole structure coverage exists for 25% of the genome. If all the structural genomics targets were solved (twice the current number of structures in the PDB), it is estimated that structures of one domain would cover 69% of the functional classes identified and complete structure coverage would be 44%. Homology models from existing experimental structures extend the 37% coverage to 56% of the genome as single domains and 25% to 31% for complete structures. Coverage from homology models is not evenly distributed by protein family, reflecting differing degrees of sequence and structure divergence within families. While these data provide coverage, conversely, they also systematically highlight functional classes of proteins for which structures should be determined. Current key functional families without structure representation are highlighted here; updated information on the "most wanted list" that should be solved is available on a weekly basis from http://function.rcsb.org:8080/pdb/function_distribution/index.html.

  16. New tips for structure prediction by comparative modeling

    Science.gov (United States)

    Rayan, Anwar

    2009-01-01

    Comparative modelling is utilized to predict the 3-dimensional conformation of a given protein (target) based on its sequence alignment to an experimentally determined protein structure (template). The use of such techniques is already rewarding and increasingly widespread in biological research and drug development. The accuracy of the predictions, as commonly accepted, depends on the sequence identity of the target protein to the template. To assess the relationship between sequence identity and model quality, we carried out an analysis of a set of 4753 sequence and structure alignments. Throughout this research, the model accuracy was measured by root mean square deviations of Cα atoms of the target-template structures. Surprisingly, the results show that sequence identity of the target protein to the template is not a good descriptor to predict the accuracy of the 3-D structure model. However, in a large number of cases, comparative modelling with lower sequence identity of target to template proteins led to more accurate 3-D structure models. As a consequence of this study, we suggest new tips for improving the quality of comparative models, particularly for models whose target-template sequence identity is below 50%. PMID:19255646
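The accuracy measure used in this study, root-mean-square deviation over Cα atoms, is straightforward to compute once the two structures are superposed. A minimal sketch with made-up coordinates (a real comparison would first superpose the structures, e.g. with the Kabsch algorithm):

```python
# Minimal sketch of Cα RMSD between two already-superposed structures.
# Coordinates are invented for illustration.
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between paired 3-D points."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

model  = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
native = [(1.0, 0.0, 0.0), (2.5, 0.0, 0.0), (4.0, 0.0, 0.0)]
print(rmsd(model, native))  # 1.0 — every Cα is displaced by exactly 1 Å
```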

  17. Modeling Bistable Composite Laminates for Piezoelectric Morphing Structures

    OpenAIRE

    Darryl V. Murray; Oliver J. Myers

    2013-01-01

    A sequential modeling effort for bistable composite laminates for piezoelectric morphing structures is presented. Thin unsymmetric carbon fiber composite laminates are examined for use as morphing structures with piezoelectric actuation. When cooling from the elevated cure temperature to room temperature, these unsymmetric composite laminates will deform. These postcure room temperature deformation shapes can be used as morphing structures. Applying a force to these deformed laminates will c...

  18. Event-based model diagnosis of rainfall-runoff model structures

    International Nuclear Information System (INIS)

    Stanzel, P.

    2012-01-01

    The objective of this research is a comparative evaluation of different rainfall-runoff model structures. Comparative model diagnostics facilitate the assessment of strengths and weaknesses of each model. The application of multiple models allows an analysis of simulation uncertainties arising from the selection of model structure, as compared with effects of uncertain parameters and precipitation input. Four different model structures, including conceptual and physically based approaches, are compared. In addition to runoff simulations, results for soil moisture and the runoff components of overland flow, interflow and base flow are analysed. Catchment runoff is simulated satisfactorily by all four model structures and shows only minor differences. Systematic deviations from runoff observations provide insight into model structural deficiencies. While physically based model structures capture some single runoff events better, they do not generally outperform conceptual model structures. Contributions to uncertainty in runoff simulations stemming from the choice of model structure are of similar magnitude to those arising from parameter selection and the representation of precipitation input. Variations in precipitation mainly affect the general level and peaks of runoff, while different model structures lead to different simulated runoff dynamics. Large differences between the four analysed models are detected for simulations of soil moisture and, even more pronounced, runoff components. Soil moisture changes are more dynamic in the physically based model structures, which is in better agreement with observations. Streamflow contributions of overland flow are considerably lower in these models than in the more conceptual approaches. Observations of runoff components are rarely made and are not available in this study, but are shown to have high potential for an effective selection of appropriate model structures (author) [de]

  19. Lattice Modeling of Early-Age Behavior of Structural Concrete

    OpenAIRE

    Pan, Yaming; Prado, Armando; Porras, Rocío; Hafez, Omar M.; Bolander, John E.

    2017-01-01

    The susceptibility of structural concrete to early-age cracking depends on material composition, methods of processing, structural boundary conditions, and a variety of environmental factors. Computational modeling offers a means for identifying primary factors and strategies for reducing cracking potential. Herein, lattice models are shown to be adept at simulating the thermal-hygral-mechanical phenomena that influence early-age cracking. In particular, this paper presents a lattice-based approach...

  20. CREATING EFFECTIVE MODELS OF VERTICAL INTEGRATED STRUCTURES IN UKRAINE

    Directory of Open Access Journals (Sweden)

    D. V. Koliesnikov

    2011-01-01

    Full Text Available The results of scientific research aimed at developing methodological and theoretical mechanisms for building effective models of vertically-integrated structures are presented. The presence of vertically-integrated structures in natural-monopoly markets in the private and governmental sectors of the economy is discussed, and priority directions of integration are given.

  1. Diquark structure in heavy quark baryons in a geometric model

    International Nuclear Information System (INIS)

    Paria, Lina; Abbas, Afsar

    1996-01-01

    Using a geometric model to study the structure of hadrons, baryons having one, two and three heavy quarks have been studied here. The study reveals diquark structure in baryons with one and two heavy quarks but not with three heavy identical quarks. (author). 15 refs., 2 figs., 2 tabs

  2. Lower bound plane stress element for modelling 3D structures

    DEFF Research Database (Denmark)

    Herfelt, Morten Andersen; Poulsen, Peter Noe; Hoang, Linh Cao

    2017-01-01

    In-plane action is often the primary load-carrying mechanism of reinforced concrete structures. The plate bending action will be secondary, and the behaviour of the structure can be modelled with a reasonable accuracy using a generalised three-dimensional plane stress element. In this paper...

  3. Model reduction in integrated controls-structures design

    Science.gov (United States)

    Maghami, Peiman G.

    1993-01-01

    It is the objective of this paper to present a model reduction technique developed for the integrated controls-structures design of flexible structures. Integrated controls-structures design problems are typically posed as nonlinear mathematical programming problems, where the design variables consist of both structural and control parameters. In the solution process, both structural and control design variables are constantly changing; therefore, the dynamic characteristics of the structure are also changing. This presents a problem in obtaining a reduced-order model for active control design and analysis which will be valid for all design points within the design space. In other words, the frequency and number of the significant modes of the structure (modes that should be included) may vary considerably throughout the design process. This is also true as the locations and/or masses of the sensors and actuators change. Moreover, since the number of design evaluations in the integrated design process could easily run into thousands, any feasible order-reduction method should not require model reduction analysis at every design iteration. In this paper a novel and efficient technique for model reduction in the integrated controls-structures design process, which addresses these issues, is presented.

  4. Variable-Structure Control of a Model Glider Airplane

    Science.gov (United States)

    Waszak, Martin R.; Anderson, Mark R.

    2008-01-01

    A variable-structure control system designed to enable a fuselage-heavy airplane to recover from spin has been demonstrated in a hand-launched, instrumented model glider airplane. Variable-structure control is a high-speed switching feedback control technique that has been developed for control of nonlinear dynamic systems.

  5. Bayesian nonlinear structural FE model and seismic input identification for damage assessment of civil structures

    Science.gov (United States)

    Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.

    2017-09-01

    A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach makes it possible to jointly estimate unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations using spatially-sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm to jointly estimate unknown FE model parameters and unknown input excitations.
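The deterministic sampling idea behind the unscented Kalman filter can be shown in one dimension. A hedged sketch of the unscented transform only (not the full filter, and not the authors' implementation): three sigma points are propagated through a nonlinear map instead of differentiating it, and the weighted result recovers E[f(x)] exactly for quadratics. The numbers and the `kappa` scaling are illustrative.

```python
# Hedged sketch of the unscented transform (1-D), the sampling step that lets
# the UKF avoid computing response sensitivities. Values are illustrative.
import math

def unscented_mean(f, mean, var, kappa=2.0):
    """Approximate E[f(x)] for x ~ N(mean, var) with 3 sigma points (1-D)."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    points = [mean, mean + spread, mean - spread]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    weights = [w0, wi, wi]
    return sum(w * f(p) for w, p in zip(weights, points))

# For f(x) = x^2 the unscented estimate is exact: E[x^2] = mean^2 + var.
m = unscented_mean(lambda x: x * x, mean=1.0, var=0.25)
print(m)  # 1.25
```

In the full filter, the same sigma points are run through the nonlinear FE model to propagate the joint parameter-and-input state.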

  6. A hidden markov model derived structural alphabet for proteins.

    Science.gov (United States)

    Camproux, A C; Gautier, R; Tufféry, P

    2004-06-04

    Understanding and predicting protein structures depends on the complexity and the accuracy of the models used to represent them. We have set up a hidden Markov model that discretizes protein backbone conformation as a series of overlapping fragments (states) of four residues in length. This approach learns simultaneously the geometry of the states and their connections. We obtain, using a statistical criterion, an optimal systematic decomposition of the conformational variability of the protein peptidic chain in 27 states with strong connection logic. This result is stable over different protein sets. Our model fits well with previous knowledge of protein architecture organisation and seems able to capture some subtle details of protein organisation, such as helix sub-level organisation schemes. Taking into account the dependence between the states results in a description of local protein structure of low complexity. On average, the model makes use of only 8.3 states among 27 to describe each position of a protein structure. Although we use short fragments, the learning process on entire protein conformations captures the logic of the assembly on a larger scale. Using such a model, the structure of proteins can be reconstructed with an average accuracy close to 1.1 Å root-mean-square deviation and for a complexity of only 3. Finally, we also observe that sequence specificity increases with the number of states of the structural alphabet. Such models can constitute a very relevant approach to the analysis of protein architecture in particular for protein structure prediction.
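The "complexity" figures quoted above (8.3 effective states, complexity of 3) are of the kind captured by perplexity, the exponential of the Shannon entropy of a state distribution. A hedged illustration with an invented distribution, not the paper's data:

```python
# Illustrative "effective number of states" computation: perplexity, i.e.
# exp of the Shannon entropy of one position's state distribution.
# The probabilities below are invented, not taken from the paper.
import math

def perplexity(probs):
    """exp(entropy): how many equally likely states the distribution 'uses'."""
    h = -sum(p * math.log(p) for p in probs if p > 0.0)
    return math.exp(h)

# A position dominated by 3 of 27 structural letters behaves like ~3 states.
probs = [1.0 / 3.0] * 3 + [0.0] * 24
print(perplexity(probs))  # 3.0 (up to floating-point error)
```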

  7. Modeling Broadband Microwave Structures by Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Otevrel

    2004-06-01

    Full Text Available The paper describes the exploitation of feed-forward neural networks and recurrent neural networks for replacing full-wave numerical models of microwave structures in complex microwave design tools. Building a neural model, attention is turned to the modeling accuracy and to the efficiency of building a model. Dealing with the accuracy, we describe a method of increasing it by successively completing a training set. Neural models are mutually compared in order to highlight their advantages and disadvantages. As a reference model for comparisons, approximations based on standard cubic splines are used. Neural models are used to replace both the time-domain numeric models and the frequency-domain ones.

  8. Search-based model identification of smart-structure damage

    Science.gov (United States)

    Glass, B. J.; Macalou, A.

    1991-01-01

    This paper describes the use of a combined model and parameter identification approach, based on modal analysis and artificial intelligence (AI) techniques, for identifying damage or flaws in a rotating truss structure incorporating embedded piezoceramic sensors. This smart structure example is representative of a class of structures commonly found in aerospace systems and next generation space structures. Artificial intelligence techniques of classification, heuristic search, and an object-oriented knowledge base are used in an AI-based model identification approach. A finite model space is classified into a search tree, over which a variant of best-first search is used to identify the model whose stored response most closely matches that of the input. Newly-encountered models can be incorporated into the model space. This adaptiveness demonstrates the potential for learning control. Following this output-error model identification, numerical parameter identification is used to further refine the identified model. Given the rotating truss example in this paper, noisy data corresponding to various damage configurations are input to both this approach and a conventional parameter identification method. The combination of the AI-based model identification with parameter identification is shown to lead to smaller parameter corrections than required by the use of parameter identification alone.
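The output-error model selection step can be caricatured as best-first search over a finite model space: candidate damage models are expanded in order of how closely their stored responses match the measured one. This toy uses a priority queue and entirely invented model names and response numbers; it stands in for, and greatly simplifies, the paper's search over a classified model tree.

```python
# Toy best-first model identification (hypothetical stand-in for the paper's
# approach): expand candidate damage models in order of output error against
# the measured response. Names and numbers are invented.
import heapq

measured = 4.0   # "observed" modal response (made up)

# Candidate models and their stored predicted responses (made up).
models = {"undamaged": 5.0, "strut_crack": 4.1, "joint_loose": 2.0}

heap = [(abs(pred - measured), name) for name, pred in models.items()]
heapq.heapify(heap)
err, best_model = heapq.heappop(heap)  # smallest output error expanded first
print(best_model)  # "strut_crack"
```

In the paper, the selected model is then refined by numerical parameter identification rather than accepted outright.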

  9. Validation of the measurement model concept for error structure identification

    International Nuclear Information System (INIS)

    Shukla, Pavan K.; Orazem, Mark E.; Crisalle, Oscar D.

    2004-01-01

    The development of different forms of measurement models for impedance has allowed examination of key assumptions on which the use of such models to assess error structure is based. The stochastic error structures obtained using the transfer-function and Voigt measurement models were identical, even when non-stationary phenomena caused some of the data to be inconsistent with the Kramers-Kronig relations. The suitability of the measurement model for assessment of consistency with the Kramers-Kronig relations, however, was found to be more sensitive to the confidence interval for the parameter estimates than to the number of parameters in the model. A tighter confidence interval was obtained for the Voigt measurement model, which made it a more sensitive tool for identification of inconsistencies with the Kramers-Kronig relations.

  10. Integrative Analysis of Metabolic Models – from Structure to Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Hartmann, Anja, E-mail: hartmann@ipk-gatersleben.de [Leibniz Institute of Plant Genetics and Crop Plant Research (IPK), Gatersleben (Germany); Schreiber, Falk [Monash University, Melbourne, VIC (Australia); Martin-Luther-University Halle-Wittenberg, Halle (Germany)

    2015-01-26

    The characterization of biological systems with respect to their behavior and functionality based on versatile biochemical interactions is a major challenge. To understand these complex mechanisms at the systems level, modeling approaches are investigated. Different modeling formalisms allow metabolic models to be analyzed depending on the question to be solved, the biochemical knowledge and the availability of experimental data. Here, we describe a method for an integrative analysis of the structure and dynamics represented by qualitative and quantitative metabolic models. Using various formalisms, the metabolic model is analyzed from different perspectives. Determined structural and dynamic properties are visualized in the context of the metabolic model. Interaction techniques allow the exploration and visual analysis, thereby leading to a broader understanding of the behavior and functionality of the underlying biological system. The System Biology Metabolic Model Framework (SBM² Framework) implements the developed method and, as an example, is applied for the integrative analysis of the crop plant potato.

  11. The Protein Model Portal--a comprehensive resource for protein structure and model information.

    Science.gov (United States)

    Haas, Juergen; Roth, Steven; Arnold, Konstantin; Kiefer, Florian; Schmidt, Tobias; Bordoli, Lorenza; Schwede, Torsten

    2013-01-01

    The Protein Model Portal (PMP) has been developed to foster effective use of 3D molecular models in biomedical research by providing convenient and comprehensive access to structural information for proteins. Both experimental structures and theoretical models for a given protein can be searched simultaneously and analyzed for structural variability. By providing a comprehensive view on structural information, PMP offers the opportunity to apply consistent assessment and validation criteria to the complete set of structural models available for proteins. PMP is an open project so that new methods developed by the community can contribute to PMP, for example, new modeling servers for creating homology models and model quality estimation servers for model validation. The accuracy of participating modeling servers is continuously evaluated by the Continuous Automated Model EvaluatiOn (CAMEO) project. The PMP offers a unique interface to visualize structural coverage of a protein combining both theoretical models and experimental structures, allowing straightforward assessment of the model quality and hence their utility. The portal is updated regularly and actively developed to include latest methods in the field of computational structural biology. Database URL: http://www.proteinmodelportal.org.

  13. Fitting Data to Model: Structural Equation Modeling Diagnosis Using Two Scatter Plots

    Science.gov (United States)

    Yuan, Ke-Hai; Hayashi, Kentaro

    2010-01-01

    This article introduces two simple scatter plots for model diagnosis in structural equation modeling. One plot contrasts a residual-based M-distance of the structural model with the M-distance for the factor score. It contains information on outliers, good leverage observations, bad leverage observations, and normal cases. The other plot contrasts…
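    Both axes of such plots are Mahalanobis-type distances; a minimal sketch of the computation (the synthetic data and planted outlier are illustrative):

```python
import numpy as np

def m_distance(X, mu=None, cov=None):
    """Mahalanobis distance of each row of X from the sample mean:
    d_i = sqrt((x_i - mu)^T S^{-1} (x_i - mu))."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0) if mu is None else mu
    cov = np.cov(X, rowvar=False) if cov is None else cov
    diff = X - mu
    inv = np.linalg.inv(cov)
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, inv, diff))

# One plot would contrast residual-based distances on one axis against
# factor-score distances on the other; an outlier stands out on both.
rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 2))
scores[0] = [8.0, 8.0]                 # planted outlier
d = m_distance(scores)
print(d.argmax())  # 0
```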

  14. Mass Spectrometry Coupled Experiments and Protein Structure Modeling Methods

    Directory of Open Access Journals (Sweden)

    Lee Sael

    2013-10-01

    Full Text Available With the accumulation of next generation sequencing data, there is increasing interest in the study of intra-species differences in molecular biology, especially in relation to disease analysis. Furthermore, the dynamics of the protein is being identified as a critical factor in its function. Although the accuracy of protein structure prediction methods is high, provided there are structural templates, most methods are still insensitive to amino-acid differences at critical points that may change the overall structure. Also, predicted structures are inherently static and do not provide information about structural change over time. It is challenging to address the sensitivity and the dynamics by computational structure predictions alone. However, with the fast development of diverse mass spectrometry coupled experiments, low-resolution but fast and sensitive structural information can be obtained. This information can then be integrated into the structure prediction process to further improve the sensitivity and address the dynamics of the protein structures. For this purpose, this article focuses on reviewing two aspects: the types of mass spectrometry coupled experiments and the structural data that are obtainable through those experiments; and the structure prediction methods that can utilize these data as constraints. A short review of current efforts in integrating experimental data into structural modeling is also provided.

  15. From intuition to statistics in building subsurface structural models

    Science.gov (United States)

    Brandenburg, J.P.; Alpak, F.O.; Naruk, S.; Solum, J.

    2011-01-01

    Experts associated with the oil and gas exploration industry suggest that combining forward trishear models with stochastic global optimization algorithms allows a quantitative assessment of the uncertainty associated with a given structural model. The methodology is applied to incompletely imaged structures related to deepwater hydrocarbon reservoirs, and the results are compared to prior manual palinspastic restorations and borehole data. This methodology is also useful for extending structural interpretations into other areas of limited resolution, such as subsalt regions, in addition to extrapolating existing data into seismic data gaps. The technique can be used for rapid reservoir appraisal and potentially has other applications in seismic processing, well planning, and borehole stability analysis.

  16. Global assemblages and structural models of International Relations

    DEFF Research Database (Denmark)

    Corry, Olaf

    2014-01-01

    -category of assemblages – those constructed as malleable and governable which I call ‘governance-objects’ – is central to structure in international relations. The chapter begins with standard definitions of what structures are – patterns of interaction between elements – and briefly covers the range of models currently...... used to simplify different structures. Next the chapter points to the blindness of most structural theories of IR to the role of assemblages in general and governance-objects in particular. Thirdly, the idea that a polity is constituted precisely by the assemblage of a governance...

  17. [Reparative and neoplastic spheroid cellular structures and their mathematical model].

    Science.gov (United States)

    Kogan, E A; Namiot, V A; Demura, T A; Faĭzullina, N M; Sukhikh, G T

    2014-01-01

    Spheroid cell structures in cell cultures have been described and are used for studying cell-cell and cell-matrix interactions. At the same time, the participation of spheroid cell structures in repair and in the development of cancer in vivo remains unexplored. The aim of this study was to investigate the cellular composition of spherical structures and their functional significance in the repair of squamous epithelium in human papilloma virus-associated cervical pathology--chronic cervicitis and cervical intraepithelial neoplasia grades 1-3--and also to construct a mathematical model explaining the development and behavior of such spheroid cell structures.

  18. Entropy model of dissipative structure on corporate social responsibility

    Science.gov (United States)

    Li, Zuozhi; Jiang, Jie

    2017-06-01

    Enterprises are prompted to fulfill social responsibility requirements by their internal and external environments. In this complex system, some studies suggest that firms exhibit orderly or chaotic entropy exchange behavior. Based on the theory of dissipative structure, this paper constructs an entropy index system of corporate social responsibility (CSR) and explores the dissipative structure of CSR through the Brusselator model criterion. Examining listed companies in the equipment manufacturing industry, the research shows that CSR provides a positive incentive toward negative entropy and promotes the stability of the dissipative structure. In short, the dissipative structure of CSR has a positive impact on the interests of stakeholders and corporate social images.
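    For reference, the Brusselator criterion invoked here reduces to a simple threshold: the homogeneous steady state (A, B/A) of the Brusselator reaction system loses stability, admitting dissipative structures, when B > 1 + A². A minimal check (how the paper maps CSR entropy indices onto A and B is not reproduced here):

```python
def brusselator_unstable(A, B):
    """Brusselator instability criterion: the steady state (A, B/A) of
    dx/dt = A + x^2*y - (B+1)*x,  dy/dt = B*x - x^2*y
    becomes unstable (the dissipative, self-organizing regime) when
    B > 1 + A^2."""
    return B > 1 + A**2

print(brusselator_unstable(1.0, 1.5))  # False: below the threshold B = 2
print(brusselator_unstable(1.0, 3.0))  # True: oscillatory regime
```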

  19. Categorical model of structural operational semantics for imperative language

    Directory of Open Access Journals (Sweden)

    William Steingartner

    2016-12-01

    Full Text Available The definition of a programming language consists of the formal definition of syntax and semantics. One of the most popular semantic methods used in various stages of software engineering is structural operational semantics. It describes program behavior in the form of state changes after execution of elementary steps of the program. This feature makes structural operational semantics useful for the implementation of programming languages and also for verification purposes. In our paper we present a new approach to structural operational semantics. We model the behavior of programs in a category of states, where objects are states (an abstraction of computer memory) and morphisms model state changes, i.e. the execution of a program in elementary steps. The advantage of using a categorical model is its exact mathematical structure with many useful proven properties and its graphical illustration of program behavior as a path, i.e. a composition of morphisms. Our approach is able to accentuate the dynamics of structural operational semantics. For simplicity, we assume that data are intuitively typed. With its visual clarity, our model is not only a new model of structural operational semantics for imperative programming languages but can also serve educational purposes.
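    The categorical picture, states as objects and elementary execution steps as composable morphisms, can be imitated in code: states as dictionaries, morphisms as functions from state to state, and a program as their composition (a toy sketch, not the paper's formalism):

```python
from functools import reduce

# A state maps variable names to values; a morphism is a function
# State -> State modelling one elementary execution step.
def assign(var, expr):
    """Morphism for the assignment `var := expr(state)`."""
    return lambda s: {**s, var: expr(s)}

def compose(*steps):
    """Composition of morphisms: executing a program means following
    the path formed by composing its elementary steps in order."""
    return lambda s: reduce(lambda state, f: f(state), steps, s)

program = compose(
    assign('x', lambda s: 3),
    assign('y', lambda s: s['x'] + 4),
    assign('x', lambda s: s['x'] * s['y']),
)
print(program({}))  # {'x': 21, 'y': 7}
```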

  20. PSpice Model of Lightning Strike to a Steel Reinforced Structure

    International Nuclear Information System (INIS)

    Koone, Neil; Condren, Brian

    2003-01-01

    Surges and arcs from lightning can pose hazards to personnel, sensitive equipment, and processes. Steel reinforcement in structures can act as a Faraday cage mitigating lightning effects. Knowing a structure's response to a lightning strike allows hazards associated with lightning to be analyzed. A model of a steel reinforced structure's response to lightning has been developed using PSpice (a commercial circuit simulation). Segments of rebar are modeled as inductors and resistors in series. A program has been written to take architectural information of a steel reinforced structure and 'build' a circuit network that is analogous to the network of reinforcement in a facility. A severe current waveform (simulating a 99th percentile lightning strike), modeled as a current source, is introduced into the circuit network, and potential differences within the structure are determined using PSpice. A visual three-dimensional model of the facility displays the voltage distribution across the structure, using color to indicate the potential difference relative to the floor. Clear-air arcing distances can be calculated from the voltage distribution using a conservative value for the dielectric breakdown strength of air. Potential validation tests for the model will be presented.
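    The "build a circuit network from architectural information" step can be sketched as netlist generation: each rebar segment between two structural nodes becomes a resistor and an inductor in series (element values and node names here are illustrative, not taken from the report):

```python
def rebar_netlist(segments, r_ohm=1e-3, l_henry=1e-6):
    """Emit SPICE-style netlist lines for rebar segments, each modelled
    as a resistor and inductor in series (illustrative element values).
    `segments` is a list of (node_a, node_b) pairs taken from the
    structure's reinforcement layout; node 0 is ground (the floor)."""
    lines = []
    for i, (a, b) in enumerate(segments):
        mid = f"m{i}"                  # internal node between R and L
        lines.append(f"R{i} {a} {mid} {r_ohm}")
        lines.append(f"L{i} {mid} {b} {l_henry}")
    return lines

# Two vertical rebars grounded at the floor (node 0), tied at node n2.
segments = [(0, "n1"), ("n1", "n2"), (0, "n3"), ("n3", "n2")]
for line in rebar_netlist(segments):
    print(line)
```

A current source representing the strike waveform and the simulation commands would then be appended before handing the netlist to the circuit simulator.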

  1. Hidden multidimensional social structure modeling applied to biased social perception

    Science.gov (United States)

    Maletić, Slobodan; Zhao, Yi

    2018-02-01

    Intricacies of the structure of social relations are realized by representing a collection of overlapping opinions as a simplicial complex, thus building latent multidimensional structures, through which agents are, virtually, moving as they exchange opinions. The influence of opinion space structure on the distribution of opinions is demonstrated by modeling consensus phenomena when the opinion exchange between individuals may be affected by the false consensus effect. The results indicate that in the cases with and without bias, the road toward consensus is influenced by the structure of multidimensional space of opinions, and in the biased case, complete consensus is achieved. The applications of proposed modeling framework can easily be generalized, as they transcend opinion formation modeling.

  2. Scale modeling of reinforced concrete structures subjected to seismic loading

    International Nuclear Information System (INIS)

    Dove, R.C.

    1983-01-01

    Reinforced concrete, Category I structures are so large that the possibility of seismically testing the prototype structures under controlled conditions is essentially nonexistent. However, experimental data, from which important structural properties can be determined and existing and new methods of seismic analysis benchmarked, are badly needed. As a result, seismic experiments on scaled models are of considerable interest. In this paper, the scaling laws are developed in some detail so that assumptions and choices based on judgement can be clearly recognized and their effects discussed. The scaling laws developed are then used to design a reinforced concrete model of a Category I structure. Finally, how scaling is affected by various types of damping (viscous, structural, and Coulomb) is discussed.

  3. Joint Modelling of Structural and Functional Brain Networks

    DEFF Research Database (Denmark)

    Andersen, Kasper Winther; Herlau, Tue; Mørup, Morten

    -parametric Bayesian network model which allows for joint modelling and integration of multiple networks. We demonstrate the model’s ability to detect vertices that share structure across networks jointly in functional MRI (fMRI) and diffusion MRI (dMRI) data. Using two fMRI and dMRI scans per subject, we establish...

  4. A Structural Equation Model of Expertise in College Physics

    Science.gov (United States)

    Taasoobshirazi, Gita; Carr, Martha

    2009-01-01

    A model of expertise in physics was tested on a sample of 374 college students in 2 different level physics courses. Structural equation modeling was used to test hypothesized relationships among variables linked to expert performance in physics including strategy use, pictorial representation, categorization skills, and motivation, and these…

  5. Modelling Technique for Demonstrating Gravity Collapse Structures in Jointed Rock.

    Science.gov (United States)

    Stimpson, B.

    1979-01-01

    Described is a base-friction modeling technique for studying the development of collapse structures in jointed rocks. A moving belt beneath weak material is designed to simulate gravity. A description is given of the model frame construction. (Author/SA)

  6. Comparing Structural Brain Connectivity by the Infinite Relational Model

    DEFF Research Database (Denmark)

    Ambrosen, Karen Marie Sandø; Herlau, Tue; Dyrby, Tim

    2013-01-01

    The growing focus in neuroimaging on analyzing brain connectivity calls for powerful and reliable statistical modeling tools. We examine the Infinite Relational Model (IRM) as a tool to identify and compare structure in brain connectivity graphs by contrasting its performance on graphs from...

  7. Numerical equilibrium analysis for structured consumer resource models

    NARCIS (Netherlands)

    de Roos, A.M.; Diekmann, O.; Getto, P.; Kirkilionis, M.A.

    2010-01-01

    In this paper, we present methods for a numerical equilibrium and stability analysis for models of a size structured population competing for an unstructured resource. We concentrate on cases where two model parameters are free, and thus existence boundaries for equilibria and stability boundaries

  9. Model reduction of port-Hamiltonian systems as structured systems

    NARCIS (Netherlands)

    Polyuga, R.V.; Schaft, van der A.J.

    2010-01-01

    The goal of this work is to demonstrate that a specific projection-based model reduction method, which provides an H2 error bound, turns out to be applicable to port-Hamiltonian systems, preserving the port-Hamiltonian structure for the reduced order model, and, as a consequence, passivity.
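    The structure preservation can be illustrated directly: for a port-Hamiltonian system dx/dt = (J - R) grad H(x) with skew-symmetric J and positive semi-definite R, a Galerkin projection onto an orthonormal basis V yields Jr = V^T J V and Rr = V^T R V, which inherit exactly those properties. A numerical check (random matrices stand in for a concrete system):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 6, 2

# Full-order port-Hamiltonian structure matrices.
A = rng.normal(size=(n, n))
J = A - A.T                    # skew-symmetric interconnection matrix
B = rng.normal(size=(n, n))
R = B @ B.T                    # symmetric positive semi-definite dissipation

V, _ = np.linalg.qr(rng.normal(size=(n, r)))   # orthonormal reduction basis
Jr, Rr = V.T @ J @ V, V.T @ R @ V

print(np.allclose(Jr, -Jr.T))                    # True: still skew-symmetric
print(bool(np.linalg.eigvalsh(Rr).min() >= -1e-12))  # True: still PSD
```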

  10. A new dynamic null model for phylogenetic community structure

    NARCIS (Netherlands)

    Pigot, Alex L; Etienne, Rampal S

    Phylogenies are increasingly applied to identify the mechanisms structuring ecological communities but progress has been hindered by a reliance on statistical null models that ignore the historical process of community assembly. Here, we address this, and develop a dynamic null model of assembly by

  11. Modeling Complex Nesting Structures in International Business Research

    DEFF Research Database (Denmark)

    Nielsen, Bo Bernhard; Nielsen, Sabina

    2013-01-01

    hierarchical random coefficient models (RCM) are often used for the analysis of multilevel phenomena, IB issues often result in more complex nested structures. This paper illustrates how cross-nested multilevel modeling allowing for predictor variables and cross-level interactions at multiple (crossed) levels...

  12. A Structural Equation Model of Conceptual Change in Physics

    Science.gov (United States)

    Taasoobshirazi, Gita; Sinatra, Gale M.

    2011-01-01

    A model of conceptual change in physics was tested on introductory-level, college physics students. Structural equation modeling was used to test hypothesized relationships among variables linked to conceptual change in physics including an approach goal orientation, need for cognition, motivation, and course grade. Conceptual change in physics…

  13. Quality of peas modelled by a structural equation system

    DEFF Research Database (Denmark)

    Bech, A. C.; Juhl, H. J.; Hansen, M.

    2000-01-01

    in a PLS structural model with the Total Food Quality Model as starting point. The results show that texture and flavour do have approximately the same effect on consumers' perception of overall quality. Quality development goals for plant breeders would be to optimise perceived flavour directly...

  14. Models of protein-ligand crystal structures: trust, but verify.

    Science.gov (United States)

    Deller, Marc C; Rupp, Bernhard

    2015-09-01

    X-ray crystallography provides the most accurate models of protein-ligand structures. These models serve as the foundation of many computational methods including structure prediction, molecular modelling, and structure-based drug design. The success of these computational methods ultimately depends on the quality of the underlying protein-ligand models. X-ray crystallography offers the unparalleled advantage of a clear mathematical formalism relating the experimental data to the protein-ligand model. In the case of X-ray crystallography, the primary experimental evidence is the electron density of the molecules forming the crystal. The first step in the generation of an accurate and precise crystallographic model is the interpretation of the electron density of the crystal, typically carried out by construction of an atomic model. The atomic model must then be validated for fit to the experimental electron density and also for agreement with prior expectations of stereochemistry. Stringent validation of protein-ligand models has become possible as a result of the mandatory deposition of primary diffraction data, and many computational tools are now available to aid in the validation process. Validation of protein-ligand complexes has revealed some instances of overenthusiastic interpretation of ligand density. Fundamental concepts and metrics of protein-ligand quality validation are discussed and we highlight software tools to assist in this process. It is essential that end users select high quality protein-ligand models for their computational and biological studies, and we provide an overview of how this can be achieved.

  15. Robust simulation of buckled structures using reduced order modeling

    International Nuclear Information System (INIS)

    Wiebe, R.; Perez, R.A.; Spottswood, S.M.

    2016-01-01

    Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft, it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use a (computationally expensive) truth model but require no time stepping of it are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties, would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties.

  17. Optimization of mathematical models for soil structure interaction

    International Nuclear Information System (INIS)

    Vallenas, J.M.; Wong, C.K.; Wong, D.L.

    1993-01-01

    Accounting for soil-structure interaction in the design and analysis of major structures for DOE facilities can involve significant costs in terms of modeling and computer time. Using computer programs like SASSI for modeling major structures, especially buried structures, requires the use of models with a large number of soil-structure interaction nodes. The computer time requirements (and costs) increase as a function of the number of interaction nodes to the third power. The added computer and labor cost for data manipulation and post-processing can further increase the total cost. This paper provides a methodology to significantly reduce the number of interaction nodes. This is achieved by selectively increasing the thickness of the modeled soil layers, based on the need for the mathematical model to capture as input only those frequencies that can actually be transmitted by the soil media. The authors have rarely found that a model needs to capture frequencies as high as 33 Hz. Typically, coarser meshes (and fewer interaction nodes) are adequate.
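    The layer-coarsening argument follows the usual wavelength-resolution rule of thumb: a soil layer of shear-wave velocity Vs transmits frequencies up to f_max only if its thickness stays below roughly Vs / (n * f_max), with n around 5 layers per wavelength (the rule and the values below are illustrative, not taken from the paper):

```python
def max_layer_thickness(vs_mps, f_max_hz, points_per_wavelength=5):
    """Largest soil-layer thickness that still transmits f_max:
    h <= wavelength / n = Vs / (n * f_max)."""
    return vs_mps / (points_per_wavelength * f_max_hz)

# Lowering the target frequency raises the admissible layer thickness
# proportionally, which cuts the number of interaction nodes.
print(max_layer_thickness(300.0, 33.0))  # ~1.82 m
print(max_layer_thickness(300.0, 15.0))  # 4.0 m
```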

  18. Fast flexible modeling of RNA structure using internal coordinates.

    Science.gov (United States)

    Flores, Samuel Coulbourn; Sherman, Michael A; Bruns, Christopher M; Eastman, Peter; Altman, Russ Biagio

    2011-01-01

    Modeling the structure and dynamics of large macromolecules remains a critical challenge. Molecular dynamics (MD) simulations are expensive because they model every atom independently, and are difficult to combine with experimentally derived knowledge. Assembly of molecules using fragments from libraries relies on the database of known structures and thus may not work for novel motifs. Coarse-grained modeling methods have yielded good results on large molecules but can suffer from difficulties in creating more detailed full atomic realizations. There is therefore a need for molecular modeling algorithms that remain chemically accurate and economical for large molecules, do not rely on fragment libraries, and can incorporate experimental information. RNABuilder works in the internal coordinate space of dihedral angles and thus has time requirements proportional to the number of moving parts rather than the number of atoms. It provides accurate physics-based response to applied forces, but also allows user-specified forces for incorporating experimental information. A particular strength of RNABuilder is that all Leontis-Westhof basepairs can be specified as primitives by the user to be satisfied during model construction. We apply RNABuilder to predict the structure of an RNA molecule with 160 bases from its secondary structure, as well as experimental information. Our model matches the known structure to 10.2 Angstroms RMSD and has low computational expense.

  19. On the structure of the quantum-mechanical probability models

    International Nuclear Information System (INIS)

    Cufaro-Petroni, N.

    1992-01-01

    In this paper, the role of mathematical probability models in classical and quantum physics is briefly analyzed. In particular, the formal structure of quantum probability spaces (QPS) is contrasted with the usual Kolmogorovian models of probability, putting in evidence the connections between this structure and the fundamental principles of quantum mechanics. The fact that there is no unique Kolmogorovian model reproducing a QPS is recognized as one of the main reasons for the paradoxical behaviors pointed out in quantum theory from its early days. 8 refs.

  20. Self-consistent mean-field models for nuclear structure

    International Nuclear Information System (INIS)

    Bender, Michael; Heenen, Paul-Henri; Reinhard, Paul-Gerhard

    2003-01-01

    The authors review the present status of self-consistent mean-field (SCMF) models for describing nuclear structure and low-energy dynamics. These models are presented as effective energy-density functionals. The three most widely used variants of SCMF's, based on a Skyrme energy functional, a Gogny force, and a relativistic mean-field Lagrangian, are considered side by side. The crucial role of the treatment of pairing correlations is pointed out in each case. The authors discuss other related nuclear structure models and present several extensions beyond the mean-field model which are currently used. Phenomenological adjustment of the model parameters is discussed in detail. The performance quality of the SCMF model is demonstrated for a broad range of typical applications.

  1. A Parametric Factor Model of the Term Structure of Mortality

    DEFF Research Database (Denmark)

    Haldrup, Niels; Rosenskjold, Carsten Paysen T.

    The prototypical Lee-Carter mortality model is characterized by a single common time factor that loads differently across age groups. In this paper we propose a factor model for the term structure of mortality where multiple factors are designed to influence the age groups differently via...... on the loading functions, the factors are not designed to be orthogonal but can be dependent and can possibly cointegrate when the factors have unit roots. We suggest two estimation procedures similar to the estimation of the dynamic Nelson-Siegel term structure model. First, a two-step nonlinear least squares...... procedure based on cross-section regressions together with a separate model to estimate the dynamics of the factors. Second, we suggest a fully specified model estimated by maximum likelihood via the Kalman filter recursions after the model is put on state space form. We demonstrate the methodology for US...

  2. Towards methodical modelling: Differences between the structure and output dynamics of multiple conceptual models

    Science.gov (United States)

    Knoben, Wouter; Woods, Ross; Freer, Jim

    2016-04-01

    Conceptual hydrologic models consist of a particular arrangement of stores, fluxes and transformation functions that represents a catchment's spatial and temporal dynamics, depending on the modeller's choices and intended use. They have the advantages of being computationally efficient, being relatively easy model structures to reconfigure and having relatively low input data demands. This makes them well-suited for large-scale and large-sample hydrology, where appropriately representing the dominant hydrologic functions of a catchment is a main concern. Given these requirements, the number of parameters in the model cannot be too high, to avoid equifinality and identifiability issues. This limits the number and level of complexity of dominant hydrologic processes the model can represent. Specific purposes and places thus require a specific model and this has led to an abundance of conceptual hydrologic models. No structured overview of these models exists and there is no clear method to select appropriate model structures for different catchments. This study is a first step towards creating an overview of the elements that make up conceptual models, which may later assist a modeller in finding an appropriate model structure for a given catchment. To this end, this study brings together over 30 past and present conceptual models. The reviewed model structures are simply different configurations of three basic model elements (stores, fluxes and transformation functions), depending on the hydrologic processes the models are intended to represent. Differences also exist in the inner workings of the stores, fluxes and transformations, i.e. the mathematical formulations that describe each model element's intended behaviour. We investigate the hypothesis that different model structures can produce similar behavioural simulations. This can clarify the overview of model elements by grouping elements which are similar, which can improve model structure selection.
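    The store/flux/transformation decomposition described above can be made concrete with the smallest possible conceptual model, a single linear reservoir. The sketch below is illustrative only (names and parameter values are not from the study): precipitation fills one store S, and outflow is the transformation Q = S/k.

```python
# Minimal single-store conceptual model: one store S (mm), one input flux
# (precipitation P), one output flux Q via a linear transformation Q = S / k.
# Explicit-Euler daily time stepping; all values are illustrative.

def linear_reservoir(precip, k=10.0, s0=0.0):
    """Simulate runoff Q (mm/day) from a linear store with residence time k (days)."""
    s = s0
    q_out = []
    for p in precip:
        q = s / k          # transformation function: flux proportional to storage
        s = s + p - q      # store update: water balance
        q_out.append(q)
    return q_out

q = linear_reservoir([5.0, 0.0, 0.0, 0.0], k=2.0)
# Runoff rises after the rain pulse, then recedes exponentially.
```

    Swapping the transformation (e.g. a nonlinear Q = (S/k)^a) or chaining several such stores reproduces the kind of structural variations the study catalogues.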

  3. Towards structural models of molecular recognition in olfactory receptors.

    Science.gov (United States)

    Afshar, M; Hubbard, R E; Demaille, J

    1998-02-01

    The G protein coupled receptors (GPCR) are an important class of proteins that act as signal transducers through the cytoplasmic membrane. Understanding the structure and activation mechanism of these proteins is crucial for understanding many different aspects of cellular signalling. The olfactory receptors correspond to the largest family of GPCRs. Very little is known about how the structures of the receptors govern the specificity of interaction which enables identification of particular odorant molecules. In this paper, we review recent developments in two areas of molecular modelling: methods for modelling the configuration of trans-membrane helices and methods for automatic docking of ligands into receptor structures. We then show how a subset of these methods can be combined to construct a model of a rat odorant receptor interacting with lyral for which experimental data are available. This modelling can help us make progress towards elucidating the specificity of interactions between receptors and odorant molecules.

  4. An analytically solvable model for rapid evolution of modular structure.

    Directory of Open Access Journals (Sweden)

    Nadav Kashtan

    2009-04-01

    Full Text Available Biological systems often display modularity, in the sense that they can be decomposed into nearly independent subsystems. Recent studies have suggested that modular structure can spontaneously emerge if goals (environments) change over time, such that each new goal shares the same set of sub-problems with previous goals. Such modularly varying goals can also dramatically speed up evolution, relative to evolution under a constant goal. These studies were based on simulations of model systems, such as logic circuits and RNA structure, which are generally not easy to treat analytically. We present here a simple model for evolution under modularly varying goals that can be solved analytically. This model helps to understand some of the fundamental mechanisms that lead to rapid emergence of modular structure under modularly varying goals. In particular, the model suggests a mechanism for the dramatic speedup in evolution observed under such temporally varying goals.

  5. Fluid-structure interaction and structural analyses using a comprehensive mitral valve model with 3D chordal structure.

    Science.gov (United States)

    Toma, Milan; Einstein, Daniel R; Bloodworth, Charles H; Cochran, Richard P; Yoganathan, Ajit P; Kunzelman, Karyn S

    2017-04-01

    Over the years, three-dimensional models of the mitral valve have generally been organized around a simplified anatomy. Leaflets have been typically modeled as membranes, tethered to discrete chordae typically modeled as one-dimensional, non-linear cables. Yet, recent, high-resolution medical images have revealed that there is no clear boundary between the chordae and the leaflets. In fact, the mitral valve has been revealed to be more of a webbed structure whose architecture is continuous with the chordae and their extensions into the leaflets. Such detailed images can serve as the basis of anatomically accurate, subject-specific models, wherein the entire valve is modeled with solid elements that more faithfully represent the chordae, the leaflets, and the transition between the two. These models have the potential to enhance our understanding of mitral valve mechanics and to re-examine the role of the mitral valve chordae, which heretofore have been considered to be 'invisible' to the fluid and to be of secondary importance to the leaflets. However, these new models also require a rethinking of modeling assumptions. In this study, we examine the conventional practice of loading the leaflets only and not the chordae in order to study the structural response of the mitral valve apparatus. Specifically, we demonstrate that fully resolved 3D models of the mitral valve require a fluid-structure interaction analysis to correctly load the valve even in the case of quasi-static mechanics. While a fluid-structure interaction model is still more computationally expensive than a structural-only model, we also show that advances in GPU computing have made such models tractable. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Modeling survival: application of the Andersen-Gill model to Yellowstone grizzly bears

    Science.gov (United States)

    Johnson, Christopher J.; Boyce, Mark S.; Schwartz, Charles C.; Haroldson, Mark A.

    2004-01-01

     Wildlife ecologists often use the Kaplan-Meier procedure or Cox proportional hazards model to estimate survival rates, distributions, and magnitude of risk factors. The Andersen-Gill formulation (A-G) of the Cox proportional hazards model has seen limited application to mark-resight data but has a number of advantages, including the ability to accommodate left-censored data, time-varying covariates, multiple events, and discontinuous intervals of risks. We introduce the A-G model including structure of data, interpretation of results, and assessment of assumptions. We then apply the model to 22 years of radiotelemetry data for grizzly bears (Ursus arctos) of the Greater Yellowstone Grizzly Bear Recovery Zone in Montana, Idaho, and Wyoming, USA. We used Akaike's Information Criterion (AICc) and multi-model inference to assess a number of potentially useful predictive models relative to explanatory covariates for demography, human disturbance, and habitat. Using the most parsimonious models, we generated risk ratios, hypothetical survival curves, and a map of the spatial distribution of high-risk areas across the recovery zone. Our results were in agreement with past studies of mortality factors for Yellowstone grizzly bears. Holding other covariates constant, mortality was highest for bears that were subjected to repeated management actions and inhabited areas with high road densities outside Yellowstone National Park. Hazard models developed with covariates descriptive of foraging habitats were not the most parsimonious, but they suggested that high-elevation areas offered lower risks of mortality when compared to agricultural areas.
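    The counting-process data layout underlying the A-G formulation can be sketched as follows: each animal's follow-up is split into (start, stop] intervals wherever a time-varying covariate changes, with the event indicator set only on the interval in which death occurs. The helper below is a minimal Python illustration; the field name road_density and all values are invented, and a real analysis would pass such rows to survival software rather than stop at this restructuring.

```python
# Restructure one subject's follow-up into Andersen-Gill (start, stop] counting-
# process intervals, splitting whenever a time-varying covariate changes.
# Subject data below are invented for illustration.

def to_counting_process(t_entry, t_exit, died, covariate_changes):
    """covariate_changes: sorted list of (time, value) pairs, first at t_entry."""
    rows = []
    times = [t for t, _ in covariate_changes] + [t_exit]
    for (t, value), t_next in zip(covariate_changes, times[1:]):
        event = 1 if (died and t_next == t_exit) else 0
        rows.append({"start": t, "stop": t_next, "event": event,
                     "road_density": value})
    return rows

# A bear monitored from month 0 to 30, dying at 30; road density changes at month 12.
rows = to_counting_process(0, 30, died=True,
                           covariate_changes=[(0, 0.4), (12, 1.1)])
# Two intervals: (0, 12] with event=0, then (12, 30] with event=1.
```

    The same layout accommodates left-censored entry, discontinuous risk intervals and multiple events simply by emitting additional rows.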

  7. A Surface Modeling Paradigm for Electromagnetic Applications in Aerospace Structures

    OpenAIRE

    Jha, RM; Bokhari, SA; Sudhakar, V; Mahapatra, PR

    1989-01-01

    A systematic approach has been developed to model the surfaces encountered in aerospace engineering for EM applications. The basis of this modeling is the quadric canonical shapes which are the coordinate surfaces of the Eisenhart coordinate systems. The building blocks are visualized as sections of quadric cylinders and surfaces of revolution. These truncated quadrics can successfully model realistic aerospace structures which are termed as hybrid quadrics, of which the satellite launch veh...

  8. Code Development for Control Design Applications: Phase I: Structural Modeling

    International Nuclear Information System (INIS)

    Bir, G. S.; Robinson, M.

    1998-01-01

    The design of integrated controls for a complex system like a wind turbine relies on a system model in an explicit format, e.g., state-space format. Current wind turbine codes focus on turbine simulation and not on system characterization, which is desired for controls design as well as applications like operating turbine model analysis, optimal design, and aeroelastic stability analysis. This paper reviews structural modeling that comprises three major steps: formation of component equations, assembly into system equations, and linearization

  9. Modelling of the Deterioration of Reinforced Concrete Structures

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    Stochastic modelling of the deterioration of reinforced concrete structures is addressed in this paper on the basis of a detailed modelling of corrosion initiation and corrosion cracking. It is proposed that modelling of the deterioration of concrete should be based on a sound understanding of the physical and chemical properties of the concrete. The relationship between rebar corrosion and crack width is investigated. A new service life definition based on evolution of the corrosion crack width is proposed.

  10. Structure, function, and behaviour of computational models in systems biology.

    Science.gov (United States)

    Knüpfer, Christian; Beckstein, Clemens; Dittrich, Peter; Le Novère, Nicolas

    2013-05-31

    Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such "bio-models" necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But, even if computational bio-models themselves are represented precisely in terms of mathematical expressions their full meaning is not yet formally specified and only described in natural language. We present a conceptual framework - the meaning facets - which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: On the one hand it is a mathematical expression which can be used in computational simulations (intrinsic meaning). On the other hand the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model's components (structure), the meaning of the model's intended use (function), and the meaning of the model's dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: First, it specifies and structures the information which biologists have to take into account if they build, use and exchange models. Secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research.

  11. Discriminative training of self-structuring hidden control neural models

    DEFF Research Database (Denmark)

    Sørensen, Helge Bjarup Dissing; Hartmann, Uwe; Hunnerup, Preben

    1995-01-01

    This paper presents a new training algorithm for self-structuring hidden control neural (SHC) models. The SHC models were trained non-discriminatively for speech recognition applications. Better recognition performance can generally be achieved if discriminative training is applied instead. Thus we developed a discriminative training algorithm for SHC models, where each SHC model for a specific speech pattern is trained with utterances of the pattern to be recognized and with other utterances. The discriminative training of SHC neural models has been tested on the TIDIGITS database...

  12. Practical Soil-Shallow Foundation Model for Nonlinear Structural Analysis

    Directory of Open Access Journals (Sweden)

    Moussa Leblouba

    2016-01-01

    Full Text Available Soil-shallow foundation interaction models that are incorporated into most structural analysis programs generally lack accuracy and efficiency or neglect some aspects of foundation behavior. For instance, soil-shallow foundation systems have been observed to show both small and large loops under increasing amplitude load reversals. This paper presents a practical macroelement model for soil-shallow foundation system and its stability under simultaneous horizontal and vertical loads. The model comprises three spring elements: nonlinear horizontal, nonlinear rotational, and linear vertical springs. The proposed macroelement model was verified using experimental test results from large-scale model foundations subjected to small and large cyclic loading cases.

  13. Continuous Time Structural Equation Modeling with R Package ctsem

    Directory of Open Access Journals (Sweden)

    Charles C. Driver

    2017-04-01

    Full Text Available We introduce ctsem, an R package for continuous time structural equation modeling of panel (N > 1) and time series (N = 1) data, using full information maximum likelihood. Most dynamic models (e.g., cross-lagged panel models) in the social and behavioural sciences are discrete time models. An assumption of discrete time models is that time intervals between measurements are equal, and that all subjects were assessed at the same intervals. Violations of this assumption are often ignored due to the difficulty of accounting for varying time intervals, therefore parameter estimates can be biased and the time course of effects becomes ambiguous. By using stochastic differential equations to estimate an underlying continuous process, continuous time models allow for any pattern of measurement occasions. By interfacing to OpenMx, ctsem combines the flexible specification of structural equation models with the enhanced data gathering opportunities and improved estimation of continuous time models. ctsem can estimate relationships over time for multiple latent processes, measured by multiple noisy indicators with varying time intervals between observations. Within and between effects are estimated simultaneously by modeling both observed covariates and unobserved heterogeneity. Exogenous shocks with different shapes, group differences, higher order diffusion effects and oscillating processes can all be simply modeled. We first introduce and define continuous time models, then show how to specify and estimate a range of continuous time models using ctsem.
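    The interval dependence at the heart of the discrete/continuous distinction can be illustrated for a single latent process: a continuous-time drift A implies a discrete-time autoregressive coefficient exp(A·Δt), so unequal measurement intervals imply unequal discrete coefficients. The sketch below is a univariate Python illustration, not ctsem's R interface.

```python
import math

# For a univariate continuous-time process dx = A * x dt (A < 0, i.e. decay),
# the implied discrete-time autoregressive coefficient over interval dt is
# exp(A * dt). It is interval-dependent, which is exactly what discrete-time
# cross-lagged models ignore when measurement intervals vary.

def discrete_ar(A, dt):
    return math.exp(A * dt)

A = -0.5                       # continuous drift per unit time (illustrative)
a1 = discrete_ar(A, 1.0)       # persistence over a 1-unit interval, ~0.607
a2 = discrete_ar(A, 2.0)       # weaker persistence over a 2-unit interval
```

    Note a2 equals a1 squared: chaining two 1-unit intervals is the same as one 2-unit interval, a consistency property a discrete-time model with a single fixed coefficient cannot express.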

  14. Systematics and morphological evolution within the moss family Bryaceae: a comparison between parsimony and Bayesian methods for reconstruction of ancestral character states.

    Science.gov (United States)

    Pedersen, Niklas; Holyoak, David T; Newton, Angela E

    2007-06-01

    The Bryaceae are a large cosmopolitan moss family including genera of significant morphological and taxonomic complexity. Phylogenetic relationships within the Bryaceae were reconstructed based on DNA sequence data from all three genomic compartments. In addition, maximum parsimony and Bayesian inference were employed to reconstruct ancestral character states of 38 morphological plus four habitat characters and eight insertion/deletion events. The recovered phylogenetic patterns are generally in accord with previous phylogenies based on chloroplast DNA sequence data and three major clades are identified. The first clade comprises Bryum bornholmense, B. rubens, B. caespiticium, and Plagiobryum. This corroborates the hypothesis suggested by previous studies that several Bryum species are more closely related to Plagiobryum than to the core Bryum species. The second clade includes Acidodontium, Anomobryum, and Haplodontium, while the third clade contains the core Bryum species plus Imbribryum. Within the latter clade, B. subapiculatum and B. tenuisetum form the sister clade to Imbribryum. Reconstructions of ancestral character states under maximum parsimony and Bayesian inference suggest fourteen morphological synapomorphies for the ingroup and synapomorphies are detected for most clades within the ingroup. Maximum parsimony and Bayesian reconstructions of ancestral character states are mostly congruent although Bayesian inference shows that the posterior probability of ancestral character states may decrease dramatically when node support is taken into account. Bayesian inference also indicates that reconstructions may be ambiguous at internal nodes for highly polymorphic characters.
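    The maximum parsimony reconstruction of ancestral character states referred to above can be sketched with the classic Fitch procedure for an unordered character: a node's state set is the intersection of its children's sets when that is non-empty, otherwise their union, with each union counting one step. The four-taxon tree and leaf states below are hypothetical.

```python
# Fitch parsimony for one unordered character on a rooted binary tree.
# Tree topology and character states are hypothetical, for illustration only.

def fitch(node, states, steps=None):
    """node: leaf name or (left, right) tuple; states: leaf -> set of states.
    Returns (state set at node, parsimony steps so far)."""
    if steps is None:
        steps = [0]
    if not isinstance(node, tuple):
        return states[node], steps[0]
    left, _ = fitch(node[0], states, steps)
    right, _ = fitch(node[1], states, steps)
    common = left & right
    if common:
        return common, steps[0]
    steps[0] += 1          # a state change is required somewhere below this node
    return left | right, steps[0]

tree = (("A", "B"), ("C", "D"))
leaf_states = {"A": {0}, "B": {0}, "C": {1}, "D": {0}}
root_set, n_steps = fitch(tree, leaf_states)
# root_set == {0}, n_steps == 1: a single change explains the data on this tree.
```

    Bayesian reconstruction replaces these hard state sets with posterior probabilities for each state at each node, which is why the two approaches can disagree for highly polymorphic characters.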

  15. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique for calibrating numerical models of structures in civil, mechanical, automotive, marine, aerospace and related engineering fields. The basic concept behind this technique is updating the numerical models to closely match experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool and with mathematical equations that define the experimental model. The firefly algorithm is used as an optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model developed with the experimental results obtained. The variables for the updating can be either material or geometrical properties of the model or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both the models are updated with their respective response values obtained from experimental results. The numerical results after updating show that there is a close relationship that can be brought between the experimental and the numerical models.
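    A minimal sketch of the firefly algorithm in the model-updating role described above: each candidate parameter value moves toward brighter (lower-mismatch) candidates with distance-decaying attractiveness plus a shrinking random walk. The one-parameter objective (matching a model natural frequency sqrt(k/m) to a measured target) and all constants are invented for illustration.

```python
import math, random

# Minimal one-dimensional firefly algorithm. Objective: squared mismatch
# between a model natural frequency sqrt(k/m) and a "measured" target
# frequency. All numbers are illustrative, not from the study.

def objective(k, m=1.0, f_target=2.0):
    return (math.sqrt(k / m) - f_target) ** 2

def firefly(n=15, iters=200, beta0=1.0, gamma=1.0, alpha=0.1, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.1, 10.0) for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if objective(pop[j]) < objective(pop[i]):   # firefly j is brighter
                    r2 = (pop[i] - pop[j]) ** 2
                    beta = beta0 * math.exp(-gamma * r2)    # distance-decaying attractiveness
                    pop[i] += beta * (pop[j] - pop[i]) + alpha * rng.uniform(-0.5, 0.5)
                    pop[i] = min(max(pop[i], 0.1), 10.0)    # keep within bounds
        alpha *= 0.97                                       # cool the random walk
    return min(pop, key=objective)

k_best = firefly()
# k_best should approach 4.0, since sqrt(4.0 / 1.0) == 2.0.
```

    In the actual updating problem the objective would compare the finite-element model's response (tip deflection or natural frequencies) against measurements, with material or geometric properties as the search variables.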

  16. Accurate SHAPE-directed RNA secondary structure modeling, including pseudoknots.

    Science.gov (United States)

    Hajdin, Christine E; Bellaousov, Stanislav; Huggins, Wayne; Leonard, Christopher W; Mathews, David H; Weeks, Kevin M

    2013-04-02

    A pseudoknot forms in an RNA when nucleotides in a loop pair with a region outside the helices that close the loop. Pseudoknots occur relatively rarely in RNA but are highly overrepresented in functionally critical motifs in large catalytic RNAs, in riboswitches, and in regulatory elements of viruses. Pseudoknots are usually excluded from RNA structure prediction algorithms. When included, these pairings are difficult to model accurately, especially in large RNAs, because allowing this structure dramatically increases the number of possible incorrect folds and because it is difficult to search the fold space for an optimal structure. We have developed a concise secondary structure modeling approach that combines SHAPE (selective 2'-hydroxyl acylation analyzed by primer extension) experimental chemical probing information and a simple, but robust, energy model for the entropic cost of single pseudoknot formation. Structures are predicted with iterative refinement, using a dynamic programming algorithm. This melded experimental and thermodynamic energy function predicted the secondary structures and the pseudoknots for a set of 21 challenging RNAs of known structure ranging in size from 34 to 530 nt. On average, 93% of known base pairs were predicted, and all pseudoknots in well-folded RNAs were identified.
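    SHAPE probing data are typically folded into the thermodynamic model as a per-nucleotide pseudo-free-energy change of the form ΔG_SHAPE = m·ln(reactivity + 1) + b applied to paired nucleotides. The sketch below uses commonly cited slope/intercept values; treat the exact parameters as assumptions rather than the values fitted in this study.

```python
import math

# SHAPE reactivities converted to a per-nucleotide pseudo-free-energy term:
# dG_SHAPE = m * ln(reactivity + 1) + b, added for each paired nucleotide.
# m and b below are commonly cited values (kcal/mol); treat as assumptions.

def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
    # Highly reactive (flexible, likely unpaired) nucleotides receive a large
    # positive penalty for pairing; unreactive ones get a small bonus (b < 0).
    return m * math.log(reactivity + 1.0) + b

dg_unreactive = shape_pseudo_energy(0.0)   # -0.8: pairing slightly favoured
dg_reactive = shape_pseudo_energy(2.0)     # positive: pairing penalized
```

    The pseudoknot extension described in the abstract adds a separate entropic penalty term for forming the crossing helix on top of this per-nucleotide contribution.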

  17. A skeleton model for the MJO with refined vertical structure

    Science.gov (United States)

    Thual, Sulian; Majda, Andrew J.

    2016-05-01

    The Madden-Julian oscillation (MJO) is the dominant mode of variability in the tropical atmosphere on intraseasonal timescales and planetary spatial scales. The skeleton model is a minimal dynamical model that recovers robustly the most fundamental MJO features of (I) a slow eastward speed of roughly 5 m/s, (II) a peculiar dispersion relation with dω/dk ≈ 0, and (III) a horizontal quadrupole vortex structure. This model depicts the MJO as a neutrally-stable atmospheric wave that involves a simple multiscale interaction between planetary dry dynamics, planetary lower-tropospheric moisture and the planetary envelope of synoptic-scale activity. Here we propose and analyse an extended version of the skeleton model with additional variables accounting for the refined vertical structure of the MJO in nature. The present model reproduces qualitatively the front-to-rear vertical structure of the MJO found in nature, with MJO events marked by a planetary envelope of convective activity transitioning from the congestus to the deep to the stratiform type, in addition to a front-to-rear structure of moisture, winds and temperature. Despite its increased complexity the present model retains several interesting features of the original skeleton model such as a conserved energy and similar linear solutions. We further analyze a model version with a simple stochastic parametrization for the unresolved details of synoptic-scale activity. The stochastic model solutions show intermittent initiation, propagation and shut down of MJO wave trains, as in previous studies, in addition to MJO events with a front-to-rear vertical structure of varying intensity and characteristics from one event to another.

  18. Interactive physically-based structural modeling of hydrocarbon systems

    International Nuclear Information System (INIS)

    Bosson, Mael; Grudinin, Sergei; Bouju, Xavier; Redon, Stephane

    2012-01-01

    Hydrocarbon systems have been intensively studied via numerical methods, including electronic structure computations, molecular dynamics and Monte Carlo simulations. Typically, these methods require an initial structural model (atomic positions and types, topology, etc.) that may be produced using scripts and/or modeling tools. For many systems, however, these building methods may be ineffective, as the user may have to specify the positions of numerous atoms while maintaining structural plausibility. In this paper, we present an interactive physically-based modeling tool to construct structural models of hydrocarbon systems. As the user edits the geometry of the system, atomic positions are also influenced by the Brenner potential, a well-known bond-order reactive potential. In order to be able to interactively edit systems containing numerous atoms, we introduce a new adaptive simulation algorithm, as well as a novel algorithm to incrementally update the forces and the total potential energy based on the list of updated relative atomic positions. The computational cost of the adaptive simulation algorithm depends on user-defined error thresholds, and that of our potential update algorithm scales linearly with the number of updated bonds. This allows us to enable efficient physically-based editing, since the computational cost is decoupled from the number of atoms in the system. We show that our approach may be used to effectively build realistic models of hydrocarbon structures that would be difficult or impossible to produce using other tools.
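    The incremental potential-update idea can be sketched with a cached per-bond energy: when the user moves atoms, only bonds on the updated list are re-evaluated, so the cost is linear in the number of updated bonds rather than in system size. The harmonic bond energy below is a simple stand-in for the (far more complex) Brenner potential, and the 1-D positions are purely illustrative.

```python
# Incremental total-energy update: keep per-bond energies cached and
# recompute only the bonds on the updated list. The harmonic bond energy
# here is a stand-in for the Brenner bond-order potential.

def bond_energy(positions, bond, k=1.0, r0=1.0):
    i, j = bond
    r = abs(positions[i] - positions[j])   # 1-D positions for brevity
    return 0.5 * k * (r - r0) ** 2

def update_energy(total, cache, positions, updated_bonds):
    """Cost is linear in len(updated_bonds), independent of system size."""
    for bond in updated_bonds:
        total -= cache[bond]                       # remove stale contribution
        cache[bond] = bond_energy(positions, bond) # re-evaluate this bond only
        total += cache[bond]
    return total

positions = [0.0, 1.0, 2.5]
bonds = [(0, 1), (1, 2)]
cache = {b: bond_energy(positions, b) for b in bonds}
total = sum(cache.values())
positions[2] = 2.0                                 # user drags atom 2
total = update_energy(total, cache, positions, [(1, 2)])
# total now matches a full recomputation over both bonds.
```

    For a reactive potential the updated list must also include bonds whose order terms depend on the moved atoms' neighbours, but the bookkeeping pattern is the same.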

  19. Algebraic Modeling of Topological and Computational Structures and Applications

    CERN Document Server

    Theodorou, Doros; Stefaneas, Petros; Kauffman, Louis

    2017-01-01

    This interdisciplinary book covers a wide range of subjects, from pure mathematics (knots, braids, homotopy theory, number theory) to more applied mathematics (cryptography, algebraic specification of algorithms, dynamical systems) and concrete applications (modeling of polymers and ionic liquids, video, music and medical imaging). The main mathematical focus throughout the book is on algebraic modeling with particular emphasis on braid groups. The research methods include algebraic modeling using topological structures, such as knots, 3-manifolds, classical homotopy groups, and braid groups. The applications address the simulation of polymer chains and ionic liquids, as well as the modeling of natural phenomena via topological surgery. The treatment of computational structures, including finite fields and cryptography, focuses on the development of novel techniques. These techniques can be applied to the design of algebraic specifications for systems modeling and verification. This book is the outcome of a w...

  20. Structural identifiability analysis of a cardiovascular system model.

    Science.gov (United States)

    Pironet, Antoine; Dauby, Pierre C; Chase, J Geoffrey; Docherty, Paul D; Revie, James A; Desaive, Thomas

    2016-05-01

    The six-chamber cardiovascular system model of Burkhoff and Tyberg has been used in several theoretical and experimental studies. However, this cardiovascular system model (and others derived from it) is not structurally identifiable from every output set. In this work, two such cases of structural non-identifiability are first presented. These cases occur when the model output set only contains a single type of information (pressure or volume). A specific output set is thus chosen, mixing pressure and volume information and containing only a limited number of clinically available measurements. Then, by manipulating the model equations involving these outputs, it is demonstrated that the six-chamber cardiovascular system model is structurally globally identifiable. A further simplification is made, assuming known cardiac valve resistances. Because of the poor practical identifiability of these four parameters, this assumption is common. Under this hypothesis, the six-chamber cardiovascular system model is structurally identifiable from an even smaller dataset. As a consequence, parameter values computed from limited but well-chosen datasets are theoretically unique. This means that the parameter identification procedure can safely be performed on the model from such a well-chosen dataset. Thus, the model may be considered suitable for use in diagnosis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
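    The flavour of structural non-identifiability described here can be reproduced in a toy model (not the Burkhoff-Tyberg equations): if two resistances appear in the output only through their sum, distinct parameter pairs produce identical data, and identifiability is restored only by adding an output that separates them.

```python
# Toy structural-identifiability check: flow through two series resistances
# under pressure drop dP is q = dP / (R1 + R2), so only R1 + R2 is
# identifiable; (R1, R2) = (1, 3) and (2, 2) produce identical outputs.
# This is an illustration, not the six-chamber model's equations.

def flow(dP, R1, R2):
    return dP / (R1 + R2)

pressures = [2.0, 5.0, 7.5]
out_a = [flow(p, 1.0, 3.0) for p in pressures]
out_b = [flow(p, 2.0, 2.0) for p in pressures]
assert out_a == out_b   # indistinguishable: a structurally non-identifiable pair
# Adding an output depending on R1 alone (e.g. the pressure between the two
# resistances) breaks the symmetry and restores identifiability.
```

    Mixing pressure and volume outputs in the six-chamber model plays exactly this symmetry-breaking role.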

  1. A resource for benchmarking the usefulness of protein structure models

    Directory of Open Access Journals (Sweden)

    Carbajo Daniel

    2012-08-01

    Full Text Available Abstract Background Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on the knowledge of the three-dimensional structure of the protein of interest. However it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. Results This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess which is the specific threshold of accuracy required to perform the task effectively. Conclusions The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. Implementation, availability and requirements Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free.

  2. Constitutive model and electroplastic analysis of structures under cyclic loading

    International Nuclear Information System (INIS)

    Wang, X.; Lei, Y; Du, Q.

    1989-01-01

    Many engineering structures in nuclear reactors, thermal power stations, chemical plants and aerospace vehicles are subjected to cyclic mechanic-thermal loading, which is the main cause of structural fatigue failure. Over the past twenty years, designers and researchers have paid great attention to the research on life prediction and elastoplastic analysis of structures under cyclic loading. One of the key problems in elastoplastic analysis is to construct a reasonable constitutive model for cyclic plasticity. In the paper, the constitutive equations are briefly outlined. Then, the model is implemented in a finite element code to predict the response of cyclic loaded structural components such as a double-edge-notched plate, a grooved bar and a nozzle in spherical shell. Numerical results are compared with those from other theories and experiments

  3. Model tool to describe chemical structures in XML format utilizing structural fragments and chemical ontology.

    Science.gov (United States)

    Sankar, Punnaivanam; Krief, Alain; Aghila, Gnanasekaran

    2010-05-24

    We have developed a model structure-editing tool, ChemEd, programmed in JAVA, which allows drawing chemical structures on a graphical user interface (GUI) by selecting appropriate structural fragments defined in a fragment library. The terms representing the structural fragments are organized in a fragment ontology to provide conceptual support. ChemEd describes the chemical structure in an XML document (ChemFul) with rich semantics, explicitly encoding the details of the chemical bonding, the hybridization status, and the electron environment around each atom. The document can be further processed through suitable algorithms and with the support of external chemical ontologies to generate understandable reports about the functional groups present in the structure and their specific environment.

  4. Hide and vanish: data sets where the most parsimonious tree is known but hard to find, and their implications for tree search methods.

    Science.gov (United States)

    Goloboff, Pablo A

    2014-10-01

    Three different types of data sets, for which the uniquely most parsimonious tree can be known exactly but is hard to find with heuristic tree search methods, are studied. Tree searches are complicated more by the shape of the tree landscape (i.e. the distribution of homoplasy on different trees) than by the sheer abundance of homoplasy or character conflict. Data sets of Type 1 are those constructed by Radel et al. (2013). Data sets of Type 2 present a very rugged landscape, with narrow peaks and valleys, but relatively low amounts of homoplasy. For such a tree landscape, subjecting the trees to TBR and saving suboptimal trees produces much better results when the sequence of clipping for the tree branches is randomized instead of fixed. An unexpected finding for data sets of Types 1 and 2 is that starting a search from a random tree instead of a random addition sequence Wagner tree may increase the probability that the search finds the most parsimonious tree; a small artificial example where these probabilities can be calculated exactly is presented. Data sets of Type 3, the most difficult data sets studied here, comprise only congruent characters, and a single island with only one most parsimonious tree. Even if there is a single island, missing entries create a very flat landscape which is difficult to traverse with tree search algorithms because the number of equally parsimonious trees that need to be saved and swapped to effectively move around the plateaus is too large. Minor modifications of the parameters of tree drifting, ratchet, and sectorial searches allow travelling around these plateaus much more efficiently than saving and swapping large numbers of equally parsimonious trees with TBR. For these data sets, two new related criteria for selecting taxon addition sequences in Wagner trees (the "selected" and "informative" addition sequences) produce much better results than the standard random or closest addition sequences. 
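
    The parsimony criterion that these heuristic searches optimize can be made concrete with a minimal Fitch-scoring sketch. This is a hedged illustration on hypothetical four-taxon data; the Wagner-tree, TBR, ratchet and drifting machinery discussed in the abstract is far more elaborate than this.

```python
def fitch_score(tree, states):
    """Fitch parsimony score of a rooted binary tree for one character.

    tree: nested 2-tuples with leaf-name strings at the tips,
          e.g. (("A", "B"), ("C", "D"))
    states: dict mapping leaf name -> observed character state
    Returns (possible_states_at_this_node, changes_so_far).
    """
    if isinstance(tree, str):                     # leaf: its observed state
        return {states[tree]}, 0
    left_set, left_cost = fitch_score(tree[0], states)
    right_set, right_cost = fitch_score(tree[1], states)
    common = left_set & right_set
    if common:                                    # intersection: no change needed
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1   # union: +1 change

# Hypothetical binary character on four taxa
states = {"A": 0, "B": 0, "C": 1, "D": 1}
_, best = fitch_score((("A", "B"), ("C", "D")), states)
_, worse = fitch_score((("A", "C"), ("B", "D")), states)
print(best, worse)  # prints 1 2: the first tree needs fewer changes
```

    Summing such scores over all characters and minimizing over trees is what makes one tree "most parsimonious"; the difficulty the abstract studies is that the landscape of these scores over tree space can be rugged or flat in ways that defeat standard heuristics.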

  5. A STRUCTURAL MODEL OF AN EXCAVATOR WORKFLOW CONTROL SYSTEM

    Directory of Open Access Journals (Sweden)

    A. Gurko

    2016-12-01

    Full Text Available Improving earthwork efficiency is connected with excavator automation. In this paper, on the basis of an analysis of the problems that a hydraulic excavator control system has to solve, a hierarchical structure for the control system is proposed. A decomposition of the control process was carried out, which allowed the development of a structural model that reflects the characteristics of a multilevel, spatially distributed control system for an excavator workflow.

  6. Plant lessons: exploring ABCB functionality through structural modeling

    Directory of Open Access Journals (Sweden)

    Aurélien Bailly

    2012-01-01

    Full Text Available In contrast to mammalian ABCB1 proteins, narrow substrate specificity has been extensively documented for plant orthologs shown to catalyze the transport of the plant hormone, auxin. Using the crystal structures of the multidrug exporters Sav1866 and MmABCB1 as templates, we have developed structural models of plant ABCB proteins with a common architecture. Comparisons of these structures identified kingdom-specific candidate substrate-binding regions within the translocation chamber formed by the transmembrane domains of ABCBs from the model plant Arabidopsis. These results suggest an early evolutionary divergence of plant and mammalian ABCBs. Validation of these models becomes a priority for efforts to elucidate ABCB function and manipulate this class of transporters to enhance plant productivity and quality.

  7. Discrete Discriminant analysis based on tree-structured graphical models

    DEFF Research Database (Denmark)

    Perez de la Cruz, Gonzalo; Eslava, Guillermina

    The purpose of this paper is to illustrate the potential use of discriminant analysis based on tree-structured graphical models for discrete variables. This is done by comparing its empirical performance, using estimated error rates, on real and simulated data. The results show that discriminant analysis based on tree-structured graphical models is a simple nonlinear method competitive with, and sometimes superior to, other well-known linear methods such as those assuming mutual independence between variables and linear logistic regression.

  8. Modeling structural change in spatial system dynamics: A Daisyworld example.

    Science.gov (United States)

    Neuwirth, C.; Peck, A.; Simonović, S. P.

    2015-03-01

    System dynamics (SD) is an effective approach for helping reveal the temporal behavior of complex systems. Although there have been recent developments in expanding SD to include systems' spatial dependencies, most applications have been restricted to the simulation of diffusion processes; this is especially true for models on structural change (e.g. LULC modeling). To address this shortcoming, a Python program is proposed to tightly couple SD software to a Geographic Information System (GIS). The approach provides the required capacities for handling bidirectional and synchronized interactions of operations between SD and GIS. In order to illustrate the concept and the techniques proposed for simulating structural changes, a fictitious environment called Daisyworld has been recreated in a spatial system dynamics (SSD) environment. The comparison of spatial and non-spatial simulations emphasizes the importance of considering spatio-temporal feedbacks. Finally, practical applications of structural change models in agriculture and disaster management are proposed.
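
    In its classic zero-dimensional form, Daisyworld is itself a small system-dynamics model of stocks (daisy area fractions) and flows (growth and death). The following Euler-integrated sketch uses the commonly quoted Watson and Lovelock (1983) textbook constants, not the paper's spatial SSD configuration:

```python
# Zero-dimensional Daisyworld, Euler-integrated (textbook constants).
S, SIGMA, Q, GAMMA = 917.0, 5.67e-8, 20.0, 0.3   # insolation, Stefan-Boltzmann,
ALBEDO = {"ground": 0.5, "white": 0.75, "black": 0.25}  # heat transfer, death rate

def step(a_w, a_b, lum, dt=0.01):
    """One Euler step for the white/black daisy area fractions."""
    a_g = max(0.0, 1.0 - a_w - a_b)               # bare-ground fraction
    A = a_g * ALBEDO["ground"] + a_w * ALBEDO["white"] + a_b * ALBEDO["black"]
    Te = (S * lum * (1.0 - A) / SIGMA) ** 0.25    # effective temperature [K]

    def growth(albedo_i):                         # parabolic growth law
        Ti = Q * (A - albedo_i) + Te              # linearized local temperature
        return max(1.0 - 0.003265 * (295.5 - Ti) ** 2, 0.0)

    a_w += dt * a_w * (a_g * growth(ALBEDO["white"]) - GAMMA)
    a_b += dt * a_b * (a_g * growth(ALBEDO["black"]) - GAMMA)
    return a_w, a_b, Te

a_w = a_b = 0.2                                   # seed populations
for _ in range(20000):
    a_w, a_b, Te = step(a_w, a_b, lum=1.0)        # run toward equilibrium
```

    The feedback loop (daisy cover alters albedo, albedo alters temperature, temperature alters growth) is exactly the kind of stock-and-flow structure that the proposed SD-GIS coupling distributes over space.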

  9. Soil Structure - A Neglected Component of Land-Surface Models

    Science.gov (United States)

    Fatichi, S.; Or, D.; Walko, R. L.; Vereecken, H.; Kollet, S. J.; Young, M.; Ghezzehei, T. A.; Hengl, T.; Agam, N.; Avissar, R.

    2017-12-01

    Soil structure is largely absent in most standard sampling and measurements and in the subsequent parameterization of soil hydraulic properties deduced from soil maps and used in Earth System Models. The omission propagates into the pedotransfer functions that deduce parameters of soil hydraulic properties primarily from soil textural information. Such simple parameterization is an essential ingredient in the practical application of any land surface model. Despite the critical role of soil structure (biopores formed by decaying roots, aggregates, etc.) in defining soil hydraulic functions, only a few studies have attempted to incorporate soil structure into models. They mostly looked at the effects on preferential flow and solute transport pathways at the soil profile scale; yet, the role of soil structure in mediating large-scale fluxes remains understudied. Here, we focus on rectifying this gap and demonstrating potential impacts on surface and subsurface fluxes and system-wide eco-hydrologic responses. The study proposes a systematic way of correcting the soil water retention and hydraulic conductivity functions to account for soil structure, with major implications for near-saturated hydraulic conductivity. Modification of the basic soil hydraulic parameterization is assumed to be a function of biological activity, summarized by Gross Primary Production. A land-surface model with dynamic vegetation is used to carry out numerical simulations with and without the role of soil structure for 20 locations characterized by different climates and biomes across the globe. Including soil structure considerably affects the partitioning between infiltration and runoff and, consequently, leakage at the base of the soil profile (recharge). In several locations characterized by wet climates, a few hundred mm per year of surface runoff become deep recharge when soil structure is accounted for. Changes in energy fluxes, total evapotranspiration and vegetation productivity

  10. Geological-structural models used in SR 97. Uncertainty analysis

    Energy Technology Data Exchange (ETDEWEB)

    Saksa, P.; Nummela, J. [FINTACT Oy (Finland)

    1998-10-01

    The uncertainty of geological-structural models was studied for the three sites in SR 97, called Aberg, Beberg and Ceberg. The evaluation covered both regional and site scale models, the emphasis being placed on fracture zones at the site scale. Uncertainty is a natural feature of all geoscientific investigations. It originates from measurements (errors in data, sampling limitations, scale variation) and conceptualisation (structural geometries and properties, ambiguous geometric or parametric solutions), to name the major sources. The structures of A-, B- and Ceberg are fracture zones of varying types. No major differences in the conceptualisation between the sites were noted. One source of uncertainty in the site models is the non-existence of fracture and zone information at the scale from 10 to 300 - 1000 m. At Aberg the development of the regional model has been performed very thoroughly. At the site scale, one major source of uncertainty is that a clear definition of the target area is missing. Structures encountered in the boreholes are well explained, and an interdisciplinary approach to interpretation has been taken. The Beberg and Ceberg regional models contain relatively large uncertainties due to the investigation methodology and experience available at that time. At the site scale, six additional structures were proposed for both Beberg and Ceberg for variant analysis of these sites. Both sites include uncertainty in the form of many non-interpreted fractured sections along the boreholes. Statistical analysis gives high occurrences of structures for all three sites: typically 20 - 30 structures/km³. Aberg has the highest structural frequency, Beberg comes next and Ceberg has the lowest. The borehole configuration, orientations and surveying goals were inspected to find whether preferences or factors causing bias were present. Data from Aberg supports the conclusion that the Aespoe sub-volume would be an anomalously fractured, tectonised unit of its own.

  11. Geological-structural models used in SR 97. Uncertainty analysis

    International Nuclear Information System (INIS)

    Saksa, P.; Nummela, J.

    1998-10-01

    The uncertainty of geological-structural models was studied for the three sites in SR 97, called Aberg, Beberg and Ceberg. The evaluation covered both regional and site scale models, the emphasis being placed on fracture zones at the site scale. Uncertainty is a natural feature of all geoscientific investigations. It originates from measurements (errors in data, sampling limitations, scale variation) and conceptualisation (structural geometries and properties, ambiguous geometric or parametric solutions), to name the major sources. The structures of A-, B- and Ceberg are fracture zones of varying types. No major differences in the conceptualisation between the sites were noted. One source of uncertainty in the site models is the non-existence of fracture and zone information at the scale from 10 to 300 - 1000 m. At Aberg the development of the regional model has been performed very thoroughly. At the site scale, one major source of uncertainty is that a clear definition of the target area is missing. Structures encountered in the boreholes are well explained, and an interdisciplinary approach to interpretation has been taken. The Beberg and Ceberg regional models contain relatively large uncertainties due to the investigation methodology and experience available at that time. At the site scale, six additional structures were proposed for both Beberg and Ceberg for variant analysis of these sites. Both sites include uncertainty in the form of many non-interpreted fractured sections along the boreholes. Statistical analysis gives high occurrences of structures for all three sites: typically 20 - 30 structures/km³. Aberg has the highest structural frequency, Beberg comes next and Ceberg has the lowest. The borehole configuration, orientations and surveying goals were inspected to find whether preferences or factors causing bias were present. Data from Aberg supports the conclusion that the Aespoe sub-volume would be an anomalously fractured, tectonised unit of its own.

  12. Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions

    KAUST Repository

    Najibi, Seyed Morteza

    2017-02-08

    Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric splines, which are more efficient than existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.
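
    The abstract's trigonometric-spline estimator is considerably more sophisticated, but the core issue it addresses, density estimation that respects the periodicity of backbone angles, can be sketched with a simple von Mises kernel estimator on hypothetical angle data (this is a generic circular-statistics illustration, not the authors' method):

```python
import math
import random

def vm_kde(angles, x, kappa=25.0):
    """Periodic (von Mises) kernel density estimate at angle x (radians).

    A Gaussian kernel would leak probability mass across the -pi/pi seam;
    the von Mises kernel wraps around the circle correctly.
    """
    # modified Bessel function I0(kappa) via its power series (math has no i0)
    i0 = sum((kappa / 2.0) ** (2 * k) / math.factorial(k) ** 2 for k in range(40))
    norm = 2.0 * math.pi * i0 * len(angles)
    return sum(math.exp(kappa * math.cos(x - a)) for a in angles) / norm

# Hypothetical backbone phi angles clustered near -60 deg (alpha-helical region)
random.seed(0)
phi = [random.vonmisesvariate(5 * math.pi / 3, 8.0) for _ in range(400)]
at_cluster = vm_kde(phi, 5 * math.pi / 3)   # density at the cluster centre
far_away = vm_kde(phi, 2 * math.pi / 3)     # density at the antipodal angle
```

    Estimating one such density per population, and then sharing basis coefficients across populations, is roughly the "collective estimation" idea the abstract describes.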

  13. Coarse-grained description of cosmic structure from Szekeres models

    International Nuclear Information System (INIS)

    Sussman, Roberto A.; Gaspar, I. Delgado; Hidalgo, Juan Carlos

    2016-01-01

    We show that the full dynamical freedom of the well known Szekeres models allows for the description of elaborated 3-dimensional networks of cold dark matter structures (over-densities and/or density voids) undergoing "pancake" collapse. By reducing Einstein's field equations to a set of evolution equations, which themselves reduce in the linear limit to evolution equations for linear perturbations, we determine the dynamics of such structures, with the spatial comoving location of each structure uniquely specified by standard early Universe initial conditions. By means of a representative example we examine in detail the density contrast, the Hubble flow and peculiar velocities of structures that evolved, from linear initial data at the last scattering surface, to fully non-linear 10-20 Mpc scale configurations today. To motivate further research, we provide a qualitative discussion on the connection of Szekeres models with linear perturbations and the pancake collapse of the Zeldovich approximation. This type of structure modelling provides a coarse-grained (but fully relativistic, non-linear and non-perturbative) description of evolving large scale cosmic structures before their virialisation, and as such it has an enormous potential for applications in cosmological research

  14. Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions

    KAUST Repository

    Najibi, Seyed Morteza; Maadooliat, Mehdi; Zhou, Lan; Huang, Jianhua Z.; Gao, Xin

    2017-01-01

    Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric splines, which are more efficient than existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.

  15. Structural Equation Modeling with Mplus Basic Concepts, Applications, and Programming

    CERN Document Server

    Byrne, Barbara M

    2011-01-01

    Modeled after Barbara Byrne's other best-selling structural equation modeling (SEM) books, this practical guide reviews the basic concepts and applications of SEM using Mplus Versions 5 & 6. The author reviews SEM applications based on actual data taken from her own research. Using non-mathematical language, it is written for the novice SEM user. In each application chapter, the author "walks" the reader through all steps involved in testing the SEM model, including an explanation of the issues addressed and illustrated and annotated testing of the hypothesized and post hoc models.

  16. Selected Aspects of Computer Modeling of Reinforced Concrete Structures

    Directory of Open Access Journals (Sweden)

    Szczecina M.

    2016-03-01

    Full Text Available The paper presents some important aspects concerning the material constants of concrete and the stages of modeling of reinforced concrete structures. The problems taken into account are: the choice of a proper material model for concrete, establishing the compressive and tensile behavior of concrete, and establishing the values of the dilation angle, fracture energy and relaxation time for concrete. Proper values of the material constants are fixed in simple compression and tension tests. The effectiveness and correctness of the applied model are checked on the example of reinforced concrete frame corners under an opening bending moment. Calculations are performed in Abaqus software using the Concrete Damaged Plasticity model of concrete.

  17. Mechanical modeling of the growth of salt structures

    Energy Technology Data Exchange (ETDEWEB)

    Alfaro, Ruben Alberto Mazariegos [Texas A & M Univ., College Station, TX (United States)

    1993-05-01

    A 2D numerical model for studying the morphology and history of salt structures by way of computer simulations is presented. The model is based on conservation laws for physical systems, a fluid marker equation to keep track of the salt/sediments interface, and two constitutive laws for rocksalt. When buoyancy alone is considered, the fluid-assisted diffusion model predicts evolution of salt structures 2.5 times faster than the power-law creep model. Both rheological laws predict strain rates of the order of 4.0 × 10⁻¹⁵ s⁻¹ for similar structural maturity levels of salt structures. Equivalent stresses and viscosities predicted by the fluid-assisted diffusion law are 10² times smaller than those predicted by the power-law creep rheology. Use of East Texas Basin sedimentation rates and power-law creep rheology indicates that differential loading is an effective mechanism to induce perturbations that amplify and evolve into mature salt structures, similar to those observed under natural geological conditions.
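
    The general form of the power-law creep rheology mentioned above can be sketched as follows; the constants here are illustrative placeholders, not the dissertation's calibrated rocksalt parameters:

```python
import math

# Illustrative power-law creep: eps_dot = A * sigma^n * exp(-Q / (R*T)).
# A, n, Q are hypothetical placeholder values, chosen only for the demo.
A, n, Q, R = 1.0e-5, 4.5, 5.0e4, 8.314   # [MPa^-n s^-1], -, [J/mol], [J/(mol K)]

def strain_rate(sigma_mpa, T_kelvin):
    """Steady-state creep rate for a given differential stress [MPa]."""
    return A * sigma_mpa ** n * math.exp(-Q / (R * T_kelvin))

def effective_viscosity(sigma_mpa, T_kelvin):
    """eta = sigma / (3 * eps_dot), converted to Pa*s."""
    return sigma_mpa * 1e6 / (3.0 * strain_rate(sigma_mpa, T_kelvin))
```

    Because n > 1, doubling the stress multiplies the strain rate by 2^n and lowers the effective viscosity, which is why the two rheologies compared in the abstract can agree on strain rate while disagreeing on stress and viscosity by orders of magnitude.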

  18. MAGDM linear-programming models with distinct uncertain preference structures.

    Science.gov (United States)

    Xu, Zeshui S; Chen, Jian

    2008-10-01

    Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.
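
    The flavor of decision making under incomplete weight information can be illustrated without a full LP solver. With interval-constrained weights, each alternative attains an interval of achievable scores; the numbers below are hypothetical and far simpler than the paper's three uncertain preference structures:

```python
# Toy MAGDM instance (hypothetical numbers, not from the paper): three
# alternatives scored on two attributes, with incomplete weight information
# w1 in [0.4, 0.7] and w1 + w2 = 1 (a 1-D slice of an LP feasible set).
decision = {"x1": (0.6, 0.9), "x2": (0.8, 0.5), "x3": (0.7, 0.7)}

def score_range(row, w1_lo=0.4, w1_hi=0.7, steps=301):
    """Min/max weighted score over all admissible weight vectors.

    Scores are linear in w1, so the extremes occur at the interval
    endpoints; a fine grid scan recovers them without an LP solver.
    """
    grid = (w1_lo + i * (w1_hi - w1_lo) / (steps - 1) for i in range(steps))
    scores = [w1 * row[0] + (1.0 - w1) * row[1] for w1 in grid]
    return min(scores), max(scores)

for name, row in decision.items():
    lo, hi = score_range(row)
    print(name, round(lo, 3), round(hi, 3))
```

    Ranking alternatives by comparing such score intervals is the simplest analogue of the paper's approach; the actual models additionally integrate interval utility values, interval fuzzy preference relations, and interval multiplicative preference relations into one linear program.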

  19. Structural characterization and viscoelastic constitutive modeling of skin.

    Science.gov (United States)

    Sherman, Vincent R; Tang, Yizhe; Zhao, Shiteng; Yang, Wen; Meyers, Marc A

    2017-04-15

    A fascinating material, skin has a tensile response which exhibits an extended toe region of minimal stress up to nominal strains that, in some species, exceed 1, followed by significant stiffening until a roughly linear region. The large toe region has been attributed to its unique structure, consisting of a network of curved collagen fibers. Investigation of the structure of rabbit skin reveals that it consists of layers of wavy fibers, each one with a characteristic orientation. Additionally, the existence of two preferred layer orientations is suggested based on the results of small angle X-ray scattering. These observations are used to construct a viscoelastic model consisting of collagen in two orientations, which leads to an in-plane anisotropic response. The structure-based model presented incorporates the elastic straightening and stretching of fibrils, their rotation towards the tensile axis, and the viscous effects which occur in the matrix of the skin due to interfibrillar and interlamellar sliding. The model is shown to effectively capture key features which dictate the mechanical response of skin. Examination by transmission and scanning electron microscopy of rabbit dermis enabled the identification of the key elements in its structure. The organization of collagen fibrils into flat fibers was identified and incorporated into a constitutive model that reproduces the mechanical response of skin. This enhanced quantitative predictive capability can be used in the design of synthetic skin and skin-like structures. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
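
    The toe region attributed to wavy collagen fibers can be sketched with a minimal elastic fiber-recruitment model. The constants are illustrative only, and the paper's constitutive law additionally includes fibril rotation toward the tensile axis and viscous interfibrillar sliding, both omitted here:

```python
# Fiber-recruitment sketch of skin's J-shaped tensile curve: fibers have
# hypothetical slack strains uniformly distributed on [e0, e1] and only
# carry load once straightened.
def stress(strain, k=50.0, e0=0.3, e1=0.9):
    """Nominal stress from fibers that straighten at different strains.

    Below e0 every fiber is still wavy (the "toe" region, zero stress);
    above e1 all fibers are taut and the response is linear with slope k.
    """
    if strain <= e0:
        return 0.0
    s = min(strain, e1)                    # largest slack strain recruited so far
    # closed-form average of k*(strain - x) over recruited slack strains x
    return k * ((strain - e0) ** 2 - (strain - s) ** 2) / (2.0 * (e1 - e0))
```

    The sketch reproduces the three regimes the abstract describes: a long zero-stress toe, a stiffening knee as fibers are progressively recruited, and a roughly linear region once all fibers are taut.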

  20. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.

    2013-01-01

    A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor” Task 2.2 “Lightweight structural design”. The present document describes the results of the comparison simulation runs that were performed by the partners involved within Task 2.2 of the InnWind.Eu project. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade