WorldWideScience

Sample records for linear modeling analyses

  1. A comparison of linear tyre models for analysing shimmy

    Besselink, I.J.M.; Maas, J.W.L.H.; Nijmeijer, H.

    2011-01-01

    A comparison is made between three linear, dynamic tyre models using low speed step responses and yaw oscillation tests. The match with the measurements improves with increasing complexity of the tyre model. Application of the different tyre models to a two degree of freedom trailing arm suspension

  2. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    Daniel T. L. Shek

    2011-01-01

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  3. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
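
    As an aside for readers working outside SPSS, the same class of longitudinal LMMs can be fitted with the lme4 package in R. The sketch below is a minimal, self-contained illustration with simulated six-wave data; the variable names (id, wave, outcome) are invented stand-ins, not the Project P.A.T.H.S. variables, and the random-intercept, random-slope specification mirrors what the SPSS MIXED procedure would estimate.

      # Simulate six waves for 200 subjects with subject-specific intercepts
      # and slopes, then fit a random-intercept, random-slope LMM with lme4.
      library(lme4)
      set.seed(1)
      n <- 200; waves <- 6
      id   <- rep(1:n, each = waves)
      wave <- rep(0:(waves - 1), times = n)
      b0   <- rnorm(n, 0, 1.0)[id]          # subject-specific intercept deviations
      b1   <- rnorm(n, 0, 0.3)[id]          # subject-specific slope deviations
      outcome <- 50 + b0 + (0.8 + b1) * wave + rnorm(n * waves, 0, 2)
      d <- data.frame(id = factor(id), wave = wave, outcome = outcome)

      fit <- lmer(outcome ~ wave + (1 + wave | id), data = d, REML = TRUE)
      summary(fit)   # fixed effect of time plus intercept/slope variance components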

  4. Material model for non-linear finite element analyses of large concrete structures

    Engen, Morten; Hendriks, M.A.N.; Øverli, Jan Arve; Åldstedt, Erik; Beushausen, H.

    2016-01-01

    A fully triaxial material model for concrete was implemented in a commercial finite element code. The only required input parameter was the cylinder compressive strength. The material model was suitable for non-linear finite element analyses of large concrete structures. The importance of including

  5. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Constantin ANGHELACHE

    2011-10-01

    The article presents the fundamental aspects of linear regression as a toolbox that can be used in macroeconomic analyses. It describes the estimation of the parameters, the statistical tests used, and the treatment of homoscedasticity and heteroscedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and the interpretations that can be drawn at this level.
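
    As a rough sketch of the workflow described above (parameter estimation, significance tests and a heteroscedasticity check), the following R example uses simulated data; the variable names gdp_growth and investment are invented for illustration, and the Breusch-Pagan test from the lmtest package stands in for whichever heteroscedasticity diagnostic the article applies.

      # Simple linear regression: estimate parameters, test them, check residuals.
      library(lmtest)                 # provides bptest() for the Breusch-Pagan test
      set.seed(2)
      investment <- runif(60, 10, 40)                           # simulated, % of GDP
      gdp_growth <- 1.5 + 0.12 * investment + rnorm(60, 0, 0.8)

      model <- lm(gdp_growth ~ investment)
      summary(model)   # coefficient estimates, t-tests, R-squared
      confint(model)   # confidence intervals for the parameters
      bptest(model)    # Breusch-Pagan test: H0 = homoscedastic errors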

  6. Results of radiotherapy in craniopharyngiomas analysed by the linear quadratic model

    Guerkaynak, M. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Oezyar, E. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Zorlu, F. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Akyol, F.H. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Lale Atahan, I. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey)

    1994-12-31

    In 23 craniopharyngioma patients treated with limited surgery and external radiotherapy, the results concerning local control were analysed using the linear-quadratic formula. A biologically effective dose (BED) of 55 Gy, calculated with a time factor and an α/β value of 10 Gy, seemed to be adequate for local control. (orig.)
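
    For orientation, the biologically effective dose quoted above comes from the standard linear-quadratic expression BED = nd(1 + d/(α/β)). The short R function below evaluates that expression only; it omits the overall-time correction the authors also apply (which needs repopulation parameters not given in the abstract), and the fractionation numbers in the example are purely illustrative.

      # Linear-quadratic biologically effective dose, without a time factor:
      #   BED = n * d * (1 + d / (alpha/beta))
      bed <- function(n, d, alpha_beta = 10) n * d * (1 + d / alpha_beta)

      # Illustrative example: 30 fractions of 1.8 Gy with alpha/beta = 10 Gy
      bed(n = 30, d = 1.8)   # about 63.7 Gy_10 before any time-factor correction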

  7. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input

  8. Alpins and Thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Giuliano de Oliveira Freitas

    2013-10-01

    PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to one of two phacoemulsification groups: one assigned to receive an AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the post- to preoperative Thibos APVratio and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) × 100. CONCLUSION: The linear regression found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
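
    The reported regression is simple enough to apply directly. The snippet below evaluates %Success = (-APVratio + 1.00) × 100 in R for a few illustrative APV ratios; the input values are made up, not patient data.

      # Alpins percentage of success predicted from the Thibos APV ratio,
      # using the regression equation reported in the abstract.
      percent_success <- function(apv_ratio) (-apv_ratio + 1.00) * 100

      apv_ratio <- c(0.10, 0.25, 0.50)   # illustrative post/preoperative APV ratios
      data.frame(apv_ratio, predicted_success = percent_success(apv_ratio))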

  9. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and the reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic®, Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using O3d software were identical.

  10. Linear Models

    Searle, Shayle R

    2012-01-01

    This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.

  11. linear-quadratic-linear model

    Tanwiwat Jaikuna

    2017-02-01

    Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in the treatment plan was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the difference between the dose volume histogram from CERR and that from the treatment planning system. The equivalent dose in 2 Gy fractions (EQD2) was calculated using the biologically effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with a paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Differences in physical dose were found between CERR and the treatment planning system (TPS): 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum determined by D2cc in Oncentra, and less than 1% in Pinnacle. The difference in the EQD2 between the software calculation and the manual calculation was not significant (0.00%), with p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. Conclusions: The Isobio software is a feasible tool for generating the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
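
    For context, the EQD2 conversion at the core of such software follows from the basic linear-quadratic relation EQD2 = D(d + α/β)/(2 + α/β). The R sketch below implements only that plain LQ conversion; the LQL model used by Isobio adds a linear correction above a transition dose, which is omitted here because its parameters are not given in the abstract, and the example numbers are illustrative.

      # Equivalent dose in 2 Gy fractions from the basic LQ relation:
      #   EQD2 = D * (d + alpha/beta) / (2 + alpha/beta)
      # (the LQL model modifies the high-dose behaviour beyond a transition dose;
      #  that extra term is not included in this sketch)
      eqd2 <- function(D, d, alpha_beta) D * (d + alpha_beta) / (2 + alpha_beta)

      # Illustrative numbers: 4 brachytherapy fractions of 7 Gy, alpha/beta = 10 Gy
      eqd2(D = 28, d = 7, alpha_beta = 10)   # about 39.7 Gy EQD2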

  12. Entropic potential field formed for a linear-motor protein near a filament: Statistical-mechanical analyses using simple models.

    Amano, Ken-Ichi; Yoshidome, Takashi; Iwaki, Mitsuhiro; Suzuki, Makoto; Kinoshita, Masahiro

    2010-07-28

    We report new progress in elucidating the mechanism of the unidirectional movement of a linear-motor protein (e.g., myosin) along a filament (e.g., F-actin). The basic concept emphasized here is that a potential field is entropically formed for the protein on the filament immersed in solvent due to the effect of the translational displacement of solvent molecules. The entropic potential field is strongly dependent on geometric features of the protein and the filament, their overall shapes as well as details of the polyatomic structures. The features and the corresponding field are judiciously adjusted by the binding of adenosine triphosphate (ATP) to the protein, hydrolysis of ATP into adenosine diphosphate (ADP)+Pi, and release of Pi and ADP. As the first step, we propose the following physical picture: The potential field formed along the filament for the protein without the binding of ATP or ADP+Pi to it is largely different from that for the protein with the binding, and the directed movement is realized by repeated switches from one of the fields to the other. To illustrate the picture, we analyze the spatial distribution of the entropic potential between a large solute and a large body using the three-dimensional integral equation theory. The solute is modeled as a large hard sphere. Two model filaments are considered as the body: model 1 is a set of one-dimensionally connected large hard spheres and model 2 is a double helical structure formed by two sets of connected large hard spheres. The solute and the filament are immersed in small hard spheres forming the solvent. The major findings are as follows. The solute is strongly confined within a narrow space in contact with the filament. Within the space there are locations with sharply deep local potential minima along the filament, and the distance between two adjacent locations is equal to the diameter of the large spheres constituting the filament. The potential minima form a ringlike domain in model 1

  13. Group-Level EEG-Processing Pipeline for Flexible Single Trial-Based Analyses Including Linear Mixed Models.

    Frömer, Romy; Maier, Martin; Abdel Rahman, Rasha

    2018-01-01

    Here we present an application of an EEG processing pipeline customizing EEGLAB and FieldTrip functions, specifically optimized to flexibly analyze EEG data based on single trial information. The key component of our approach is to create a comprehensive 3-D EEG data structure including all trials and all participants maintaining the original order of recording. This allows straightforward access to subsets of the data based on any information available in a behavioral data structure matched with the EEG data (experimental conditions, but also performance indicators, such as accuracy or RTs of single trials). In the present study we exploit this structure to compute linear mixed models (LMMs, using lmer in R) including random intercepts and slopes for items. This information can easily be read out from the matched behavioral data, whereas it might not be accessible in traditional ERP approaches without substantial effort. We further provide easily adaptable scripts for performing cluster-based permutation tests (as implemented in FieldTrip), as a more robust alternative to traditional omnibus ANOVAs. Our approach is particularly advantageous for data with parametric within-subject covariates (e.g., performance) and/or multiple complex stimuli (such as words, faces or objects) that vary in features affecting cognitive processes and ERPs (such as word frequency, salience or familiarity), which are sometimes hard to control experimentally or might themselves constitute variables of interest. The present dataset was recorded from 40 participants who performed a visual search task on previously unfamiliar objects, presented either visually intact or blurred. MATLAB as well as R scripts are provided that can be adapted to different datasets.
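
    The item-level random intercepts and slopes mentioned above correspond to a crossed random-effects specification in lme4. The sketch below is generic and self-contained, with invented variable names (amp, cond, subj, item) and simulated single-trial data rather than the authors' EEG recordings.

      # Single-trial amplitudes with crossed random effects for subjects and items.
      library(lme4)
      set.seed(3)
      subj_i <- rep(1:30, each = 40)            # 30 subjects x 40 trials each
      item_i <- rep(1:20, times = 60)           # 20 stimuli, crossed with subjects
      cond   <- rep(c(0, 1), times = 600)       # e.g. intact vs. blurred
      amp <- 2 + 0.5 * cond +
             rnorm(30, 0, 1.0)[subj_i] + rnorm(30, 0, 0.3)[subj_i] * cond +
             rnorm(20, 0, 0.5)[item_i] + rnorm(20, 0, 0.2)[item_i] * cond +
             rnorm(1200, 0, 1)
      d <- data.frame(amp, cond, subj = factor(subj_i), item = factor(item_i))

      fit <- lmer(amp ~ cond + (1 + cond | subj) + (1 + cond | item), data = d)
      summary(fit)   # condition effect plus subject- and item-level variances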

  14. Generic linking of finite element models for non-linear static and global dynamic analyses for aircraft structures

    de Wit, A.J.; Akcay-Perdahcioglu, Didem; van den Brink, W.M.; de Boer, Andries; Rolfes, R.; Jansen, E.L.

    2011-01-01

    Depending on the type of analysis, Finite Element (FE) models of different fidelity are necessary. Creating these models manually is a labor-intensive task. This paper discusses a generic approach for generating FE models of different fidelity from a single reference FE model. These different

  15. Linear models with R

    Faraway, Julian J

    2014-01-01

    A Hands-On Way to Learning Data Analysis. Part of the core of statistics, linear models are used to make predictions and explain the relationship between the response and the predictors. Understanding linear models is crucial to a broader competence in the practice of statistics. Linear Models with R, Second Edition explains how to use linear models in physical science, engineering, social science, and business applications. The book incorporates several improvements that reflect how the world of R has greatly expanded since the publication of the first edition. New to the Second Edition: Reorganiz

  16. Foundations of linear and generalized linear models

    Agresti, Alan

    2015-01-01

    A valuable overview of the most important ideas and results in statistical analysis. Written by a highly experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,

  17. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    Low falling number and discounting of grain when it is downgraded in class are consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer quality bread products. To effectively breed for low LMAA, it is necessary to understand what genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach, and stability analysis was presented using an AMMI bi-plot in R software. All estimated variance components and their proportions to the total phenotypic variance were highly significant for both sets of genotypes, which were validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in SD adapted cultivars (53%) compared to that in the IC (49%). Significant genetic effects and stability analyses showed some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from the IC, and 'Alsen', 'Traverse' and 'Forefront' from the SD cultivars, could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while in contrast, 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.

  18. Dimension of linear models

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in the applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria are widely used ones, while the remaining four are ones derived from the H-principle of mathematical modeling. Many examples from practice show that the criteria derived from the H-principle function better than the known and popular criteria for the number of components. We shall briefly review the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples.

  19. Finite element analyses of a linear-accelerator electron gun

    Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.

    2014-02-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in a computer-aided three-dimensional interactive application for finite element analyses through the ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun has been operating continuously since commissioning without any thermally induced failures in the BEPCII linear accelerator.

  20. Finite element analyses of a linear-accelerator electron gun

    Iqbal, M., E-mail: muniqbal.chep@pu.edu.pk, E-mail: muniqbal@ihep.ac.cn [Centre for High Energy Physics, University of the Punjab, Lahore 45590 (Pakistan); Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Wasy, A. [Department of Mechanical Engineering, Changwon National University, Changwon 641773 (Korea, Republic of); Islam, G. U. [Centre for High Energy Physics, University of the Punjab, Lahore 45590 (Pakistan); Zhou, Z. [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China)

    2014-02-15

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in a computer-aided three-dimensional interactive application for finite element analyses through the ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun has been operating continuously since commissioning without any thermally induced failures in the BEPCII linear accelerator.

  1. Finite element analyses of a linear-accelerator electron gun

    Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.

    2014-01-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in a computer-aided three-dimensional interactive application for finite element analyses through the ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun has been operating continuously since commissioning without any thermally induced failures in the BEPCII linear accelerator.

  2. Area under the curve predictions of dalbavancin, a new lipoglycopeptide agent, using the end of intravenous infusion concentration data point by regression analyses such as linear, log-linear and power models.

    Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally

    2018-02-01

    1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter, and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end-of-intravenous-infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed using published Cmax data by application of the regression equations. The quotient of observed/predicted values rendered the fold difference. The mean absolute error (MAE)/root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. Cmax versus AUCinf exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference), and the models predicted AUCinf with a RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single time point strategy of using Cmax (i.e. at the end of the 30-min infusion) is amenable as a prospective tool for predicting the AUCinf of dalbavancin in patients.
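
    A rough sketch of this single-time-point idea on simulated data: regress AUCinf on Cmax with both a straight-line and a log-log (power-type) fit and compare their predictions. The numbers and the assumed Cmax-AUCinf relationship are invented, not the dalbavancin data set.

      # Predict AUCinf from the end-of-infusion concentration (Cmax) with a
      # linear fit and a log-log fit (simulated, purely illustrative data).
      set.seed(4)
      cmax   <- runif(21, 200, 450)                        # invented Cmax values
      aucinf <- 60 * cmax^1.05 * exp(rnorm(21, 0, 0.05))   # invented relationship

      fit_lin <- lm(aucinf ~ cmax)
      fit_log <- lm(log(aucinf) ~ log(cmax))   # power relation on the original scale

      new <- data.frame(cmax = c(250, 350))
      pred_lin <- predict(fit_lin, new)
      pred_pow <- exp(predict(fit_log, new))   # back-transform to the original scale

      cbind(new, pred_lin, pred_pow, fold_diff = pred_lin / pred_pow)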

  3. Analysing the mechanical performance and growth adaptation of Norway spruce using a non-linear finite-element model and experimental data.

    Lundström, T; Jonas, T; Volkwein, A

    2008-01-01

    Thirteen Norway spruce [Picea abies (L.) Karst.] trees of different size, age, and social status, and grown under varying conditions, were investigated to see how they react to complex natural static loading under summer and winter conditions, and how they have adapted their growth to such combinations of load and tree state. For this purpose a non-linear finite-element model and an extensive experimental data set were used, as well as a new formulation describing the degree to which the exploitation of the bending stress capacity is uniform. The three main findings were: material and geometric non-linearities play important roles when analysing tree deflections and critical loads; the strengths of the stem and the anchorage mutually adapt to the local wind acting on the tree crown in the forest canopy; and the radial stem growth follows a mechanically high-performance path because it adapts to prevailing as well as acute seasonal combinations of the tree state (e.g. frozen or unfrozen stem and anchorage) and load (e.g. wind and vertical and lateral snow pressure). Young trees appeared to adapt to such combinations in a more differentiated way than older trees. In conclusion, the mechanical performance of the Norway spruce studied was mostly very high, indicating that their overall growth had been clearly influenced by the external site- and tree-specific mechanical stress.

  4. Dimension of linear models

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in the applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria are widely used ones, while the remaining four are ones derived from the H-principle of mathematical modeling. Many examples from practice show that the criteria derived from the H-principle function better than the known and popular criteria for the number of components. We shall briefly review the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples.

  5. Non linear viscoelastic models

    Agerkvist, Finn T.

    2011-01-01

    Viscoelastic effects are often present in loudspeaker suspensions; this can be seen in the displacement transfer function, which often shows a frequency-dependent value below the resonance frequency. In this paper nonlinear versions of the standard linear solid model (SLS) are investigated. The simulations show that the nonlinear version of the Maxwell SLS model can result in a time-dependent small-signal stiffness, while the Kelvin-Voigt version does not.

  6. Multicollinearity in hierarchical linear models.

    Yu, Han; Jiang, Shanhe; Land, Kenneth C

    2015-09-01

    This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
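
    A small R illustration of one standard diagnostic discussed above: variance inflation factors computed from auxiliary regressions, VIF_j = 1/(1 - R²_j). The data are simulated with two deliberately correlated predictors; the variable names are invented.

      # Multicollinearity diagnostic: variance inflation factors by hand.
      set.seed(5)
      x1 <- rnorm(100)
      x2 <- 0.9 * x1 + rnorm(100, 0, 0.2)   # strongly correlated with x1
      x3 <- rnorm(100)
      X  <- data.frame(x1, x2, x3)

      vif <- sapply(names(X), function(v) {
        r2 <- summary(lm(reformulate(setdiff(names(X), v), v), data = X))$r.squared
        1 / (1 - r2)                        # VIF_j = 1 / (1 - R^2_j)
      })
      round(vif, 2)   # values above roughly 10 are usually taken as problematic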

  7. A primer on linear models

    Monahan, John F

    2008-01-01

    Preface Examples of the General Linear Model Introduction One-Sample Problem Simple Linear Regression Multiple Regression One-Way ANOVA First Discussion The Two-Way Nested Model Two-Way Crossed Model Analysis of Covariance Autoregression Discussion The Linear Least Squares Problem The Normal Equations The Geometry of Least Squares Reparameterization Gram-Schmidt Orthonormalization Estimability and Least Squares Estimators Assumptions for the Linear Mean Model Confounding, Identifiability, and Estimability Estimability and Least Squares Estimators F

  8. Matrix algebra for linear models

    Gruber, Marvin H J

    2013-01-01

    Matrix methods have evolved from a tool for expressing statistical problems to an indispensable part of the development, understanding, and use of various types of complex statistical analyses. This evolution has made matrix methods a vital part of statistical education. Traditionally, matrix methods are taught in courses on everything from regression analysis to stochastic processes, thus creating a fractured view of the topic. Matrix Algebra for Linear Models offers readers a unique, unified view of matrix analysis theory (where and when necessary), methods, and their applications. Written f

  9. Dynamic Linear Models with R

    Campagnoli, Patrizia; Petris, Giovanni

    2009-01-01

    State space models have gained tremendous popularity in fields as disparate as engineering, economics, genetics and ecology. Introducing general state space models, this book focuses on dynamic linear models, emphasizing their Bayesian analysis. It illustrates the fundamental steps needed to use dynamic linear models in practice, using an accompanying R package.
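
    A minimal sketch of the kind of model the book covers, using the dlm package in R: a local-level (random-walk-plus-noise) model filtered and smoothed over the Nile flow series that ships with base R. The variance values are illustrative rather than estimated here.

      # Local-level dynamic linear model with the dlm package.
      library(dlm)
      mod  <- dlmModPoly(order = 1, dV = 15100, dW = 1470)  # illustrative variances
      filt <- dlmFilter(Nile, mod)     # Kalman filter over the built-in Nile series
      smth <- dlmSmooth(filt)          # backward smoothing pass

      plot(Nile, col = "grey")
      lines(dropFirst(filt$m), col = "blue")   # filtered level estimates
      lines(dropFirst(smth$s), col = "red")    # smoothed level estimates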

  10. Introduction to generalized linear models

    Dobson, Annette J

    2008-01-01

    Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...

  11. LINEAR AND NON-LINEAR ANALYSES OF CABLE-STAYED STEEL FRAME SUBJECTED TO SEISMIC ACTIONS

    Marko Đuran

    2017-01-01

    In this study, linear and non-linear dynamic analyses of a cable-stayed steel frame subjected to seismic actions are performed. The analyzed cable-stayed frame is the main supporting structure of a wide-span sports hall. Since the complex dynamic behavior of cable-stayed structures results in significant geometric nonlinearity, a nonlinear time history analysis is conducted. As a reference, an analysis using the European standard approach, the so-called linear modal response spectrum method, is also performed. The analyses are conducted for different seismic actions, considering the dependence on the response spectra for various ground types and the corresponding artificially generated accelerograms. Despite fundamental differences between the two analyses, the results indicate that the modal response spectrum analysis is surprisingly consistent with the internal force and bending moment distributions of the nonlinear time history analysis. However, significantly smaller values of bending moments, internal forces, and displacements are obtained with the response spectrum analysis.

  12. (Non) linear regression modelling

    Cizek, P.; Gentle, J.E.; Hardle, W.K.; Mori, Y.

    2012-01-01

    We will study causal relationships of a known form between random variables. Given a model, we distinguish one or more dependent (endogenous) variables Y = (Y1,…,Yl), l ∈ N, which are explained by a model, and independent (exogenous, explanatory) variables X = (X1,…,Xp),p ∈ N, which explain or

  13. Explorative methods in linear models

    Høskuldsson, Agnar

    2004-01-01

    The author has developed the H-method of mathematical modeling that builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.
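
    As a concrete example of the second application mentioned above, here is a small ridge regression sketch in R using MASS::lm.ridge on simulated, collinear data; the variable names and the penalty grid are invented, and this is generic ridge regression rather than the author's H-method graphics.

      # Ridge regression over a grid of shrinkage parameters (MASS::lm.ridge).
      library(MASS)
      set.seed(6)
      x1 <- rnorm(80); x2 <- 0.95 * x1 + rnorm(80, 0, 0.1); x3 <- rnorm(80)
      y  <- 2 + x1 + x2 - x3 + rnorm(80)
      d  <- data.frame(y, x1, x2, x3)

      fit <- lm.ridge(y ~ x1 + x2 + x3, data = d, lambda = seq(0, 10, by = 0.1))
      select(fit)                  # GCV / HKB / LW suggestions for the ridge constant
      matplot(fit$lambda, coef(fit)[, -1], type = "l",
              xlab = "lambda", ylab = "coefficient")   # ridge trace plot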

  14. Generalized, Linear, and Mixed Models

    McCulloch, Charles E; Neuhaus, John M

    2011-01-01

    An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m

  15. Linear mixed models for longitudinal data

    Molenberghs, Geert

    2000-01-01

    This paperback edition is a reprint of the 2000 edition. This book provides a comprehensive treatment of linear mixed models for continuous longitudinal data. Next to model formulation, this edition puts major emphasis on exploratory data analysis for all aspects of the model, such as the marginal model, subject-specific profiles, and residual covariance structure. Further, model diagnostics and missing data receive extensive treatment. Sensitivity analysis for incomplete data is given a prominent place. Several variations to the conventional linear mixed model are discussed (a heterogeneity model, conditional linear mixed models). This book will be of interest to applied statisticians and biomedical researchers in industry, public health organizations, contract research organizations, and academia. The book is explanatory rather than mathematically rigorous. Most analyses were done with the MIXED procedure of the SAS software package, and many of its features are clearly elucidated. However, some other commerc...

  16. Sparse Linear Identifiable Multivariate Modeling

    Henao, Ricardo; Winther, Ole

    2011-01-01

    In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully ... and bench-marked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable ...

  17. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  18. Parameterized Linear Longitudinal Airship Model

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics

  19. Non-linear Loudspeaker Unit Modelling

    Pedersen, Bo Rohde; Agerkvist, Finn T.

    2008-01-01

    Simulations of a 6½-inch loudspeaker unit are performed and compared with a displacement measurement. The non-linear loudspeaker model is based on the major nonlinear functions and expanded with time-varying suspension behaviour and flux modulation. The results are presented with FFT plots of three frequencies and different displacement levels. The model errors are discussed and analysed, including a test with a loudspeaker unit where the diaphragm is removed.

  20. Decomposable log-linear models

    Eriksen, Poul Svante

    The present paper considers discrete probability models with exact computational properties. In relation to contingency tables this means closed form expressions of the maximum likelihood estimate and its distribution. The model class includes what is known as decomposable graphical models, which can be characterized by a structured set of conditional independencies between some variables given some other variables. We term the new model class decomposable log-linear models, which is illustrated to be a much richer class than decomposable graphical models. It covers a wide range of non-hierarchical models, models with structural zeroes, models described by quasi independence and models for level merging. Also, they have a very natural interpretation as they may be formulated by a structured set of conditional independencies between two events given some other event. In relation to contingency ...
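
    A hierarchical log-linear model for a contingency table is just a Poisson GLM on the cell counts, so the basic machinery can be sketched in a few lines of R using a built-in data set. This is a generic illustration of log-linear modelling, not the decomposable-model computations developed in the paper.

      # Log-linear analysis of a contingency table as a Poisson GLM.
      tab <- as.data.frame(UCBAdmissions)   # built-in Admit x Gender x Dept counts

      # Mutual independence vs. conditional independence of Admit and Gender given Dept
      indep <- glm(Freq ~ Admit + Gender + Dept, family = poisson, data = tab)
      cond  <- glm(Freq ~ Admit * Dept + Gender * Dept, family = poisson, data = tab)

      anova(indep, cond, test = "Chisq")   # likelihood-ratio comparison of the models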

  1. Linear and Generalized Linear Mixed Models and Their Applications

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested

  2. Structural Dynamic Analyses And Test Predictions For Spacecraft Structures With Non-Linearities

    Vergniaud, Jean-Baptiste; Soula, Laurent; Newerla, Alfred

    2012-07-01

    The overall objective of the mechanical development and verification process is to ensure that the spacecraft structure is able to sustain the mechanical environments encountered during launch. In general the spacecraft structures are a priori assumed to behave linearly, i.e. the responses to a static load or dynamic excitation, respectively, will increase or decrease proportionally to the amplitude of the load or excitation induced. However, past experience has shown that various non-linearities might exist in spacecraft structures and the consequences of their dynamic effects can significantly affect the development and verification process. Current processes are mainly adapted to linear spacecraft structure behaviour. No clear rules exist for dealing with major structural non-linearities. They are handled outside the process by individual analysis and margin policy, and by analyses after tests to justify the CLA coverage. Non-linearities can primarily affect the current spacecraft development and verification process in two respects. Prediction of flight loads by launcher/satellite coupled loads analyses (CLA): only linear satellite models are delivered for performing CLA, and no well-established rules exist for how to properly linearize a model when non-linearities are present. The potential impact of the linearization on the results of the CLA has not yet been properly analyzed. There are thus difficulties in assessing that CLA results will cover actual flight levels. Management of satellite verification tests: the CLA results generated with a linear satellite FEM are assumed to be flight representative. If internal non-linearities are present in the tested satellite, then there might be difficulties in determining which input level must be applied to cover satellite internal loads. The non-linear behaviour can also disturb the shaker control, putting the satellite at risk by potentially imposing too high levels. This paper presents the results of a test campaign performed in

  3. Non-linear finite element analyses applicable for the design of large reinforced concrete structures

    Engen, M; Hendriks, M.A.N.; Øverli, Jan Arve; Åldstedt, Erik

    2017-01-01

    In order to make non-linear finite element analyses applicable during assessments of the ultimate load capacity or the structural reliability of large reinforced concrete structures, there is a need for an efficient solution strategy with a low modelling uncertainty. A solution strategy comprises

  4. Linear and nonlinear models in genetic analyses of lamb survival in the Santa Inês hair sheep breed

    W.H. Sousa

    1999-06-01

    Records of survival from birth to weaning of 3,846 lambs of the Santa Inês hair sheep breed were analyzed with linear and non-linear (threshold) sire models to estimate variance components and heritability (h²). The models used for survival, treated as a trait of the lamb, included the fixed effects of sex of the lamb, the combination of type of birth and rearing of the lamb, and age of the ewe at lambing, birth weight of the lamb as a covariate, and random effects of sire, herd-year-season class and residual. Variance components were estimated by restricted maximum likelihood (REML) for the linear model and by an approximation of marginal maximum likelihood (MML) for the threshold model, using the CMMAT2 program. The heritability estimate was 0.29 with the threshold model and 0.14 with the linear model. The Spearman rank correlation between sire transmitting abilities based on the two models was 0.96. The h² estimates obtained indicate the possibility of achieving genetic gain for survival through selection.

  5. Modelling Loudspeaker Non-Linearities

    Agerkvist, Finn T.

    2007-01-01

    This paper investigates different techniques for modelling the non-linear parameters of the electrodynamic loudspeaker. The methods are tested not only for their accuracy within the range of the original data, but also for their ability to work reasonably outside that range. It is demonstrated that polynomial expansions are rather poor at this, whereas an inverse polynomial expansion or localized fitting functions such as the Gaussian are better suited for modelling the Bl-factor and compliance. For the inductance the sigmoid function is shown to give very good results. Finally the time varying ...

  6. Multivariate covariance generalized linear models

    Bonat, W. H.; Jørgensen, Bent

    2016-01-01

    We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions ... The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated ...

  7. Linear factor copula models and their properties

    Krupskii, Pavel; Genton, Marc G.

    2018-01-01

    We consider a special case of factor copula models with additive common factors and independent components. These models are flexible and parsimonious with O(d) parameters where d is the dimension. The linear structure allows one to obtain closed form expressions for some copulas and their extreme‐value limits. These copulas can be used to model data with strong tail dependencies, such as extreme data. We study the dependence properties of these linear factor copula models and derive the corresponding limiting extreme‐value copulas with a factor structure. We show how parameter estimates can be obtained for these copulas and apply one of these copulas to analyse a financial data set.

  8. Linear factor copula models and their properties

    Krupskii, Pavel

    2018-04-25

    We consider a special case of factor copula models with additive common factors and independent components. These models are flexible and parsimonious with O(d) parameters where d is the dimension. The linear structure allows one to obtain closed form expressions for some copulas and their extreme‐value limits. These copulas can be used to model data with strong tail dependencies, such as extreme data. We study the dependence properties of these linear factor copula models and derive the corresponding limiting extreme‐value copulas with a factor structure. We show how parameter estimates can be obtained for these copulas and apply one of these copulas to analyse a financial data set.

  9. Nonabelian Gauged Linear Sigma Model

    Yongbin RUAN

    2017-01-01

    The gauged linear sigma model (GLSM for short) is a 2d quantum field theory introduced by Witten twenty years ago. Since then, it has been investigated extensively in physics by Hori and others. Recently, an algebro-geometric theory (for both abelian and nonabelian GLSMs) was developed by the author and his collaborators so that he can start to rigorously compute its invariants and check them against physical predictions. The abelian GLSM is relatively better understood and is the focus of current mathematical investigation. In this article, the author would like to look over the horizon and consider the nonabelian GLSM. The nonabelian case possesses some new features unavailable in the abelian GLSM. To aid future mathematical development, the author surveys some of the key problems inspired by physics in the nonabelian GLSM.

  10. Modeling patterns in data using linear and related models

    Engelhardt, M.E.

    1996-06-01

    This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models

  11. Genetic analyses of linear profiling data on 3-year-old Swedish Warmblood horses.

    Viklund, Å; Eriksson, S

    2018-02-01

    A linear profiling protocol was introduced in 2013 at tests for 3-year-old Swedish Warmblood horses. In this protocol, traits are subjectively described on a nine-point linear scale from one biological extreme to the other. This complements the traditional scoring where horses are evaluated in relation to the breeding objective. This study aimed to investigate the suitability of the linear information for genetic evaluation. Data on 22 conformation traits, 17 movement traits, 14 jumping traits and one temperament trait from 3,410 horses tested between 2013 and 2016 were analysed using an animal model. For conformation traits, the heritabilities ranged from 0.10 for description of the hock joint from behind to 0.52 for shape of the neck. For movement traits, the highest heritability (0.54) was estimated for elasticity in trot and the lowest (0.08) for energy in walk. The heritabilities for jumping traits ranged from 0.05 for the ability to focus on the assignment to 0.57 for scope. Genetic correlations between linear traits and the corresponding traditionally scored traits were strong (-0.37 to, in many cases, below -0.9). The results show that the linear information is suitable for genetic evaluation and can be a useful tool for breeders. © 2018 Blackwell Verlag GmbH.

  12. Multivariate generalized linear mixed models using R

    Berridge, Damon Mark

    2011-01-01

    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...

  13. Nonlinear Modeling by Assembling Piecewise Linear Models

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve the nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model is robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves nonlinearity of the problems considered in a rather simple and accurate manner.
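
    A toy one-dimensional version of the assembly idea can make the construction concrete: local first-order (linear) models at a few sampling points are blended with Gaussian radial-basis weights. Everything below (the test function, sampling points and kernel width) is invented for illustration and is not the authors' aerodynamic model.

      # Assemble piecewise-linear local models with Gaussian RBF weights (toy example).
      f <- function(x) sin(2 * x) + 0.3 * x                 # stand-in "full" response
      centers <- seq(0, 6, by = 1)                          # sampling states
      value   <- f(centers)                                 # local values
      slope   <- sapply(centers, function(x0) (f(x0 + 1e-4) - f(x0 - 1e-4)) / 2e-4)

      predict_blend <- function(x, width = 0.8) {
        w <- exp(-((x - centers) / width)^2)                # Gaussian RBF weights
        w <- w / sum(w)
        sum(w * (value + slope * (x - centers)))            # weighted local linear models
      }

      xs <- seq(0, 6, by = 0.05)
      plot(xs, f(xs), type = "l", ylab = "response")
      lines(xs, sapply(xs, predict_blend), col = "red", lty = 2)   # assembled model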

  14. Linear Logistic Test Modeling with R

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…
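
    A brief sketch with the eRm package mentioned above: fit an ordinary Rasch model to simulated binary responses, which is the baseline that the LLTM then constrains through a design matrix of item-property effects. The simulated difficulties are invented, and the commented LLTM call assumes a user-supplied design matrix W (hypothetical here).

      # Rasch model on simulated binary item responses with the eRm package.
      library(eRm)
      set.seed(7)
      resp <- sim.rasch(persons = rnorm(300), items = seq(-1.5, 1.5, length.out = 8))

      rasch_fit <- RM(resp)     # ordinary Rasch model (unconstrained item parameters)
      summary(rasch_fit)
      # lltm_fit <- LLTM(resp, W = my_design_matrix)   # LLTM with a hypothetical
      #                                                # design matrix of item properties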

  15. Aspects of general linear modelling of migration.

    Congdon, P

    1992-01-01

    "This paper investigates the application of general linear modelling principles to analysing migration flows between areas. Particular attention is paid to specifying the form of the regression and error components, and the nature of departures from Poisson randomness. Extensions to take account of spatial and temporal correlation are discussed as well as constrained estimation. The issue of specification bears on the testing of migration theories, and assessing the role migration plays in job and housing markets: the direction and significance of the effects of economic variates on migration depends on the specification of the statistical model. The application is in the context of migration in London and South East England in the 1970s and 1980s." excerpt

  16. Core seismic behaviour: linear and non-linear models

    Bernard, M.; Van Dorsselaere, M.; Gauvain, M.; Jenapierre-Gantenbein, M.

    1981-08-01

    The usual methodology for core seismic behaviour analysis leads to a twofold, complementary approach: defining a core model to be included in the reactor-block seismic response analysis, simple enough but representative of the basic movements (diagrid or slab), and defining a finer core model, with basic data taken from the first model. This paper presents the history of the different models of both kinds. The inert mass model (IMM) yielded a first rough diagrid movement. The direct linear model (DLM), without shocks and with sodium as an added mass, led to two variants: DLM 1 with independent movements of the fuel and radial blanket subassemblies, and DLM 2 with a combined core movement. The non-linear model (NLM) ''CORALIE'' uses the same basic modelling (finite element beams) but accounts for shocks. It studies the response of a diameter on flats and takes into account the fluid coupling and the wrapper tube flexibility at the pad level. Damping consists of a modal part of 2% and a part due to shocks. Finally, ''CORALIE'' yields the time history of the displacements and forces on the supports, but damping (probably greater than 2%) and fluid-structure interaction still need to be refined. The validation experiments were performed on a full-scale RAPSODIE core mock-up, at 1/3 similitude with respect to SPX 1. The equivalent linear model (ELM) was developed for the SPX 1 reactor-block response analysis at a specified seismic level (SB or SM). It is composed of several oscillators fixed to the diagrid and yields the same maximum displacements and forces as the NLM. The SPX 1 core seismic analysis, with a diagrid input spectrum corresponding to a 0.1 g group acceleration, has been carried out with these models: some aspects of these calculations are presented here.

  17. Composite Linear Models | Division of Cancer Prevention

    By Stuart G. Baker The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty

  18. Linear latent variable models: the lava-package

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben

    2013-01-01

    An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition an extensive simulation...

  19. Assessment of ethylene vinyl-acetate copolymer samples exposed to γ-rays via linearity analyses

    Oliveira, Lucas N. de; Nascimento, Eriberto O. do; Schimidt, Fernando [Instituto Federal de Educação, Ciência e Tecnologia de Goiás (IFG), Goiânia, GO (Brazil); Antonio, Patrícia L.; Caldas, Linda V.E., E-mail: lcaldas@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    Materials with the potential to become dosimeters are of interest in radiation physics. In this research, such materials were analyzed and compared in relation to their linearity ranges. Samples of ethylene vinyl-acetate copolymer (EVA) were irradiated with doses from 10 Gy to 10 kGy using a ⁶⁰Co Gamma-Cell 220 system and evaluated with the FTIR technique. The linearity analyses were applied through two methodologies, searching for linear regions in the response. The results show that both analyses indicate linear regions within defined dose intervals. EVA radiation detectors can therefore be useful for radiation dosimetry at intermediate and high doses. (author)
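
    One plausible way to search for linear regions in a dose-response curve is sketched below with hypothetical data; the dose values, response curve and R² threshold are assumptions for illustration, not the authors' methodology.

```python
# Sketch: fit straight lines over sliding dose windows and keep those with high R^2.
import numpy as np

doses = np.array([10, 50, 100, 500, 1000, 2500, 5000, 7500, 10000], float)  # Gy
response = 0.002 * doses + np.where(doses > 5000, 0.3 * (doses - 5000) ** 0.5, 0.0)

def linear_windows(x, y, size=4, r2_min=0.99):
    """Return (start, stop, slope, r2) for dose windows that are close to linear."""
    out = []
    for i in range(len(x) - size + 1):
        xs, ys = x[i:i + size], y[i:i + size]
        slope, intercept = np.polyfit(xs, ys, 1)
        pred = slope * xs + intercept
        r2 = 1.0 - np.sum((ys - pred) ** 2) / np.sum((ys - ys.mean()) ** 2)
        if r2 >= r2_min:
            out.append((x[i], x[i + size - 1], slope, r2))
    return out

for lo, hi, slope, r2 in linear_windows(doses, response):
    print(f"linear from {lo:.0f} to {hi:.0f} Gy, slope={slope:.4g}, R^2={r2:.4f}")
```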

  20. Actuarial statistics with generalized linear mixed models

    Antonio, K.; Beirlant, J.

    2007-01-01

    Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics

  1. Graphical models for genetic analyses

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating.

  2. How to deal with continuous and dichotomic outcomes in epidemiological research: linear and logistic regression analyses

    Tripepi, Giovanni; Jager, Kitty J.; Stel, Vianda S.; Dekker, Friedo W.; Zoccali, Carmine

    2011-01-01

    Because of some limitations of stratification methods, epidemiologists frequently use multiple linear and logistic regression analyses to address specific epidemiological questions. If the dependent variable is a continuous one (for example, systolic pressure and serum creatinine), the researcher
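
    The distinction can be illustrated with a small simulated example: ordinary linear regression for a continuous outcome and logistic regression for a dichotomous one. The variable names, effect sizes and use of statsmodels are illustrative assumptions rather than the paper's examples.

```python
# Simulated example: linear regression (continuous outcome) vs. logistic
# regression (dichotomous outcome).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"age": rng.uniform(30, 80, n), "bmi": rng.normal(27, 4, n)})
df["sbp"] = 100 + 0.5 * df["age"] + 1.2 * df["bmi"] + rng.normal(0, 10, n)
p_event = 1.0 / (1.0 + np.exp(-(-8.0 + 0.06 * df["age"] + 0.1 * df["bmi"])))
df["event"] = rng.binomial(1, p_event)

linear_fit = smf.ols("sbp ~ age + bmi", data=df).fit()              # continuous outcome
logistic_fit = smf.logit("event ~ age + bmi", data=df).fit(disp=0)  # binary outcome
print(linear_fit.params)              # regression coefficients
print(np.exp(logistic_fit.params))    # odds ratios
```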

  3. Comparing linear probability model coefficients across groups

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.

  4. An extensible analysable system model

    Probst, Christian W.; Hansen, Rene Rydhof

    2008-01-01

    , this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but still are far from the rigid techniques available in security research. Many currently available approaches to assurance of critical infrastructure security...

  5. Spaghetti Bridges: Modeling Linear Relationships

    Kroon, Cindy D.

    2016-01-01

    Mathematics and science are natural partners. One of many examples of this partnership occurs when scientific observations are made, thus providing data that can be used for mathematical modeling. Developing mathematical relationships elucidates such scientific principles. This activity describes a data-collection activity in which students employ…

  6. Non-linear finite element modeling

    Mikkelsen, Lars Pilgaard

    The note is written for courses in "Non-linear finite element method". The note has been used by the author teaching non-linear finite element modeling at Civil Engineering at Aalborg University, Computational Mechanics at Aalborg University Esbjerg, Structural Engineering at the University...

  7. Correlations and Non-Linear Probability Models

    Breen, Richard; Holm, Anders; Karlson, Kristian Bernt

    2014-01-01

    Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between the dependent variable of the latent variable model and its predictor variables. We show how this correlation can be derived from the parameters of non-linear probability models, develop tests for the statistical significance of the derived correlation, and illustrate its usefulness in two applications. Under certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models.

  8. Extended Linear Models with Gaussian Priors

    Quinonero, Joaquin

    2002-01-01

    In extended linear models the input space is projected onto a feature space by means of an arbitrary non-linear transformation. A linear model is then applied to the feature space to construct the model output. The dimension of the feature space can be very large, or even infinite, giving the model a great deal of flexibility. Support Vector Machines (SVMs) and Gaussian processes are two examples of such models. In this technical report I present a model in which the dimension of the feature space remains finite, and where a Bayesian approach is used to train the model with Gaussian priors on the parameters. The Relevance Vector Machine, introduced by Tipping, is a particular case of such a model. I give the detailed derivations of the expectation-maximisation (EM) algorithm used in the training. These derivations are not found in the literature, and might be helpful for newcomers.
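
    A minimal sketch of this idea, a finite feature space with a Gaussian prior on the weights and a closed-form Gaussian posterior, is shown below; the basis functions, hyperparameters alpha and beta, and the toy data are assumptions made for illustration and do not reproduce the report's Relevance Vector Machine derivations.

```python
# Bayesian linear regression in a finite feature space with a Gaussian prior.
import numpy as np

def features(x, centers, width=0.3):
    """Gaussian basis functions plus a bias term (a finite feature space)."""
    phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
    return np.hstack([np.ones((len(x), 1)), phi])

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 40))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)

centers = np.linspace(0, 1, 9)
Phi = features(x, centers)
alpha, beta = 1.0, 100.0   # prior precision and noise precision (assumed values)

# Gaussian posterior over the weights: S = (alpha*I + beta*Phi^T Phi)^-1, m = beta*S*Phi^T y
S = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
m = beta * S @ Phi.T @ y

x_new = np.linspace(0, 1, 5)
Phi_new = features(x_new, centers)
mean = Phi_new @ m
var = 1.0 / beta + np.sum(Phi_new @ S * Phi_new, axis=1)  # predictive variance
print(np.c_[x_new, mean, np.sqrt(var)])
```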

  9. Linear mixed models in sensometrics

    Kuznetsova, Alexandra

    Today's companies and researchers gather large amounts of data of different kinds. In consumer studies the objective is the collection of data to better understand consumer acceptance of products. In such studies a number of persons (generally not trained) are selected in order to score products... Standard statistical software packages can be used for some of the purposes... The two open-source R packages lmerTest and SensMixed implement and support the methodological developments in the research papers as well as the ANOVA modelling part of the Consumer... An open-source software tool, ConsumerCheck, was developed in this project and is now available for everyone. This will represent a major step forward as concerns this important problem in modern consumer-driven product development, supporting the quality of decision making in Danish as well as international food companies and other companies using the same methods.

  10. Linear causal modeling with structural equations

    Mulaik, Stanley A

    2009-01-01

    Emphasizing causation as a functional relationship between variables that describe objects, Linear Causal Modeling with Structural Equations integrates a general philosophical theory of causation with structural equation modeling (SEM) that concerns the special case of linear causal relations. In addition to describing how the functional relation concept may be generalized to treat probabilistic causation, the book reviews historical treatments of causation and explores recent developments in experimental psychology on studies of the perception of causation. It looks at how to perceive causal

  11. Statistical Tests for Mixed Linear Models

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  12. Matrix Tricks for Linear Statistical Models

    Puntanen, Simo; Styan, George PH

    2011-01-01

    In teaching linear statistical models to first-year graduate students or to final-year undergraduate students there is no way to proceed smoothly without matrices and related concepts of linear algebra; their use is really essential. Our experience is that making some particular matrix tricks very familiar to students can substantially increase their insight into linear statistical models (and also multivariate statistical analysis). In matrix algebra, there are handy, sometimes even very simple "tricks" which simplify and clarify the treatment of a problem - both for the student and

  13. Linear Analyses of Magnetohydrodynamic Richtmyer-Meshkov Instability in Cylindrical Geometry

    Bakhsh, Abeer

    2018-05-13

    We investigate the Richtmyer-Meshkov instability (RMI) that occurs when an incident shock impulsively accelerates the interface between two different fluids. RMI is important in many technological applications such as Inertial Confinement Fusion (ICF) and astrophysical phenomena such as supernovae. We consider RMI in the presence of a magnetic field in converging geometry through both simulations and analytical means in the framework of ideal magnetohydrodynamics (MHD). In this thesis, we perform linear stability analyses via simulations in cylindrical geometry, which is of relevance to ICF. In converging geometry, RMI is usually followed by the Rayleigh-Taylor instability (RTI). We show that the presence of a magnetic field suppresses the instabilities. We study the influence of the strength of the magnetic field, perturbation wavenumbers and other relevant parameters on the evolution of the RM and RT instabilities. First, we perform linear stability simulations for a single interface between two different fluids in which the magnetic field is normal to the direction of the average motion of the density interface. The suppression of the instabilities is most evident for large wavenumbers and relatively strong magnetic field strengths. The mechanism of suppression is the transport of vorticity away from the density interface by two Alfvén fronts. Second, we examine the case of an azimuthal magnetic field at the density interface. The most evident suppression of the instability at the interface is for large wavenumbers and relatively strong magnetic field strengths. After the shock interacts with the interface, the emerging vorticity breaks up into waves traveling parallel and anti-parallel to the magnetic field. As these waves propagate with alternating phase, their interference causes the perturbation growth rate of the interface to oscillate in time. Finally, we propose incompressible models for MHD RMI in the presence of normal or azimuthal magnetic

  14. Modeling digital switching circuits with linear algebra

    Thornton, Mitchell A

    2014-01-01

    Modeling Digital Switching Circuits with Linear Algebra describes an approach for modeling digital information and circuitry that is an alternative to Boolean algebra. While the Boolean algebraic model has been wildly successful and is responsible for many advances in modern information technology, the approach described in this book offers new insight and different ways of solving problems. Modeling the bit as a vector instead of a scalar value in the set {0, 1} allows digital circuits to be characterized with transfer functions in the form of a linear transformation matrix. The use of transf

  15. Updating Linear Schedules with Lowest Cost: a Linear Programming Model

    Biruk, Sławomir; Jaśkowski, Piotr; Czarnigowska, Agata

    2017-10-01

    Many civil engineering projects involve sets of tasks repeated in a predefined sequence in a number of work areas along a particular route. A useful graphical representation of schedules of such projects is the time-distance diagram, which clearly shows what process is conducted at a particular point in time and at a particular location. With repetitive tasks, the quality of project performance is conditioned by the ability of the planner to optimize workflow by synchronizing the works and resources, which usually means that resources are planned to be continuously utilized. However, construction processes are prone to risks, and a fully synchronized schedule may become invalid if a disturbance (bad weather, machine failure etc.) affects even one task. In such cases, works need to be rescheduled, and another optimal schedule should be built for the changed circumstances. This typically means that, to meet the fixed completion date, durations of operations have to be reduced. A number of measures are possible to achieve such reduction: working overtime, employing more resources or relocating resources from less to more critical tasks, but they all come at a considerable cost and affect the whole project. The paper investigates the problem of selecting the measures that reduce durations of tasks of a linear project so that the cost of these measures is kept to the minimum, and proposes an algorithm that could be applied to find optimal solutions as the need to reschedule arises. Considering that civil engineering projects, such as road building, usually involve fewer process types than building construction projects, the complexity of the scheduling problem is lower, and precise optimization algorithms can be applied. Therefore, the authors put forward a linear programming model of the problem and illustrate its principle of operation with an example.
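
    A toy version of such a linear program, assuming a simple serial chain of tasks with hypothetical durations, crashing limits and unit costs (not the authors' full model), can be written with scipy's linprog:

```python
# Choose how much to shorten each task so the chain meets a deadline at minimum cost.
from scipy.optimize import linprog

normal_dur = [10, 8, 12, 6]        # days, tasks executed in sequence (assumed)
max_cut    = [3, 2, 4, 1]          # maximum reduction per task (assumed)
unit_cost  = [400, 250, 300, 500]  # cost per day of reduction (assumed)
deadline   = 30                    # required total duration

# Decision variables x_i = days cut from task i; minimize sum(cost_i * x_i)
# subject to sum(normal_dur) - sum(x) <= deadline and 0 <= x_i <= max_cut_i.
A_ub = [[-1.0] * len(normal_dur)]
b_ub = [deadline - sum(normal_dur)]
res = linprog(c=unit_cost, A_ub=A_ub, b_ub=b_ub,
              bounds=list(zip([0] * len(max_cut), max_cut)), method="highs")
print("days cut per task:", res.x, "extra cost:", res.fun)
```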

  16. Linear control theory for gene network modeling.

    Shin, Yong-Jun; Bleris, Leonidas

    2010-09-16

    Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.
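
    The flavour of such an analysis can be sketched for a hypothetical two-gene cascade modelled as a linear state-space system; the rate constants and the use of scipy.signal are illustrative assumptions, not the paper's own code.

```python
# A two-gene cascade as a linear state-space model: step response and transfer function.
import numpy as np
from scipy import signal

k1, k2 = 1.0, 0.8            # production rates (assumed, 1/h)
d1, d2 = 0.5, 0.3            # degradation rates (assumed, 1/h)
A = np.array([[-d1, 0.0],
              [k2, -d2]])    # gene 1 drives gene 2, both decay linearly
B = np.array([[k1], [0.0]])  # the input (inducer) acts on the first gene
C = np.array([[0.0, 1.0]])   # observe the downstream gene product
D = np.array([[0.0]])

sys = signal.StateSpace(A, B, C, D)
t, y = signal.step(sys)                 # time-domain step response
num, den = signal.ss2tf(A, B, C, D)     # transfer function (frequency domain)
print("final value of step response:", y[-1])
print("transfer function denominator:", den)
```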

  17. LINEAR MODEL FOR NON ISOSCELES ABSORBERS.

    BERG,J.S.

    2003-05-12

    Previous analyses have assumed that wedge absorbers are triangularly shaped with equal angles for the two faces. In this case, to linear order, the energy loss depends only on the position in the direction of the face tilt, and is independent of the incoming angle. One can instead construct an absorber with entrance and exit faces facing rather general directions. In this case, the energy loss can depend on both the position and the angle of the particle in question. This paper demonstrates that and computes the effect to linear order.

  18. A linear model of ductile plastic damage

    Lemaitre, J.

    1983-01-01

    A three-dimensional model of isotropic ductile plastic damage, based on a continuum damage variable, on the effective stress concept and on thermodynamics, is derived. As shown by experiments on several metals and alloys, the model, integrated in the case of proportional loading, is linear with respect to the accumulated plastic strain and shows a large influence of stress triaxiality. [fr]

  19. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, and compares the data analytic results from three regression...

  20. Ground Motion Models for Future Linear Colliders

    Seryi, Andrei

    2000-01-01

    Optimization of the parameters of a future linear collider requires comprehensive models of ground motion. Both general models of ground motion and specific models of the particular site and local conditions are essential. Existing models are not completely adequate, either because they are too general, or because they omit important peculiarities of ground motion. The model considered in this paper is based on recent ground motion measurements performed at SLAC and at other accelerator laboratories, as well as on historical data. The issues to be studied for the models to become more predictive are also discussed

  1. Modelling female fertility traits in beef cattle using linear and non-linear models.

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

    Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we fitted linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models to three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed better fit than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² and r lower for the linear models; h² > 0.23 and r > 0.24 for the non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). In consequence, the endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.

  2. Modelling point patterns with linear structures

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    2009-01-01

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line shaped structures. We consider simulations of this model and compare with real data.

  3. Modelling point patterns with linear structures

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line shaped structures. We consider simulations of this model and compare with real data.

  4. Optimal designs for linear mixture models

    Mendieta, E.J.; Linssen, H.N.; Doornbos, R.

    1975-01-01

    In a recent paper Snee and Marquardt [8] considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of this

  5. Optimal designs for linear mixture models

    Mendieta, E.J.; Linssen, H.N.; Doornbos, R.

    1975-01-01

    In a recent paper Snee and Marquardt (1974) considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of

  6. Diagnostics for Linear Models With Functional Responses

    Xu, Hongquan; Shen, Qing

    2005-01-01

    Linear models where the response is a function and the predictors are vectors are useful in analyzing data from designed experiments and other situations with functional observations. Residual analysis and diagnostics are considered for such models. Studentized residuals are defined and their properties are studied. Chi-square quantile-quantile plots are proposed to check the assumption of Gaussian error process and outliers. Jackknife residuals and an associated test are proposed to det...

  7. Linear control theory for gene network modeling.

    Yong-Jun Shin

    Full Text Available Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.

  8. [From clinical judgment to linear regression model].

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we tend to assume that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, the regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bX, where "a" is the intercept or regression constant and is equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs in "Y" when the variable "X" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
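
    A small numeric illustration of the fitted line Y = a + bX and the coefficient of determination, using made-up data:

```python
# Ordinary least squares fit of Y = a + bX on a toy data set.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # e.g. dose of a drug
y = np.array([2.1, 4.3, 6.2, 7.9, 10.1])     # e.g. observed response

b, a = np.polyfit(x, y, 1)                   # slope b and intercept a
pred = a + b * x
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"Y = {a:.2f} + {b:.2f}X, R^2 = {r2:.3f}")
# b is the change in Y per one-unit change in X; R^2 is the coefficient of determination.
```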

  9. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)} where G is a known function, b is an unknown parameter vector, and m is an unknown function.The paper introduces a test statistic which allows to decide between a parametric and a semiparametric model: (i) m is linear, i.e.

  10. Modeling of Volatility with Non-linear Time Series Model

    Kim Song Yon; Kim Mun Chol

    2013-01-01

    In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two non-linear time series models are combined to form a TAR (Threshold Auto-Regressive) model with an AARCH (Asymmetric Auto-Regressive Conditional Heteroskedasticity) error term, and the estimation of its parameters is studied.
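
    The threshold mechanism can be illustrated with a toy TAR(1) simulation; the threshold, regime coefficients and the crude regime-wise estimator below are assumptions made for illustration, and the paper's TAR-AARCH estimation is more involved.

```python
# Toy threshold autoregressive process: the AR coefficient switches with the
# previous value relative to a threshold.
import numpy as np

rng = np.random.default_rng(3)
n, threshold = 1000, 0.0
phi_low, phi_high = 0.8, -0.4     # regime-dependent AR(1) coefficients (assumed)
y = np.zeros(n)
for t in range(1, n):
    phi = phi_low if y[t - 1] <= threshold else phi_high
    y[t] = phi * y[t - 1] + rng.normal(0.0, 1.0)

# Crude check that the two regimes really have different persistence.
low = y[:-1] <= threshold
for name, mask in [("low regime", low), ("high regime", ~low)]:
    est = np.sum(y[1:][mask] * y[:-1][mask]) / np.sum(y[:-1][mask] ** 2)
    print(name, "estimated AR coefficient:", round(est, 2))
```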

  11. Thresholding projection estimators in functional linear models

    Cardot, Hervé; Johannes, Jan

    2010-01-01

    We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimators which combine dimension reduction and thresholding. The introduction of a threshold rule allows to get consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis which permits to get easily mean squ...

  12. Decomposed Implicit Models of Piecewise - Linear Networks

    J. Brzobohaty

    1992-05-01

    Full Text Available The general matrix form of the implicit description of a piecewise-linear (PWL) network and the symbolic block diagram of the corresponding circuit model are proposed. Their decomposed forms enable us to determine quite separately the existence of the individual breakpoints of the resultant PWL characteristic and their coordinates using independent network parameters. For the two-diode and three-diode cases all the attainable types of the PWL characteristic are introduced.

  13. From spiking neuron models to linear-nonlinear models.

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of the parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
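
    A minimal sketch of an LN cascade is shown below, with an arbitrary temporal filter and sigmoidal nonlinearity chosen purely for illustration; the analytically derived filters discussed in the paper are model-specific and not reproduced here.

```python
# Linear-nonlinear cascade: convolve the input with a temporal filter, then
# pass the result through a static nonlinearity to obtain a firing rate.
import numpy as np

dt = 0.001                                            # s
t_filter = np.arange(0, 0.1, dt)
linear_filter = t_filter * np.exp(-t_filter / 0.02)   # illustrative kernel
linear_filter /= linear_filter.sum() * dt             # normalize to unit area

def static_nonlinearity(u, r_max=100.0, u_half=1.0, slope=4.0):
    """Sigmoidal mapping from filtered input to firing rate (spikes/s)."""
    return r_max / (1.0 + np.exp(-slope * (u - u_half)))

rng = np.random.default_rng(4)
stimulus = 1.0 + 0.5 * rng.standard_normal(5000)      # noisy input signal
filtered = np.convolve(stimulus, linear_filter, mode="full")[:len(stimulus)] * dt
rate = static_nonlinearity(filtered)
print("mean firing rate (Hz):", rate.mean())
```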

  14. Modelling and analysing oriented fibrous structures

    Rantala, M; Lassas, M; Siltanen, S; Sampo, J; Takalo, J; Timonen, J

    2014-01-01

    A mathematical model for fibrous structures using a direction dependent scaling law is presented. The orientation of fibrous nets (e.g. paper) is analysed with a method based on the curvelet transform. The curvelet-based orientation analysis has been tested successfully on real data from paper samples: the major directions of fibre-fibre orientation can apparently be recovered. Similar results are achieved in tests on data simulated by the new model, allowing a comparison with ground truth.

  15. Stochastic linear programming models, theory, and computation

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  16. Random effect selection in generalised linear models

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to onfarm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...

  17. The number of subjects per variable required in linear regression analyses

    P.C. Austin (Peter); E.W. Steyerberg (Ewout)

    2015-01-01

    Objectives: To determine the number of independent variables that can be included in a linear regression model. Study Design and Setting: We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression

  18. Externalizing Behaviour for Analysing System Models

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats and especially insider threats. Especially for the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside attackers. Therefore, many attacks are considerably easier to be performed for insiders than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex, if not impossible task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make the behaviour a separate component in system models, and explore how to integrate...

  19. Linear accelerator modeling: development and application

    Jameson, R.A.; Jule, W.D.

    1977-01-01

    Most of the parameters of a modern linear accelerator can be selected by simulating the desired machine characteristics in a computer code and observing how the parameters affect the beam dynamics. The code PARMILA is used at LAMPF for the low-energy portion of linacs. Collections of particles can be traced with a free choice of input distributions in six-dimensional phase space. Random errors are often included in order to study the tolerances which should be imposed during manufacture or in operation. An outline is given of the modifications made to the model, the results of experiments which indicate the validity of the model, and the use of the model to optimize the longitudinal tuning of the Alvarez linac

  20. Running vacuum cosmological models: linear scalar perturbations

    Perico, E.L.D. [Instituto de Física, Universidade de São Paulo, Rua do Matão 1371, CEP 05508-090, São Paulo, SP (Brazil); Tamayo, D.A., E-mail: elduartep@usp.br, E-mail: tamayo@if.usp.br [Departamento de Astronomia, Universidade de São Paulo, Rua do Matão 1226, CEP 05508-900, São Paulo, SP (Brazil)

    2017-08-01

    In cosmology, phenomenologically motivated expressions for running vacuum are commonly parameterized as linear functions, typically denoted by Λ(H²) or Λ(R). Such models assume an equation of state for the vacuum given by P̄_Λ = −ρ̄_Λ, relating its background pressure P̄_Λ with its mean energy density ρ̄_Λ ≡ Λ/8πG. This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most of the approaches studying the observational impact of these models only consider the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely ρ̄_Λ = Σ_i ρ̄_Λi. Each vacuum component Λ_i is associated and interacting with one of the i matter components at both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in those two scenarios, and identify the running vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the Λ(H²) scenario the vacuum is coupled with every matter component, whereas the Λ(R) description only leads to a coupling between vacuum and non-relativistic matter, producing different effects on the matter power spectrum.

  1. Linear Parametric Model Checking of Timed Automata

    Hune, Tohmas Seidelin; Romijn, Judi; Stoelinga, Mariëlle

    2001-01-01

    We present an extension of the model checker Uppaal capable of synthesizing linear parameter constraints for the correctness of parametric timed automata. The symbolic representation of the (parametric) state-space is shown to be correct. A second contribution of this paper is the identification of a subclass of parametric timed automata (L/U automata), for which the emptiness problem is decidable, contrary to the full class where it is known to be undecidable. Also we present a number of lemmas enabling the verification effort to be reduced for L/U automata in some cases. We illustrate our approach...

  2. Non Linear Analyses for the Evaluation of Seismic Behavior of Mixed R.C.-Masonry Structures

    Liberatore, Laura; Tocci, Cesare; Masiani, Renato

    2008-01-01

    In this work the seismic behavior of masonry buildings with a mixed structural system, consisting of perimeter masonry walls and internal r.c. frames, is studied by means of non-linear static (pushover) analyses. Several aspects, such as the distribution of the seismic action between the masonry and r.c. elements, the local and global behavior of the structure, the failure of the connections and the attainment of the ultimate strength of the whole structure, are examined. The influence of some parameters, such as the masonry compressive and tensile strength, on the structural behavior is investigated. The numerical analyses are also repeated on a building in which the internal r.c. frames are replaced with masonry walls.

  3. Model Selection with the Linear Mixed Model for Longitudinal Data

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  4. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables and to determine the accuracy of the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances.

  5. The number of subjects per variable required in linear regression analyses.

    Austin, Peter C; Steyerberg, Ewout W

    2015-06-01

    To determine the number of independent variables that can be included in a linear regression model. We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression coefficients and standard errors, on the empirical coverage of estimated confidence intervals, and on the accuracy of the estimated R(2) of the fitted model. A minimum of approximately two SPV tended to result in estimation of regression coefficients with relative bias of less than 10%. Furthermore, with this minimum number of SPV, the standard errors of the regression coefficients were accurately estimated and estimated confidence intervals had approximately the advertised coverage rates. A much higher number of SPV were necessary to minimize bias in estimating the model R(2), although adjusted R(2) estimates behaved well. The bias in estimating the model R(2) statistic was inversely proportional to the magnitude of the proportion of variation explained by the population regression model. Linear regression models require only two SPV for adequate estimation of regression coefficients, standard errors, and confidence intervals. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
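
    A hedged re-creation of the kind of Monte Carlo experiment described, with assumed values for the number of predictors, true coefficients and SPV levels (not the authors' simulation design):

```python
# How the number of subjects per variable (SPV) affects coefficient estimation.
import numpy as np

rng = np.random.default_rng(5)
n_vars, true_beta, n_sims = 10, 0.5, 2000

for spv in (2, 5, 15, 50):
    n = spv * n_vars
    est = np.empty(n_sims)
    for s in range(n_sims):
        X = rng.standard_normal((n, n_vars))
        y = X @ np.full(n_vars, true_beta) + rng.standard_normal(n)
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        est[s] = beta_hat[0]
    rel_bias = (est.mean() - true_beta) / true_beta
    print(f"SPV={spv:2d}: mean estimate={est.mean():.3f}, relative bias={rel_bias:+.1%}")
```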

  6. Electron Model of Linear-Field FFAG

    Koscielniak, Shane R

    2005-01-01

    A fixed-field alternating-gradient accelerator (FFAG) that employs only linear-field elements ushers in a new regime in accelerator design and dynamics. The linear-field machine has the ability to compact an unprecedented range in momenta within a small component aperture. With a tune variation which results from the natural chromaticity, the beam crosses many strong, uncorrec-table, betatron resonances during acceleration. Further, relativistic particles in this machine exhibit a quasi-parabolic time-of-flight that cannot be addressed with a fixed-frequency rf system. This leads to a new concept of bucketless acceleration within a rotation manifold. With a large energy jump per cell, there is possibly strong synchro-betatron coupling. A few-MeV electron model has been proposed to demonstrate the feasibility of these untested acceleration features and to investigate them at length under a wide range of operating conditions. This paper presents a lattice optimized for a 1.3 GHz rf, initial technology choices f...

  7. Linear models in the mathematics of uncertainty

    Mordeson, John N; Clark, Terry D; Pham, Alex; Redmond, Michael A

    2013-01-01

    The purpose of this book is to present new mathematical techniques for modeling global issues. These mathematical techniques are used to determine linear equations between a dependent variable and one or more independent variables in cases where standard techniques such as linear regression are not suitable. In this book, we examine cases where the number of data points is small (effects of nuclear warfare), where the experiment is not repeatable (the breakup of the former Soviet Union), and where the data is derived from expert opinion (how conservative is a political party). In all these cases the data  is difficult to measure and an assumption of randomness and/or statistical validity is questionable.  We apply our methods to real world issues in international relations such as  nuclear deterrence, smart power, and cooperative threat reduction. We next apply our methods to issues in comparative politics such as successful democratization, quality of life, economic freedom, political stability, and fail...

  8. Generalized Linear Models in Vehicle Insurance

    Silvie Kafková

    2014-01-01

    Full Text Available Actuaries in insurance companies try to find the best model for an estimation of insurance premium. It depends on many risk factors, e.g. the car characteristics and the profile of the driver. In this paper, an analysis of a portfolio of vehicle insurance data using a generalized linear model (GLM) is performed. The main advantage of the approach presented in this article is that the GLMs are not limited by inflexible preconditions. Our aim is to predict the relation of annual claim frequency to given risk factors. Based on a large real-world sample of data from 57 410 vehicles, the present study proposes a classification analysis approach that addresses the selection of predictor variables. The models with different predictor variables are compared by analysis of deviance and the Akaike information criterion (AIC). Based on this comparison, the model for the best estimate of annual claim frequency is chosen. All statistical calculations are computed in the R environment, which contains the stats package with functions for the estimation of the parameters of a GLM and for analysis of deviance.

  9. Linear time delay methods and stability analyses of the human spine. Effects of neuromuscular reflex response.

    Franklin, Timothy C; Granata, Kevin P; Madigan, Michael L; Hendricks, Scott L

    2008-08-01

    Linear stability methods were applied to a biomechanical model of the human musculoskeletal spine to investigate effects of reflex gain and reflex delay on stability. Equations of motion represented a dynamic 18 degrees-of-freedom rigid-body model with time-delayed reflexes. Optimal muscle activation levels were identified by minimizing metabolic power with the constraints of equilibrium and stability with zero reflex time delay. Muscle activation levels and associated muscle forces were used to find the delay margin, i.e., the maximum reflex delay for which the system was stable. Results demonstrated that stiffness due to antagonistic co-contraction necessary for stability declined with increased proportional reflex gain. Reflex delay limited the maximum acceptable proportional reflex gain, i.e., long reflex delay required smaller maximum reflex gain to avoid instability. As differential reflex gain increased, there was a small increase in acceptable reflex delay. However, differential reflex gain with values near intrinsic damping caused the delay margin to approach zero. Forward-dynamic simulations of the fully nonlinear time-delayed system verified the linear results. The linear methods accurately found the delay margin below which the nonlinear system was asymptotically stable. These methods may aid future investigations in the role of reflexes in musculoskeletal stability.

  10. Bayesian uncertainty analyses of probabilistic risk models

    Pulkkinen, U.

    1989-01-01

    Applications of Bayesian principles to the uncertainty analyses are discussed in the paper. A short review of the most important uncertainties and their causes is provided. An application of the principle of maximum entropy to the determination of Bayesian prior distributions is described. An approach based on so called probabilistic structures is presented in order to develop a method of quantitative evaluation of modelling uncertainties. The method is applied to a small example case. Ideas for application areas for the proposed method are discussed

  11. Nonlinear price impact from linear models

    Patzelt, Felix; Bouchaud, Jean-Philippe

    2017-12-01

    The impact of trades on asset prices is a crucial aspect of market dynamics for academics, regulators, and practitioners alike. Recently, universal and highly nonlinear master curves were observed for price impacts aggregated on all intra-day scales (Patzelt and Bouchaud 2017 arXiv:1706.04163). Here we investigate how well these curves, their scaling, and the underlying return dynamics are captured by linear ‘propagator’ models. We find that the classification of trades as price-changing versus non-price-changing can explain the price impact nonlinearities and short-term return dynamics to a very high degree. The explanatory power provided by the change indicator in addition to the order sign history increases with increasing tick size. To obtain these results, several long-standing technical issues for model calibration and testing are addressed. We present new spectral estimators for two- and three-point cross-correlations, removing the need for previously used approximations. We also show when calibration is unbiased and how to accurately reveal previously overlooked biases. Therefore, our results contribute significantly to understanding both recent empirical results and the properties of a popular class of impact models.

  12. Linear Equating for the NEAT Design: Parameter Substitution Models and Chained Linear Relationship Models

    Kane, Michael T.; Mroch, Andrew A.; Suh, Youngsuk; Ripkey, Douglas R.

    2009-01-01

    This paper analyzes five linear equating models for the "nonequivalent groups with anchor test" (NEAT) design with internal anchors (i.e., the anchor test is part of the full test). The analysis employs a two-dimensional framework. The first dimension contrasts two general approaches to developing the equating relationship. Under a "parameter…

  13. Siemens experience on linear and nonlinear analyses of out-of-phase BWR instabilities

    Kreuter, D.; Wehle, F.

    1995-01-01

    The Siemens design code STAIF has been applied extensively for linear analysis of BWR instabilities. The comparison between measurements and STAIF calculations for different plants under various conditions has shown good agreement for core-wide and regional instabilities. Based on the high quality of STAIF, the North German TÜV has decided to replace the licensing requirement of extensive stability measurements by predictive analyses with the code STAIF. Nonlinear stability analysis for beyond-design boundary conditions with RAMONA has shown dryout during temporarily reversed flow at the core inlet in the case of core-wide oscillations. For large out-of-phase oscillations, dryout occurs already at small, still positive channel inlet flow. (orig.)

  14. Robust Linear Models for Cis-eQTL Analysis.

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly in respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
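
    The contrast between a conventional and a robust fit can be sketched for a single simulated gene/SNP pair; the simulated dosages, heavy-tailed noise, injected outliers and use of statsmodels' Huber M-estimator are illustrative assumptions rather than the paper's pipeline.

```python
# Ordinary least squares vs. a robust linear model (Huber M-estimator) for one
# simulated gene/SNP pair with heavy-tailed noise and a few outliers.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 300
genotype = rng.integers(0, 3, n).astype(float)                     # allelic dosage 0/1/2
expression = 5.0 + 0.3 * genotype + rng.standard_t(df=3, size=n)   # heavy-tailed noise
expression[:5] += 8.0                                               # a few outlying samples

X = sm.add_constant(genotype)
ols_fit = sm.OLS(expression, X).fit()
rlm_fit = sm.RLM(expression, X, M=sm.robust.norms.HuberT()).fit()
print("OLS   slope: %.3f (SE %.3f)" % (ols_fit.params[1], ols_fit.bse[1]))
print("Huber slope: %.3f (SE %.3f)" % (rlm_fit.params[1], rlm_fit.bse[1]))
```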

  15. Linear mixed models a practical guide using statistical software

    West, Brady T; Galecki, Andrzej T

    2014-01-01

    Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM.New to the Second Edition A new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models Power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...

  16. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  17. From linear to generalized linear mixed models: A case study in repeated measures

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  18. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

    2017-02-01

    This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences in correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than the linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.05). For wrist-worn accelerometers, ANN models offer a significant improvement in EE prediction accuracy over linear models. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh

  19. Evaluating the double Poisson generalized linear model.

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
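
    The study's own approximation of the double Poisson normalizing constant is not reproduced here; the sketch below simply normalizes Efron's (1986) unnormalized mass function by truncated summation, which illustrates why no closed form exists (at θ = 1 the constant should be close to 1, since the model reduces to the Poisson).

```python
# Sketch: numerically approximate the double Poisson normalizing constant by
# truncated summation of the unnormalized mass function (a generic illustration,
# not the approximation method proposed in the paper).
import numpy as np
from scipy.special import gammaln

def dp_log_unnormalized(y, mu, theta):
    """Log of the unnormalized double Poisson pmf at integer y >= 0."""
    y = np.asarray(y, dtype=float)
    safe_y = np.where(y > 0, y, 1.0)                       # avoid log(0); terms vanish at y = 0
    ylogy = np.where(y > 0, y * np.log(safe_y), 0.0)
    tail = np.where(y > 0, theta * y * (1.0 + np.log(mu) - np.log(safe_y)), 0.0)
    return 0.5 * np.log(theta) - theta * mu - y + ylogy - gammaln(y + 1.0) + tail

def dp_normalizing_constant(mu, theta, y_max=1000):
    """Sum of the unnormalized pmf; its reciprocal rescales the pmf to sum to 1."""
    y = np.arange(y_max + 1)
    return np.exp(dp_log_unnormalized(y, mu, theta)).sum()

for theta in (0.5, 1.0, 2.0):        # theta < 1: over-dispersion, theta > 1: under-dispersion
    print(theta, dp_normalizing_constant(mu=5.0, theta=theta))
```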

  20. Petri Nets as Models of Linear Logic

    Engberg, Uffe Henrik; Winskel, Glynn

    1990-01-01

    The chief purpose of this paper is to appraise the feasibility of Girard's linear logic as a specification language for parallel processes. To this end we propose an interpretation of linear logic in Petri nets, with respect to which we investigate the expressive power of the logic...

  1. Robust artificial neural network for reliability and sensitivity analyses of complex non-linear systems.

    Oparaji, Uchenna; Sheu, Rong-Jiun; Bankhead, Mark; Austin, Jonathan; Patelli, Edoardo

    2017-12-01

    Artificial Neural Networks (ANNs) are commonly used in place of expensive models to reduce the computational burden required for uncertainty quantification, reliability and sensitivity analyses. An ANN with a selected architecture is trained with the back-propagation algorithm from a few data points representative of the input/output relationship of the underlying model of interest. However, different-performing ANNs might be obtained from the same training data as a result of the random initialization of the weight parameters in each network, leading to uncertainty in selecting the best-performing ANN. On the other hand, using cross-validation to select the ANN with the highest R² value can lead to bias in the prediction, because R² cannot determine whether the prediction made by an ANN is biased. Additionally, R² does not indicate whether a model is adequate, as it is possible to have a low R² for a good model and a high R² for a bad model. Hence, in this paper, we propose an approach to improve the robustness of predictions made by ANNs. The approach is based on a systematic combination of identically trained ANNs, coupling the Bayesian framework with model averaging. Additionally, the uncertainties of the robust prediction derived from the approach are quantified in terms of confidence intervals. To demonstrate the applicability of the proposed approach, two synthetic numerical examples are presented. Finally, the proposed approach is used to perform reliability and sensitivity analyses on a process simulation model of a UK nuclear effluent treatment plant developed by the National Nuclear Laboratory (NNL) and treated in this study as a black box, employing a set of training data as a test case. This model has been extensively validated against plant and experimental data and used to support the UK effluent discharge strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Linear approximation model network and its formation via ...

    To overcome the deficiency of `local model network' (LMN) techniques, an alternative `linear approximation model' (LAM) network approach is proposed. Such a network models a nonlinear or practical system with multiple linear models fitted along operating trajectories, where individual models are simply networked ...

  3. YALINA Booster subcritical assembly modeling and analyses

    Talamo, A.; Gohar, Y.; Aliberti, G.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Sadovich, S.

    2010-01-01

    Full text: Accurate simulation models of the YALINA Booster assembly of the Joint Institute for Power and Nuclear Research (JIPNR)-Sosny, Belarus have been developed by Argonne National Laboratory (ANL) of the USA. YALINA-Booster has coupled zones operating with fast and thermal neutron spectra, which requires a special attention in the modelling process. Three different uranium enrichments of 90%, 36% or 21% were used in the fast zone and 10% uranium enrichment was used in the thermal zone. Two of the most advanced Monte Carlo computer programs have been utilized for the ANL analyses: MCNP of the Los Alamos National Laboratory and MONK of the British Nuclear Fuel Limited and SERCO Assurance. The developed geometrical models for both computer programs modelled all the details of the YALINA Booster facility as described in the technical specifications defined in the International Atomic Energy Agency (IAEA) report without any geometrical approximation or material homogenization. Materials impurities and the measured material densities have been used in the models. The obtained results for the neutron multiplication factors calculated in criticality mode (keff) and in source mode (ksrc) with an external neutron source from the two Monte Carlo programs are very similar. Different external neutron sources have been investigated including californium, deuterium-deuterium (D-D), and deuterium-tritium (D-T) neutron sources. The spatial neutron flux profiles and the neutron spectra in the experimental channels were calculated. In addition, the kinetic parameters were defined including the effective delayed neutron fraction, the prompt neutron lifetime, and the neutron generation time. A new calculation methodology has been developed at ANL to simulate the pulsed neutron source experiments. In this methodology, the MCNP code is used to simulate the detector response from a single pulse of the external neutron source and a C code is used to superimpose the pulse until the

  4. Linear regression crash prediction models : issues and proposed solutions.

    2010-05-01

    The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions; namely error structure normality ...

  5. Game Theory and its Relationship with Linear Programming Models ...

    Game Theory and its Relationship with Linear Programming Models. ... This paper shows that game theory and the linear programming problem are closely related subjects, since any computing method devised for ...

  6. A Note on the Identifiability of Generalized Linear Mixed Models

    Labouriau, Rodrigo

    2014-01-01

    I present here a simple proof that, under general regularity conditions, the standard parametrization of generalized linear mixed models is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with dispersion parameter are identifiable when equipped with the standard parametrization...

  7. Study of the critical behavior of the O(N) linear and nonlinear sigma models

    Graziani, F.R.

    1983-01-01

    A study of the large N behavior of both the O(N) linear and nonlinear sigma models is presented. The purpose is to investigate the relationship between the disordered (ordered) phase of the linear and nonlinear sigma models. Utilizing operator product expansions and stability analyses, it is shown that for 2 < d < 4 it is the λ_R(M) → λ* limit (λ_R(M) is the dimensionless renormalized quartic coupling and λ* is the IR fixed point) of the linear sigma model which yields the nonlinear sigma model. It is also shown that stable large N linear sigma models (λ > 0) and nonlinear models are trivial. This result (i.e., triviality) is well known but only for one and two component models. Interestingly enough, the λ < 0, d = 4 linear sigma model remains nontrivial and tachyon free

  8. Equivalent linear damping characterization in linear and nonlinear force-stiffness muscle models.

    Ovesy, Marzieh; Nazari, Mohammad Ali; Mahdavian, Mohammad

    2016-02-01

    In the current research, the muscle equivalent linear damping coefficient which is introduced as the force-velocity relation in a muscle model and the corresponding time constant are investigated. In order to reach this goal, a 1D skeletal muscle model was used. Two characterizations of this model using a linear force-stiffness relationship (Hill-type model) and a nonlinear one have been implemented. The OpenSim platform was used for verification of the model. The isometric activation has been used for the simulation. The equivalent linear damping and the time constant of each model were extracted by using the results obtained from the simulation. The results provide a better insight into the characteristics of each model. It is found that the nonlinear models had a response rate closer to the reality compared to the Hill-type models.

  9. Explicit estimating equations for semiparametric generalized linear latent variable models

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  10. An online re-linearization scheme suited for Model Predictive and Linear Quadratic Control

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the equations for the primal-dual interior-point quadratic programming problem solver used for MPC. The algorithm exploits the special structure of the MPC problem and is able to reduce the computational burden such that the computational burden scales with prediction horizon length in a linear way rather than cubic, which would be the case if the structure was not exploited. It is also shown how models used for design of model-based controllers, e.g. linear quadratic and model predictive, can be linearized both at equilibrium and non-equilibrium points, making...

  11. Parametric Linear Hybrid Automata for Complex Environmental Systems Modeling

    Samar Hayat Khan Tareen

    2015-07-01

    Full Text Available Environmental systems, whether they be weather patterns or predator-prey relationships, are dependent on a number of different variables, each directly or indirectly affecting the system at large. Since not all of these factors are known, these systems take on non-linear dynamics, making it difficult to accurately predict meaningful behavioral trends far into the future. However, such dynamics do not warrant complete ignorance of different efforts to understand and model close approximations of these systems. Towards this end, we have applied a logical modeling approach to model and analyze the behavioral trends and systematic trajectories that these systems exhibit without delving into their quantification. This approach, formalized by René Thomas for discrete logical modeling of Biological Regulatory Networks (BRNs and further extended in our previous studies as parametric biological linear hybrid automata (Bio-LHA, has been previously employed for the analyses of different molecular regulatory interactions occurring across various cells and microbial species. As relationships between different interacting components of a system can be simplified as positive or negative influences, we can employ the Bio-LHA framework to represent different components of the environmental system as positive or negative feedbacks. In the present study, we highlight the benefits of hybrid (discrete/continuous modeling which lead to refinements among the fore-casted behaviors in order to find out which ones are actually possible. We have taken two case studies: an interaction of three microbial species in a freshwater pond, and a more complex atmospheric system, to show the applications of the Bio-LHA methodology for the timed hybrid modeling of environmental systems. Results show that the approach using the Bio-LHA is a viable method for behavioral modeling of complex environmental systems by finding timing constraints while keeping the complexity of the model

  12. Tried and True: Springing into Linear Models

    Darling, Gerald

    2012-01-01

    In eighth grade, students usually learn about forces in science class and linear relationships in math class, crucial topics that form the foundation for further study in science and engineering. An activity that links these two fundamental concepts involves measuring the distance a spring stretches as a function of how much weight is suspended…

  13. Model Predictive Control for Linear Complementarity and Extended Linear Complementarity Systems

    Bambang Riyanto

    2005-11-01

    Full Text Available In this paper, we propose model predictive control method for linear complementarity and extended linear complementarity systems by formulating optimization along prediction horizon as mixed integer quadratic program. Such systems contain interaction between continuous dynamics and discrete event systems, and therefore, can be categorized as hybrid systems. As linear complementarity and extended linear complementarity systems finds applications in different research areas, such as impact mechanical systems, traffic control and process control, this work will contribute to the development of control design method for those areas as well, as shown by three given examples.

  14. Ordinal Log-Linear Models for Contingency Tables

    Brzezińska Justyna

    2016-12-01

    Full Text Available A log-linear analysis is a method providing a comprehensive scheme to describe the association for categorical variables in a contingency table. The log-linear model specifies how the expected counts depend on the levels of the categorical variables for these cells and provide detailed information on the associations. The aim of this paper is to present theoretical, as well as empirical, aspects of ordinal log-linear models used for contingency tables with ordinal variables. We introduce log-linear models for ordinal variables: linear-by-linear association, row effect model, column effect model and RC Goodman’s model. Algorithm, advantages and disadvantages will be discussed in the paper. An empirical analysis will be conducted with the use of R.
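
    The paper's empirical analysis is in R; a rough Python analogue of the simplest ordinal log-linear model it describes, the linear-by-linear association model fitted as a Poisson GLM with integer row and column scores, is sketched below on an invented contingency table.

```python
# Sketch: linear-by-linear association log-linear model for an ordinal
# contingency table, fitted as a Poisson GLM (invented counts, integer scores).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

counts = [20, 15, 10, 5,
          12, 18, 14, 9,
           5, 11, 17, 23]                    # 3 ordinal rows x 4 ordinal columns
rows, cols = 3, 4
df = pd.DataFrame({
    "count": counts,
    "row": [i for i in range(rows) for _ in range(cols)],
    "col": [j for _ in range(rows) for j in range(cols)],
})
df["u"] = df["row"]                          # integer row scores
df["v"] = df["col"]                          # integer column scores

# Independence model plus a single linear-by-linear association parameter.
model = smf.glm("count ~ C(row) + C(col) + u:v", data=df,
                family=sm.families.Poisson())
fit = model.fit()
print(fit.params["u:v"])                     # log odds ratio per unit step in both scores
```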

  15. Recent Updates to the GEOS-5 Linear Model

    Holdaway, Dan; Kim, Jong G.; Errico, Ron; Gelaro, Ronald; Mahajan, Rahul

    2014-01-01

    Global Modeling and Assimilation Office (GMAO) is close to having a working 4DVAR system and has developed a linearized version of GEOS-5.This talk outlines a series of improvements made to the linearized dynamics, physics and trajectory.Of particular interest is the development of linearized cloud microphysics, which provides the framework for 'all-sky' data assimilation.

  16. Bayesian uncertainty quantification in linear models for diffusion MRI.

    Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans

    2018-03-29

    Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification. Copyright © 2018 Elsevier Inc. All rights reserved.
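
    The closed-form posterior the abstract refers to is, in its simplest form, the standard conjugate result for a linear model with Gaussian noise and a Gaussian coefficient prior; the sketch below illustrates it on a generic design matrix with a known noise variance, without any of the dMRI-specific bases (DTI, MAP-MRI, CSD) used in the paper.

```python
# Sketch: conjugate Bayesian linear regression, y = X w + noise, with a Gaussian
# prior on w and known noise variance. Any affine quantity a @ w then has a
# Gaussian posterior with closed-form mean and variance.
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 4
X = rng.normal(size=(n, p))
w_true = np.array([1.0, -0.5, 0.0, 2.0])
sigma2 = 0.25                                          # known noise variance
y = X @ w_true + rng.normal(0, np.sqrt(sigma2), n)

tau2 = 10.0                                            # prior variance of coefficients
prior_prec = np.eye(p) / tau2
post_cov = np.linalg.inv(X.T @ X / sigma2 + prior_prec)
post_mean = post_cov @ (X.T @ y / sigma2)

a = np.array([1.0, 1.0, 0.0, 0.0])                     # an affine function a @ w of interest
q_mean = a @ post_mean
q_sd = np.sqrt(a @ post_cov @ a)
print(f"posterior of a@w: {q_mean:.3f} +/- {1.96 * q_sd:.3f}")   # approx. 95% interval
```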

  17. Double generalized linear compound poisson models to insurance claims data

    Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo

    2017-01-01

    This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed by a degenerate distribution ... implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurances ...

  18. Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis

    Luo, Wen; Azen, Razia

    2013-01-01

    Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…

  19. Thurstonian models for sensory discrimination tests as generalized linear models

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data, and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard...
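
    For one of the four protocols mentioned, the 2-AFC test, the Thurstonian link has the simple closed form p_c = Φ(d′/√2); a sketch of the resulting estimate and a delta-method standard error is given below (the triangle and duo-trio links are more involved and are not reproduced, nor is the GLM formulation itself).

```python
# Sketch: Thurstonian d' and its standard error for a 2-AFC test, where
# p_correct = Phi(d'/sqrt(2)). Triangle and duo-trio decision rules differ.
import numpy as np
from scipy.stats import norm

n_correct, n_trials = 36, 50                 # illustrative counts
pc = n_correct / n_trials

d_prime = np.sqrt(2.0) * norm.ppf(pc)

# Delta method: se(d') = |d d'/d pc| * se(pc).
se_pc = np.sqrt(pc * (1 - pc) / n_trials)
se_d = np.sqrt(2.0) / norm.pdf(norm.ppf(pc)) * se_pc
print(f"d' = {d_prime:.2f} (SE {se_d:.2f})")
```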

  20. Forecasting Volatility of Dhaka Stock Exchange: Linear Vs Non-linear models

    Masudul Islam

    2012-10-01

    Full Text Available Prior information about a financial market is very essential for investor to invest money on parches share from the stock market which can strengthen the economy. The study examines the relative ability of various models to forecast daily stock indexes future volatility. The forecasting models that employed from simple to relatively complex ARCH-class models. It is found that among linear models of stock indexes volatility, the moving average model ranks first using root mean square error, mean absolute percent error, Theil-U and Linex loss function  criteria. We also examine five nonlinear models. These models are ARCH, GARCH, EGARCH, TGARCH and restricted GARCH models. We find that nonlinear models failed to dominate linear models utilizing different error measurement criteria and moving average model appears to be the best. Then we forecast the next two months future stock index price volatility by the best (moving average model.
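
    A hedged sketch of the moving-average volatility forecast the study finds competitive; the window length and the synthetic return series are arbitrary choices, and the ARCH-class alternatives are not shown.

```python
# Sketch: moving-average volatility forecast for daily returns, scored against
# absolute returns as a crude realized-volatility proxy. Window and data are
# illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
returns = pd.Series(rng.standard_t(df=5, size=1000) * 0.01)   # synthetic daily returns

window = 20
# Forecast for day t: standard deviation of the previous `window` returns.
sigma_hat = returns.rolling(window).std().shift(1)

realized = returns.abs()
err = (sigma_hat - realized).dropna()
rmse = np.sqrt((err ** 2).mean())
mae = err.abs().mean()
print(f"RMSE = {rmse:.4f}, MAE = {mae:.4f}")
```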

  1. Generalised linear models for correlated pseudo-observations, with applications to multi-state models

    Andersen, Per Kragh; Klein, John P.; Rosthøj, Susanne

    2003-01-01

    Generalised estimating equation; Generalised linear model; Jackknife pseudo-value; Logistic regression; Markov model; Multi-state model

  2. Linear and non-linear autoregressive models for short-term wind speed forecasting

    Lydia, M.; Suresh Kumar, S.; Immanuel Selvakumar, A.; Edwin Prem Kumar, G.

    2016-01-01

    Highlights: • Models for wind speed prediction at 10-min intervals up to 1 h built on time-series wind speed data. • Four different multivariate models for wind speed built based on exogenous variables. • Non-linear models built using three data mining algorithms outperform the linear models. • Autoregressive models based on wind direction perform better than other models. - Abstract: Wind speed forecasting aids in estimating the energy produced from wind farms. The soaring energy demands of the world and minimal availability of conventional energy sources have significantly increased the role of non-conventional sources of energy like solar, wind, etc. Development of models for wind speed forecasting with higher reliability and greater accuracy is the need of the hour. In this paper, models for predicting wind speed at 10-min intervals up to 1 h have been built based on linear and non-linear autoregressive moving average models with and without external variables. The autoregressive moving average models based on wind direction and annual trends have been built using data obtained from Sotavento Galicia Plc. and autoregressive moving average models based on wind direction, wind shear and temperature have been built on data obtained from Centre for Wind Energy Technology, Chennai, India. While the parameters of the linear models are obtained using the Gauss–Newton algorithm, the non-linear autoregressive models are developed using three different data mining algorithms. The accuracy of the models has been measured using three performance metrics namely, the Mean Absolute Error, Root Mean Squared Error and Mean Absolute Percentage Error.
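
    A rough Python counterpart of the linear autoregressive baseline described here, using statsmodels' ARIMA and the same three error metrics; the model order, forecast horizon and synthetic 10-min series are placeholders, and the data-mining-based non-linear models are not reproduced.

```python
# Sketch: linear ARMA baseline for 10-min wind speed, scored with MAE/RMSE/MAPE.
# Series, model order and horizon are illustrative placeholders.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
n = 600
speed = np.empty(n)
speed[0] = 7.0
for t in range(1, n):                                  # synthetic persistent wind speed (m/s)
    speed[t] = 0.9 * speed[t - 1] + 0.7 + rng.normal(0, 0.6)

train, test = speed[:-6], speed[-6:]                   # hold out 1 h = six 10-min steps
fit = ARIMA(train, order=(2, 0, 1)).fit()
forecast = fit.forecast(steps=6)

err = forecast - test
mae = np.mean(np.abs(err))
rmse = np.sqrt(np.mean(err ** 2))
mape = np.mean(np.abs(err) / test) * 100
print(f"MAE = {mae:.2f}, RMSE = {rmse:.2f}, MAPE = {mape:.1f}%")
```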

  3. Linear collider signal of anomaly mediated supersymmetry breaking model

    Ghosh Dilip Kumar; Kundu, Anirban; Roy, Probir; Roy, Sourov

    2001-01-01

    Though the minimal model of anomaly mediated supersymmetry breaking has been significantly constrained by recent experimental and theoretical work, there are still allowed regions of the parameter space for moderate to large values of tan β. We show that these regions will be comprehensively probed in a √s = 1 TeV e⁺e⁻ linear collider. Diagnostic signals to this end are studied by zeroing in on a unique and distinct feature of a large class of models in this genre: a neutral winolike Lightest Supersymmetric Particle closely degenerate in mass with a winolike chargino. The pair production processes e⁺e⁻ → ẽ_L^± ẽ_L^±, ẽ_R^± ẽ_R^±, ẽ_L^± ẽ_R^±, ν̃ anti-ν̃, χ̃_1^0 χ̃_2^0, χ̃_2^0 χ̃_2^0 are all considered at √s = 1 TeV, corresponding to the proposed TESLA linear collider, in two natural categories of mass ordering in the sparticle spectra. The signals analysed comprise multiple combinations of fast charged leptons (any of which can act as the trigger) plus displaced vertices X_D (any of which can be identified by a heavy ionizing track terminating in the detector) and/or associated soft pions with characteristic momentum distributions. (author)

  4. Applicability of linear and non-linear potential flow models on a Wavestar float

    Bozonnet, Pauline; Dupin, Victor; Tona, Paolino

    2017-01-01

    Numerical models based on potential flow theory, including different types of nonlinearities, are compared and validated against experimental data for the Wavestar wave energy converter technology. Exact resolution of the rotational motion and non-linear hydrostatic and Froude-Krylov forces, as well as a model based on non-linear potential flow theory and the weak-scatterer hypothesis, are successively considered. Simple tests, such as dip tests, decay tests and captive tests, enable to highlight the improvements obtained with the introduction of nonlinearities. Float motion under wave action and without control action, limited to small amplitude motion with a single float, is well predicted by the numerical models, including the linear one. Still, float velocity is better predicted by accounting for non-linear hydrostatic and Froude-Krylov forces.

  5. A linear model of population dynamics

    Lushnikov, A. A.; Kagan, A. I.

    2016-08-01

    The Malthus process of population growth is reformulated in terms of the probability w(n,t) to find exactly n individuals at time t assuming that both the birth and the death rates are linear functions of the population size. The master equation for w(n,t) is solved exactly. It is shown that w(n,t) strongly deviates from the Poisson distribution and is expressed in terms either of Laguerre’s polynomials or a modified Bessel function. The latter expression allows for considerable simplifications of the asymptotic analysis of w(n,t).
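
    The claimed departure of w(n,t) from a Poisson distribution can be checked numerically; the sketch below integrates the birth-death master equation with simple proportional rates on a truncated state space, as an illustration rather than the paper's exact analytical solution.

```python
# Sketch: numerical solution of the linear birth-death master equation,
#   dw_n/dt = b(n-1) w_{n-1} + d(n+1) w_{n+1} - (b + d) n w_n,
# on a truncated state space; a Poisson law would have variance equal to the mean.
import numpy as np
from scipy.integrate import solve_ivp

b, d, n0, n_max, t_end = 1.0, 0.5, 10, 400, 2.0

def rhs(_, w):
    n = np.arange(n_max + 1)
    dw = -(b + d) * n * w
    dw[1:] += b * n[:-1] * w[:-1]        # births arriving from state n-1
    dw[:-1] += d * n[1:] * w[1:]         # deaths arriving from state n+1
    return dw

w0 = np.zeros(n_max + 1)
w0[n0] = 1.0                             # start with exactly n0 individuals
sol = solve_ivp(rhs, (0.0, t_end), w0, method="LSODA", rtol=1e-8, atol=1e-10)
w = sol.y[:, -1]

n = np.arange(n_max + 1)
mean = (n * w).sum()
var = ((n - mean) ** 2 * w).sum()
print(f"mean = {mean:.2f}, variance = {var:.2f}")
print("total probability (truncation check):", w.sum())
```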

  6. A test for the parameters of multiple linear regression models ...

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  7. Modeling Non-Linear Material Properties in Composite Materials

    2016-06-28

    Technical Report ARWSB-TR-16013: Modeling Non-Linear Material Properties in Composite Materials. Michael F. Macri, Andrew G... ... systems are increasingly incorporating composite materials into their design. Many of these systems subject the composites to environmental conditions

  8. Reliability modelling and simulation of switched linear system ...

    Reliability modelling and simulation of switched linear system control using temporal databases. ... design of fault-tolerant real-time switching systems control and modelling embedded micro-schedulers for complex systems maintenance.

  9. Multivariate statistical modelling based on generalized linear models

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  10. Approximating chiral quark models with linear σ-models

    Broniowski, Wojciech; Golli, Bojan

    2003-01-01

    We study the approximation of chiral quark models with simpler models, obtained via gradient expansion. The resulting Lagrangian of the type of the linear σ-model contains, at the lowest level of the gradient-expanded meson action, an additional term of the form (1/2)A(σ∂_μσ + π·∂_μπ)². We investigate the dynamical consequences of this term and its relevance to the phenomenology of the soliton models of the nucleon. It is found that the inclusion of the new term allows for a more efficient approximation of the underlying quark theory, especially in those cases where dynamics allows for a large deviation of the chiral fields from the chiral circle, such as in quark models with non-local regulators. This is of practical importance, since the σ-models with valence quarks only are technically much easier to treat and simpler to solve than the quark models with the full-fledged Dirac sea

  11. Analysing Feature Model Changes using FMDiff

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large-scale, highly variable system is a challenging task. For such a system, evolution operations often require updating both the implementation and the feature model consistently. In this context, the evolution of the feature model closely follows the evolution of the system.

  12. Latent log-linear models for handwritten digit classification.

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.

  13. Comparison of linear, skewed-linear, and proportional hazard models for the analysis of lambing interval in Ripollesa ewes.

    Casellas, J; Bach, R

    2012-06-01

    Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.

  14. Ion acoustic waves in pair-ion plasma: Linear and nonlinear analyses

    Saeed, R.; Mushtaq, A.

    2009-01-01

    Linear and nonlinear properties of low frequency ion acoustic wave (IAW) in pair-ion plasma in the presence of electrons are investigated. The dispersion relation and Kadomtsev-Petviashvili equation for linear/nonlinear IAW are derived from sets of hydrodynamic equations where the ion pairs are inertial while electrons are Boltzmannian. The dispersion curves for various concentrations of electrons are discussed and compared with experimental results. The predicted linear IAW propagates at the same frequencies as those of the experimentally observed IAW if n_e0 ∼ 10⁴ cm⁻³. It is found that nonlinear profile of the ion acoustic solitary waves is significantly affected by the percentage ratio of electron number density and temperature. It is also determined that rarefactive solitary waves can propagate in this system. It is hoped that the results presented in this study would be helpful in understanding the salient features of the finite amplitude localized ion acoustic solitary pulses in a laboratory fullerene plasma.

  15. Linear Regression Models for Estimating True Subsurface ...


    The objective is to minimize the processing time and computer memory required to carry out inversion ... to the mainland by two long bridges ... In this approach, the model converges when the squared sum of the differences ...

  16. Numerical modelling in non linear fracture mechanics

    Viggo Tvergaard

    2007-07-01

    Full Text Available Some numerical studies of crack propagation are based on using constitutive models that account for damage evolution in the material. When a critical damage value has been reached in a material point, it is natural to assume that this point has no more carrying capacity, as is done numerically in the element vanish technique. In the present review this procedure is illustrated for micromechanically based material models, such as a ductile failure model that accounts for the nucleation and growth of voids to coalescence, and a model for intergranular creep failure with diffusive growth of grain boundary cavities leading to micro-crack formation. The procedure is also illustrated for low cycle fatigue, based on continuum damage mechanics. In addition, the possibility of crack growth predictions for elastic-plastic solids using cohesive zone models to represent the fracture process is discussed.

  17. An electronic probe micro-analyser. A linear scan device; Microanalyseur a sonde electronique. Dispositif de balayage lineaire

    Kirianenko, A; Maurice, F [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1963-07-01

    The Castaing electronic probe micro-analyser makes possible static analysis at successive points. For two years this apparatus has been equipped by its constructor with an automatic device for surface scanning. In order to increase the micro-analyser's efficiency, a 'linear' scan device has been incorporated, making it possible to obtain semi-quantitative analyses very rapidly. (authors)

  18. Model Order Reduction for Non Linear Mechanics

    Pinillo, Rubén

    2017-01-01

    Context: the automotive industry is moving towards a new generation of cars. Main idea: cars are furnished with radars, cameras, sensors, etc., providing useful information about the environment surrounding the car. Goals: provide an efficient model for the radar input/output and reduce computational costs by means of big data techniques.

  19. Identification of Influential Points in a Linear Regression Model

    Jan Grosz

    2011-03-01

    Full Text Available The article deals with the detection and identification of influential points in the linear regression model. Three methods of detection of outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. This paper briefly describes theoretical aspects of several robust methods as well. Robust statistics is a powerful tool to increase the reliability and accuracy of statistical modelling and data analysis. A simulation model of the simple linear regression is presented.
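
    A minimal Python counterpart of the leverage/outlier diagnostics this record describes, using hat values and Cook's distance from statsmodels; the cut-offs and the planted influential point are illustrative conventions, not the article's three methods or its robust estimators.

```python
# Sketch: leverage and Cook's distance for a simple linear regression, using
# statsmodels influence diagnostics on synthetic data with one planted
# high-leverage outlier.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 40)
y = 1.0 + 2.0 * x + rng.normal(0, 1.0, 40)
x[-1], y[-1] = 25.0, 10.0                        # planted influential observation

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
infl = fit.get_influence()

leverage = infl.hat_matrix_diag
cooks_d, _ = infl.cooks_distance
flag = (leverage > 2 * X.shape[1] / len(y)) | (cooks_d > 4 / len(y))   # common rules of thumb
print("suspect observations:", np.where(flag)[0])
```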

  20. Heterotic sigma models and non-linear strings

    Hull, C.M.

    1986-01-01

    The two-dimensional supersymmetric non-linear sigma models are examined with respect to the heterotic string. The paper was presented at the workshop on 'Supersymmetry and its applications', Cambridge, United Kingdom, 1985. The non-linear sigma model with Wess-Zumino-type term, the coupling of the fermionic superfields to the sigma model, super-conformal invariance, and the supersymmetric string are all discussed. (U.K.)

  1. On-line control models for the Stanford Linear Collider

    Sheppard, J.C.; Helm, R.H.; Lee, M.J.; Woodley, M.D.

    1983-03-01

    Models for computer control of the SLAC three-kilometer linear accelerator and damping rings have been developed as part of the control system for the Stanford Linear Collider. Some of these models have been tested experimentally and implemented in the control program for routine linac operations. This paper will describe the development and implementation of these models, as well as some of the operational results

  2. Modelling and Analyses of Embedded Systems Design

    Brekling, Aske Wiid

    We present the MoVES languages: a language with which embedded systems can be specified at a stage in the development process where an application is identified and should be mapped to an execution platform (potentially multi-core). We give a formal model for MoVES that captures and gives semantics to the elements of specifications in the MoVES language. We show that even for seemingly simple systems, the complexity of verifying real-time constraints can be overwhelming - but we give an upper limit to the size of the search-space that needs examining. Furthermore, the formal model exposes ... Model-based verification is a promising approach for assisting developers of embedded systems. We provide examples of system verifications that, in size and complexity, point in the direction of industrially-interesting systems.

  3. Introduction to statistical modelling: linear regression.

    Lunt, Mark

    2015-07-01

    In many studies we wish to assess how a range of variables are associated with a particular outcome and also determine the strength of such relationships so that we can begin to understand how these factors relate to each other at a population level. Ultimately, we may also be interested in predicting the outcome from a series of predictive factors available at, say, a routine clinic visit. In a recent article in Rheumatology, Desai et al. did precisely that when they studied the prediction of hip and spine BMD from hand BMD and various demographic, lifestyle, disease and therapy variables in patients with RA. This article aims to introduce the statistical methodology that can be used in such a situation and explain the meaning of some of the terms employed. It will also outline some common pitfalls encountered when performing such analyses. © The Author 2013. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Linear algebraic analyses of structures with one predominant type of anomalous scatterer

    Karle, J.

    1989-01-01

    Further studies have been made of the information content of the exact linear equations for analyzing anomalous dispersion data in one-wavelength experiments. The case of interest concerns structures containing atoms that essentially do not scatter anomalously and one type of anomalously scattering atoms. For this case, there are three alternative ways of writing the equations. The alternative sets of equations and the transformations for transforming one set into the other are given explicitly. Comparison calculations were made with different sets of equations. Isomorphous replacement information is readily introduced into the calculations and the advantage of doing so is clearly illustrated by the results. Another aspect of the potential of the exact linear algebraic theory is its application to multiple-wavelength experiments. Successful applications of the latter have been made by several collaborative groups of investigators. (orig.)

  5. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.
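
    The extended BIC that the BSR posterior mode approximates can be computed directly for small problems; the sketch below brute-forces all small subsets for a Gaussian linear model, which is only an illustration of the criterion, not the article's MCMC-based search or its GLM generality.

```python
# Sketch: exhaustive small-subset search scored by the extended BIC,
#   EBIC = n*log(RSS/n) + k*log(n) + 2*gamma*k*log(p),
# for a Gaussian linear model (brute-force illustration only).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)
n, p, gamma = 100, 30, 0.5
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[[2, 7, 11]] = [1.5, -2.0, 1.0]                       # true sparse signal
y = X @ beta + rng.normal(0, 1.0, n)

def ebic(subset):
    k = len(subset)
    if k == 0:
        rss = np.sum((y - y.mean()) ** 2)
    else:
        Xs = X[:, list(subset)]
        coef, rss_arr, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = rss_arr[0] if rss_arr.size else np.sum((y - Xs @ coef) ** 2)
    return n * np.log(rss / n) + k * np.log(n) + 2 * gamma * k * np.log(p)

# Search all subsets of size 0..3 and keep the one with the smallest EBIC.
best = min((s for k in range(4) for s in combinations(range(p), k)), key=ebic)
print("selected predictors:", best)
```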

  6. Generalized Linear Models with Applications in Engineering and the Sciences

    Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J

    2012-01-01

    Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma

  7. Avoiding Boundary Estimates in Hierarchical Linear Models through Weakly Informative Priors

    Chung, Yeojin; Rabe-Hesketh, Sophia; Gelman, Andrew; Dorie, Vincent; Liu, Jinchen

    2012-01-01

    Hierarchical or multilevel linear models are widely used for longitudinal or cross-sectional data on students nested in classes and schools, and are particularly important for estimating treatment effects in cluster-randomized trials, multi-site trials, and meta-analyses. The models can allow for variation in treatment effects, as well as…

  8. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…

  9. Overview of analytical models for the design of linear and planar motors

    Jansen, J.W.; Smeets, J.P.C.; Overboom, T.T.; Rovers, J.M.M.; Lomonova, E.A.

    2014-01-01

    In this paper, an overview of analytical techniques for the modeling of linear and planar permanent-magnet motors is given. These models can be used complementary to finite element analyses for fast evaluations of topologies, but they are indispensable for the design of magnetically levitated planar

  10. Modelling a linear PM motor including magnetic saturation

    Polinder, H.; Slootweg, J.G.; Compter, J.C.; Hoeijmakers, M.J.

    2002-01-01

    The use of linear permanent-magnet (PM) actuators increases in a wide variety of applications because of the high force density, robustness and accuracy. The paper describes the modelling of a linear PM motor applied in, for example, wafer steppers, including magnetic saturation. This is important

  11. Application of the simplex method of linear programming model to ...

    This work discussed how the simplex method of linear programming could be used to maximize the profit of any business firm, using Saclux Paint Company as a case study. It equally elucidated the effect that variation in the optimal result obtained from the linear programming model will have on any given firm. It was demonstrated ...

  12. Linear and non-linear analyses of Conner's Continuous Performance Test-II discriminate adult patients with attention deficit hyperactivity disorder from patients with mood and anxiety disorders.

    Fasmer, Ole Bernt; Mjeldheim, Kristin; Førland, Wenche; Hansen, Anita L; Syrstad, Vigdis Elin Giæver; Oedegaard, Ketil J; Berle, Jan Øystein

    2016-08-11

    Attention Deficit Hyperactivity Disorder (ADHD) is a heterogeneous disorder. Therefore it is important to look for factors that can contribute to better diagnosis and classification of these patients. The aims of the study were to characterize adult psychiatric out-patients with a mixture of mood, anxiety and attentional problems using an objective neuropsychological test of attention combined with an assessment of mood instability. Newly referred patients (n = 99; aged 18-65 years) requiring diagnostic evaluation of ADHD, mood or anxiety disorders were recruited, and were given a comprehensive diagnostic evaluation including the self-report form of the cyclothymic temperament scale and Conner's Continuous Performance Test II (CPT-II). In addition to the traditional measures from this test we have extracted raw data and analysed time series using linear and non-linear mathematical methods. Fifty patients fulfilled criteria for ADHD, while 49 did not, and were given other psychiatric diagnoses (clinical controls). When compared to the clinical controls the ADHD patients had more omission and commission errors, and higher reaction time variability. Analyses of response times showed higher values for skewness in the ADHD patients, and lower values for sample entropy and symbolic dynamics. Among the ADHD patients 59 % fulfilled criteria for a cyclothymic temperament, and this group had higher reaction time variability and lower scores on complexity than the group without this temperament. The CPT-II is a useful instrument in the assessment of ADHD in adult patients. Additional information from this test was obtained by analyzing response times using linear and non-linear methods, and this showed that ADHD patients with a cyclothymic temperament were different from those without this temperament.
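
    Sample entropy, one of the non-linear measures applied to the CPT-II response-time series, can be computed in a few lines; the embedding dimension, tolerance and synthetic series below are conventional illustrative choices, not necessarily those used in the study.

```python
# Sketch: sample entropy of a response-time series. Embedding m = 2 and
# tolerance r = 0.2*SD are conventional defaults; lower values indicate a
# more regular (less complex) series.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        # Chebyshev distance between all pairs of templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (dist <= r).sum() - len(templates)       # exclude self-matches

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(7)
rt_regular = 400 + 20 * np.sin(np.arange(300) / 5.0)    # highly regular response times (ms)
rt_variable = 400 + rng.normal(0, 20, 300)              # irregular response times
print(sample_entropy(rt_regular), sample_entropy(rt_variable))
```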

  13. Applications of Historical Analyses in Combat Modelling

    2011-12-01

    causes of those results [2]. Models can be classified into three descriptive types [8], according to the degree of abstraction required:  iconic ...22 ( A5 ) Hence the first bracket of Equation A3 is zero. Therefore:         022 22 22 122 22 22 2      o o o o xx xxy yy yyx...A6) Equation A5 can also be used to replace the term in Equation A6, leaving: 22 oxx     222 2 22 1 2222 2 oo xxa byyyx

  14. Linear mixed-effects modeling approach to FMRI group analysis.

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure is violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even at the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity
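
    The ICC mentioned near the end can be read off the variance components of a fitted mixed model; the sketch below shows the single-grouping-factor case only, which is far simpler than the crossed-random-effects neuroimaging models the authors describe.

```python
# Sketch: intraclass correlation from a random-intercept model,
#   ICC = var(subject) / (var(subject) + var(residual)).
# Single grouping factor only; crossed random effects need richer models.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n_subj, n_rep = 25, 4
subj = np.repeat(np.arange(n_subj), n_rep)
effect = rng.normal(0, 1.5, n_subj)                       # between-subject variability
y = 10 + effect[subj] + rng.normal(0, 1.0, subj.size)     # within-subject noise
df = pd.DataFrame({"y": y, "subject": subj})

fit = smf.mixedlm("y ~ 1", data=df, groups=df["subject"]).fit()
var_between = float(fit.cov_re.iloc[0, 0])                # random-intercept variance
var_within = fit.scale                                    # residual variance
icc = var_between / (var_between + var_within)
print(f"ICC = {icc:.2f}")
```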

  15. Genetic parameters for racing records in trotters using linear and generalized linear models.

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M

    2012-09-01

    Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.

  16. Radiobiological analyse based on cell cluster models

    Lin Hui; Jing Jia; Meng Damin; Xu Yuanying; Xu Liangfeng

    2010-01-01

    The influence of cell cluster dimension on EUD and TCP for targeted radionuclide therapy was studied using the radiobiological method. The radiobiological features of a tumor with an activity-lack in its core were evaluated and analyzed by associating EUD, TCP and SF. The results show that EUD will increase with the increase of tumor dimension under a homogeneous activity distribution. If the extra-cellular activity is taken into consideration, the EUD will increase by 47%. Under activity-lack in the tumor center and the requirement of TCP = 0.90, the α cross-fire influence of ²¹¹At could make up for a maximum (48 μm)³ activity-lack for the Nucleus source, but (72 μm)³ for the Cytoplasm, Cell Surface, Cell and Voxel sources. In clinic, the physician could prefer the suggested dose of the Cell Surface source in case of the failure of local tumor control from under-dose. Generally TCP could well exhibit the effect difference between under-dose and due-dose, but not between due-dose and over-dose, which makes TCP more suitable for the therapy plan choice. EUD could well exhibit the difference between different models and activity distributions, which makes it more suitable for research work. When the user uses EUD to study the influence of inhomogeneous activity distribution, one should keep the consistency of the configuration and volume between the former and the latter models. (authors)

  17. Linear approximation model network and its formation via ...

    To overcome the deficiency of 'local model network' (LMN) techniques, an alternative 'linear approximation model' (LAM) network approach is ... network is LPV, existing LTI theory is difficult to apply (Kailath 1980) ...

  18. Sphaleron in a non-linear sigma model

    Sogo, Kiyoshi; Fujimoto, Yasushi.

    1989-08-01

    We present an exact classical saddle point solution in a non-linear sigma model. It has a topological charge 1/2 and mediates the vacuum transition. The quantum fluctuations and the transition rate are also examined. (author)

  19. On D-branes from gauged linear sigma models

    Govindarajan, S.; Jayaraman, T.; Sarkar, T.

    2001-01-01

    We study both A-type and B-type D-branes in the gauged linear sigma model by considering worldsheets with boundary. The boundary conditions on the matter and vector multiplet fields are first considered in the large-volume phase/non-linear sigma model limit of the corresponding Calabi-Yau manifold, where we find that we need to add a contact term on the boundary. These considerations enable us to derive the boundary conditions in the full gauged linear sigma model, including the addition of the appropriate boundary contact terms, such that these boundary conditions have the correct non-linear sigma model limit. Most of the analysis is for the case of Calabi-Yau manifolds with one Kaehler modulus (including those corresponding to hypersurfaces in weighted projective space), though we comment on possible generalisations.

  20. Optimization for decision making linear and quadratic models

    Murty, Katta G

    2010-01-01

    While maintaining the rigorous linear programming instruction required, Murty's new book is unique in its focus on developing modeling skills to support valid decision-making for complex real world problems, and includes solutions to brand new algorithms.

  1. Study of linear induction motor characteristics : the Mosebach model

    1976-05-31

    This report covers the Mosebach theory of the double-sided linear induction motor, starting with the idealized model and accompanying assumptions, and ending with relations for thrust, airgap power, and motor efficiency. Solutions of the magnetic in...

  2. Study of linear induction motor characteristics : the Oberretl model

    1975-05-30

    The Oberretl theory of the double-sided linear induction motor (LIM) is examined, starting with the idealized model and accompanying assumptions, and ending with relations for predicted thrust, airgap power, and motor efficiency. The effect of varyin...

  3. Linear Analyses of Magnetohydrodynamic Richtmyer-Meshkov Instability in Cylindrical Geometry

    Bakhsh, Abeer

    2018-01-01

    inverse Laplace transform. The incompressible models show that the magnetic field suppresses the RMI, and the mechanism of this suppression depends on the orientation of the initially applied magnetic field. The incompressible model agrees reasonably well

  4. Optimizing Biorefinery Design and Operations via Linear Programming Models

    Talmadge, Michael; Batan, Liaw; Lamers, Patrick; Hartley, Damon; Biddy, Mary; Tao, Ling; Tan, Eric

    2017-03-28

    The ability to assess and optimize economics of biomass resource utilization for the production of fuels, chemicals and power is essential for the ultimate success of a bioenergy industry. The team of authors, consisting of members from the National Renewable Energy Laboratory (NREL) and the Idaho National Laboratory (INL), has developed simple biorefinery linear programming (LP) models to enable the optimization of theoretical or existing biorefineries. The goal of this analysis is to demonstrate how such models can benefit the developing biorefining industry. It focuses on a theoretical multi-pathway, thermochemical biorefinery configuration and demonstrates how the biorefinery can use LP models for operations planning and optimization in comparable ways to the petroleum refining industry. Using LP modeling tools developed under U.S. Department of Energy's Bioenergy Technologies Office (DOE-BETO) funded efforts, the authors investigate optimization challenges for the theoretical biorefineries such as (1) optimal feedstock slate based on available biomass and prices, (2) breakeven price analysis for available feedstocks, (3) impact analysis for changes in feedstock costs and product prices, (4) optimal biorefinery operations during unit shutdowns / turnarounds, and (5) incentives for increased processing capacity. These biorefinery examples are comparable to crude oil purchasing and operational optimization studies that petroleum refiners perform routinely using LPs and other optimization models. It is important to note that the analyses presented in this article are strictly theoretical and they are not based on current energy market prices. The pricing structure assigned for this demonstrative analysis is consistent with $4 per gallon gasoline, which clearly assumes an economic environment that would favor the construction and operation of biorefineries. The analysis approach and examples provide valuable insights into the usefulness of analysis tools for
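
    As a flavour of the feedstock-slate question described above, here is a toy LP in SciPy; every feedstock name, price, yield and capacity is invented for illustration and bears no relation to the NREL/INL models or to real market prices.

```python
# Toy feedstock-slate LP: choose tonnes per day of each feedstock to maximize
# margin subject to plant throughput and feedstock availability. All numbers
# are assumptions for illustration only.
from scipy.optimize import linprog

feedstocks = ["pine residue", "corn stover", "switchgrass"]
cost = [55.0, 70.0, 85.0]           # $/tonne delivered (assumed)
fuel_yield = [0.28, 0.24, 0.26]     # tonne fuel per tonne feedstock (assumed)
fuel_price = 900.0                  # $/tonne fuel (assumed)

# linprog minimizes, so negate the per-tonne margin of each feedstock
c = [-(fuel_price * y - k) for y, k in zip(fuel_yield, cost)]

A_ub = [[1.0, 1.0, 1.0]]            # total throughput constraint
b_ub = [2000.0]                     # tonnes/day plant capacity (assumed)
bounds = [(0, 1200), (0, 900), (0, 600)]   # per-feedstock availability (assumed)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
for name, x in zip(feedstocks, res.x):
    print(f"{name}: {x:7.0f} t/day")
print(f"daily margin: ${-res.fun:,.0f}")
```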

  5. Optimization Research of Generation Investment Based on Linear Programming Model

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operational research and a mathematical method that supports scientific management. GAMS is an advanced simulation and optimization modeling language that combines complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, optimized generation investment decisions are simulated and analyzed on the basis of a linear programming model. Finally, the optimal installed capacity of the power plants and the total cost are obtained, which provides a rational basis for optimized investment decisions.

  6. Generalized linear mixed models modern concepts, methods and applications

    Stroup, Walter W

    2012-01-01

    PART I: The Big Picture; Modeling Basics; What Is a Model?; Two Model Forms: Model Equation and Probability Distribution; Types of Model Effects; Writing Models in Matrix Form; Summary: Essential Elements for a Complete Statement of the Model; Design Matters; Introductory Ideas for Translating Design and Objectives into Models; Describing "Data Architecture" to Facilitate Model Specification; From Plot Plan to Linear Predictor; Distribution Matters; More Complex Example: Multiple Factors with Different Units of Replication; Setting the Stage; Goals for Inference with Models: Overview; Basic Tools of Inference; Issue I: Data

  7. Unification of three linear models for the transient visual system

    Brinker, den A.C.

    1989-01-01

    Three different linear filters are considered as a model describing the experimentally determined triphasic impulse responses of discs. These impulse responses are associated with the transient visual system. Each model reveals a different feature of the system. Unification of the models is

  8. A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING

    ANTOULAS, AC; WILLEMS, JC

    1993-01-01

    The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both

  9. Linear regression metamodeling as a tool to summarize and present simulation model results.

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
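
    A minimal sketch of the metamodeling step is given below: draw PSA samples, compute an outcome, standardize the inputs and regress. The three inputs, the stand-in outcome formula and the willingness-to-pay figure are placeholders, not the cancer cure model of the paper; in a real application the outcome column would come from the decision-analytic model itself.

```python
# Linear regression metamodel over PSA draws: the intercept approximates the
# base-case outcome and each slope is the change per 1 SD of an input.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical PSA samples of three model inputs
p_cure = rng.beta(20, 30, n)            # probability of cure
cost_tx = rng.gamma(50, 200, n)         # treatment cost, $
utility = rng.beta(40, 10, n)           # post-cure utility

# Stand-in incremental net benefit; in practice this is the simulated outcome
inb = 50_000 * p_cure * utility - cost_tx + rng.normal(0, 500, n)

X = np.column_stack([p_cure, cost_tx, utility])
X_std = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize the inputs
fit = sm.OLS(inb, sm.add_constant(X_std)).fit()
print(fit.params)   # [base case, effect per SD of p_cure, cost_tx, utility]
```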

  10. Linearized models for a new magnetic control in MAST

    Artaserse, G., E-mail: giovanni.artaserse@enea.it [Associazione Euratom-ENEA sulla Fusione, Via Enrico Fermi 45, I-00044 Frascati (RM) (Italy); Maviglia, F.; Albanese, R. [Associazione Euratom-ENEA-CREATE sulla Fusione, Via Claudio 21, I-80125 Napoli (Italy); McArdle, G.J.; Pangione, L. [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom)

    2013-10-15

    Highlights: ► We applied linearized models for a new magnetic control on the MAST tokamak. ► A suite of procedures, conceived to be machine independent, has been used. ► We carried out model-based simulations, taking into account eddy current effects. ► Comparisons with the EFIT flux maps and the experimental magnetic signals are shown. ► A current-driven model has been used for dynamic simulations of the experimental data. -- Abstract: The aim of this work is to provide reliable linearized models for the design and assessment of a new magnetic control system for MAST (Mega Ampère Spherical Tokamak) using rtEFIT, which can easily be exported to MAST Upgrade. Linearized models for magnetic control have been obtained using the 2D axisymmetric finite element code CREATE L. The MAST linearized models include an equivalent 2D axisymmetric schematization of the poloidal field (PF) coils, the vacuum vessel, and other conducting structures. A plasmaless and a double null configuration have been chosen as benchmark cases for the comparison with experimental data and EFIT reconstructions. Good agreement has been found with the EFIT flux map and with the experimental signals coming from the magnetic probes, with only a few mismatches probably due to broken sensors. A suite of procedures (equipped with a user-friendly interface that can be run even remotely) to provide linearized models for magnetic control is now available on the MAST Linux machines. A new current-driven model has been used to obtain a state-space model having the PF coil currents as inputs. Dynamic simulations of experimental data have been carried out using the linearized models, including modelling of the effects of the passive structures, showing fair agreement. The modelling activity has also been useful to accurately reproduce the interaction between the plasma current and radial position control loops.

  11. Linearized models for a new magnetic control in MAST

    Artaserse, G.; Maviglia, F.; Albanese, R.; McArdle, G.J.; Pangione, L.

    2013-01-01

    Highlights: ► We applied linearized models for a new magnetic control on the MAST tokamak. ► A suite of procedures, conceived to be machine independent, has been used. ► We carried out model-based simulations, taking into account eddy current effects. ► Comparisons with the EFIT flux maps and the experimental magnetic signals are shown. ► A current-driven model has been used for dynamic simulations of the experimental data. -- Abstract: The aim of this work is to provide reliable linearized models for the design and assessment of a new magnetic control system for MAST (Mega Ampère Spherical Tokamak) using rtEFIT, which can easily be exported to MAST Upgrade. Linearized models for magnetic control have been obtained using the 2D axisymmetric finite element code CREATE L. The MAST linearized models include an equivalent 2D axisymmetric schematization of the poloidal field (PF) coils, the vacuum vessel, and other conducting structures. A plasmaless and a double null configuration have been chosen as benchmark cases for the comparison with experimental data and EFIT reconstructions. Good agreement has been found with the EFIT flux map and with the experimental signals coming from the magnetic probes, with only a few mismatches probably due to broken sensors. A suite of procedures (equipped with a user-friendly interface that can be run even remotely) to provide linearized models for magnetic control is now available on the MAST Linux machines. A new current-driven model has been used to obtain a state-space model having the PF coil currents as inputs. Dynamic simulations of experimental data have been carried out using the linearized models, including modelling of the effects of the passive structures, showing fair agreement. The modelling activity has also been useful to accurately reproduce the interaction between the plasma current and radial position control loops.

  12. H∞/H2 model reduction through dilated linear matrix inequalities

    Adegas, Fabiano Daher; Stoustrup, Jakob

    2012-01-01

    This paper presents sufficient dilated linear matrix inequalities (LMI) conditions to the $H_{\infty}$ and $H_{2}$ model reduction problem. A special structure of the auxiliary (slack) variables allows the original model of order $n$ to be reduced to an order $r = n/s$ where $n, r, s \in \mathbb{N}$. Arb…

  13. Non-linear finite element analyses of wide plate fracture mechanics experiments

    Harrop, L.P.; Gibson, S.

    1988-06-01

    A series of centre-cracked, wide plate fracture mechanics tests is being conducted with plates made from 0.36% carbon steel. This report gives an account of post-test finite element analyses performed to compare with the results of one of these tests (designated CSTP4) and a pre-test analysis of the next test which has a slightly different geometry (CSTP5). The plates are relatively thick (75mm) and have a width of 1.62m. The finite element analyses use a two-dimensional plane stress mesh. The work shows good agreement between the post-test analysis results and the overall experimental results for CSTP4. It is not expected that the analysis results will be accurate within the dimensions of the process zone ahead of the crack tip; the mesh is not sufficient for this. A vital ingredient in attaining the good overall agreement is the representation of the actual stress-strain curve of the material. The predicted response of test CSTP5 is markedly different from that of CSTP4 even though the only change is the increase in the height of the plate. In particular the shape and size of the plastic zone ahead of the crack tip is quite different in the two tests at the same nominal remote applied load. (author)

  14. Numerical insight into the seismic behavior of eight masonry towers in Northern Italy: FE pushover vs non-linear dynamic analyses

    Milani, Gabriele, E-mail: milani@stru.polimi.it, E-mail: gabriele.milani@polimi.it; Valente, Marco [Department of Architecture, Built Environment and Construction Engineering (ABC), Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan (Italy)

    2015-12-31

    This study presents some FE results regarding the behavior under horizontal loads of eight existing masonry towers located in the North-East of Italy. The towers, albeit unique for geometric and architectural features, show some affinities which justify a comparative analysis, as for instance the location and the similar masonry material. Their structural behavior under horizontal loads is therefore influenced by geometrical issues, such as slenderness, walls thickness, perforations, irregularities, presence of internal vaults, etc., all features which may be responsible for a peculiar output. The geometry of the towers is deduced from both existing available documentation and in-situ surveys. On the basis of such geometrical data, a detailed 3D realistic mesh is conceived, with a point by point characterization of each single geometric element. The FE models are analysed under seismic loads acting along geometric axes of the plan section, both under non-linear static (pushover) and non-linear dynamic excitation assumptions. A damage-plasticity material model exhibiting softening in both tension and compression, already available in the commercial code Abaqus, is used for masonry. Pushover analyses are performed with both G1 and G2 horizontal loads distribution, according to Italian code requirements, along X+/− and Y+/− directions. Non-linear dynamic analyses are performed along both X and Y directions with a real accelerogram scaled to different peak ground accelerations. Some few results are presented in this paper. It is found that the results obtained with pushover analyses reasonably well fit expensive non-linear dynamic simulations, with a slightly less conservative trend.

  15. Numerical insight into the seismic behavior of eight masonry towers in Northern Italy: FE pushover vs non-linear dynamic analyses

    Milani, Gabriele; Valente, Marco

    2015-01-01

    This study presents some FE results regarding the behavior under horizontal loads of eight existing masonry towers located in the North-East of Italy. The towers, albeit unique for geometric and architectural features, show some affinities which justify a comparative analysis, as for instance the location and the similar masonry material. Their structural behavior under horizontal loads is therefore influenced by geometrical issues, such as slenderness, walls thickness, perforations, irregularities, presence of internal vaults, etc., all features which may be responsible for a peculiar output. The geometry of the towers is deduced from both existing available documentation and in-situ surveys. On the basis of such geometrical data, a detailed 3D realistic mesh is conceived, with a point by point characterization of each single geometric element. The FE models are analysed under seismic loads acting along geometric axes of the plan section, both under non-linear static (pushover) and non-linear dynamic excitation assumptions. A damage-plasticity material model exhibiting softening in both tension and compression, already available in the commercial code Abaqus, is used for masonry. Pushover analyses are performed with both G1 and G2 horizontal loads distribution, according to Italian code requirements, along X+/− and Y+/− directions. Non-linear dynamic analyses are performed along both X and Y directions with a real accelerogram scaled to different peak ground accelerations. Some few results are presented in this paper. It is found that the results obtained with pushover analyses reasonably well fit expensive non-linear dynamic simulations, with a slightly less conservative trend

  16. Non-linear Growth Models in Mplus and SAS

    Grimm, Kevin J.; Ram, Nilam

    2013-01-01

    Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
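
    As a pared-down illustration of fitting one of the sigmoid forms mentioned above, the snippet below fits a Gompertz curve to a single synthetic score series with SciPy; it omits the random effects that Mplus and NLMIXED estimate, and the data and starting values are invented.

```python
# Gompertz growth curve fit for a single series; mixed-effects extensions
# (per-child random asymptotes, etc.) are not shown here.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, asymptote, displacement, rate):
    return asymptote * np.exp(-displacement * np.exp(-rate * t))

t = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)           # measurement occasions
y = np.array([12, 20, 33, 46, 55, 60, 62], dtype=float)    # achievement scores (invented)

params, _ = curve_fit(gompertz, t, y, p0=[65.0, 1.5, 0.6])
print(dict(zip(["asymptote", "displacement", "rate"], np.round(params, 3))))
```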

  17. Variance Function Partially Linear Single-Index Models.

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  18. Investigation of pile foundations of nuclear power plants with help of non linear analyses

    Diaz, B.E.; Schulz, M.; Costa, E.; Vaz, L.E.

    1984-01-01

    A few important 1300 MW PWR nuclear power plants have been built on pile foundations. The design requirements of nuclear power plants prescribe an accurate investigation of the as-built conditions of the foundation. This study must take into account the actual concrete strength existing among and along the pile shafts of the foundation. In order to simulate the structural response of the foundation up to failure, a non-linear analysis must be performed. In this paper, the required computer analysis procedures are described. It can be verified that the redistribution of the internal forces in this highly hyperstatic soil-structure system is of two types: the total applied forces on the foundation are redistributed among the piles, and for each pile a local redistribution of forces takes place along the pile shaft. This type of analysis allows an accurate investigation of the actual safety margin of the pile foundation, based on the as-built conditions of the construction. (Author) [pt

  19. Comparison between linear quadratic and early time dose models

    Chougule, A.A.; Supe, S.J.

    1993-01-01

    During the 70s, much interest was focused on fractionation in radiotherapy with the aim of improving tumor control rate without producing unacceptable normal tissue damage. To compare the radiobiological effectiveness of various fractionation schedules, empirical formulae such as Nominal Standard Dose, Time Dose Factor, Cumulative Radiation Effect and Tumour Significant Dose, were introduced and were used despite many shortcomings. It has been claimed that a recent linear quadratic model is able to predict the radiobiological responses of tumours as well as normal tissues more accurately. We compared Time Dose Factor and Tumour Significant Dose models with the linear quadratic model for tumour regression in patients with carcinomas of the cervix. It was observed that the prediction of tumour regression estimated by the Tumour Significant Dose and Time Dose factor concepts varied by 1.6% from that of the linear quadratic model prediction. In view of the lack of knowledge of the precise values of the parameters of the linear quadratic model, it should be applied with caution. One can continue to use the Time Dose Factor concept which has been in use for more than a decade as its results are within ±2% as compared to that predicted by the linear quadratic model. (author). 11 refs., 3 figs., 4 tabs
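
    For readers who want to reproduce this kind of comparison, the usual linear-quadratic bookkeeping goes through the biologically effective dose, BED = n·d·(1 + d/(α/β)). The sketch below uses an assumed α/β and two example schedules that are not taken from the paper.

```python
# Linear-quadratic comparison of fractionation schedules via BED; the alpha/beta
# ratio and the two schedules below are illustrative assumptions.
def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """BED = n * d * (1 + d / (alpha/beta)), the standard LQ isoeffect measure."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

alpha_beta_tumour = 10.0   # Gy, a common assumption for tumour response
print(bed(30, 2.00, alpha_beta_tumour))   # 30 x 2.00 Gy conventional schedule
print(bed(20, 2.75, alpha_beta_tumour))   # 20 x 2.75 Gy hypofractionated schedule
```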

  20. Phylogenetic mixtures and linear invariants for equal input models.

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
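
    One convenient way to see the 'random cluster' description is through the closed-form transition matrix it induces: along a branch, the state is either kept or resampled from the stationary distribution. The sketch below uses an assumed four-state stationary distribution and branch length; it is one common parameterization of the equal input model, not the paper's derivation of the mixture and invariant spaces.

```python
# Equal input model transition matrix in the random-cluster parameterization:
# with probability exp(-t) the state is retained, otherwise it is redrawn from
# the stationary distribution pi. Values of pi and t are illustrative.
import numpy as np

pi = np.array([0.1, 0.2, 0.3, 0.4])    # stationary distribution (assumed)
t = 0.5                                # branch length in expected resampling events

P = np.exp(-t) * np.eye(len(pi)) + (1.0 - np.exp(-t)) * np.outer(np.ones(len(pi)), pi)

assert np.allclose(P.sum(axis=1), 1.0)   # rows are proper probability vectors
assert np.allclose(pi @ P, pi)           # pi is stationary under P
print(P)
```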

  1. Non-linear calibration models for near infrared spectroscopy

    Ni, Wangdong; Nørgaard, Lars; Mørup, Morten

    2014-01-01

    … (LS-SVM), relevance vector machines (RVM), Gaussian process regression (GPR), artificial neural networks (ANN), and Bayesian ANN (BANN). In this comparison, partial least squares (PLS) regression is used as a linear benchmark, while the relationship of the methods is considered in terms of traditional calibration by ridge regression (RR). The performance of the different methods is demonstrated by their practical applications using three real-life near infrared (NIR) data sets. Different aspects of the various approaches, including computational time, model interpretability, potential over-fitting using the non-linear models on linear problems, robustness to small or medium sample sets, and robustness to pre-processing, are discussed. The results suggest that GPR and BANN are powerful and promising methods for handling linear as well as nonlinear systems, even when the data sets are moderately small. The LS…

  2. Estimation and variable selection for generalized additive partial linear models

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  3. Effect of linear and non-linear blade modelling techniques on simulated fatigue and extreme loads using Bladed

    Beardsell, Alec; Collier, William; Han, Tao

    2016-09-01

    There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise- torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.

  4. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
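
    A compact sketch of the contrast the authors draw is given below: the same fixed-effect formula fitted once while ignoring clustering and once with a random intercept per animal. The file and the columns animal, group, radius and intersections are hypothetical stand-ins for a long-format Sholl data set.

```python
# Simple linear model vs. linear mixed model on clustered Sholl-type data;
# the data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sholl_data.csv")   # one row per neuron x radius, hypothetical

# Naive model: treats every neuron as independent, so standard errors for the
# group effect tend to be too small.
ols_fit = smf.ols("intersections ~ group + radius", data=df).fit()

# Mixed model: a random intercept per animal absorbs intra-class correlation
# and yields more honest standard errors and p-values.
lmm_fit = smf.mixedlm("intersections ~ group + radius",
                      data=df, groups=df["animal"]).fit(reml=True)

print(ols_fit.summary())
print(lmm_fit.summary())
```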

  5. Matrix model and time-like linear dilaton matter

    Takayanagi, Tadashi

    2004-01-01

    We consider a matrix model description of the 2d string theory whose matter part is given by a time-like linear dilaton CFT. This is equivalent to the c=1 matrix model with a deformed, but very simple, Fermi surface. Indeed, after a Lorentz transformation, the corresponding 2d spacetime is a conventional linear dilaton background with a time-dependent tachyon field. We show that the tree-level scattering amplitudes in the matrix model perfectly agree with those computed in the world-sheet theory. The classical trajectories of fermions correspond to the decaying D-branes in the time-like linear dilaton CFT. We also discuss the ground ring structure. Furthermore, we study the properties of the time-like Liouville theory by applying this matrix model description. We find that its ground ring structure is very similar to that of the minimal string. (author)

  6. Vortices, semi-local vortices in gauged linear sigma model

    Kim, Namkwon

    1998-11-01

    We consider the static (2+1)D gauged linear sigma model. By analyzing the governing system of partial differential equations, we investigate various aspects of the model. We show the existence of finite-energy vortices under a partially broken symmetry on R², with the necessary condition suggested by Y. Yang. We also introduce generalized semi-local vortices and show the existence of finite-energy semi-local vortices under a certain condition. The vacuum manifold for the semi-local vortices turns out to be graded. Moreover, with a special choice of representation, we show that the O(3) sigma model, whose target space is nonlinear, is a singular limit of the gauged linear sigma model, whose target space is linear. (author)

  7. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  8. Linear mixed models a practical guide using statistical software

    West, Brady T; Galecki, Andrzej T

    2006-01-01

    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo

  9. Asymptotic analysis of a stochastic non-linear nuclear reactor model

    Rodriguez, M.A.; Sancho, J.M.

    1986-01-01

    The asymptotic behaviour of a stochastic non-linear nuclear reactor modelled by a master equation is analysed in two different limits: the thermodynamic limit and the zero-neutron-source limit. In the first limit a finite steady neutron density is obtained. The second limit predicts the neutron extinction. The interplay between these two limits is studied for different situations. (author)

  10. Scale of association: hierarchical linear models and the measurement of ecological systems

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...

  11. Stochastic Finite Element Analysis of Non-Linear Structures Modelled by Plasticity Theory

    Frier, Christian; Sørensen, John Dalsgaard

    2003-01-01

    A Finite Element Reliability Method (FERM) is introduced to perform reliability analyses on two-dimensional structures in plane stress, modeled by non-linear plasticity theory. FERM is a coupling between the First Order Reliability Method (FORM) and the Finite Element Method (FEM). FERM can be us...

  12. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  13. Optical linear algebra processors - Noise and error-source modeling

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  14. Optical linear algebra processors: noise and error-source modeling.

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  15. CONTRIBUTIONS TO THE FINITE ELEMENT MODELING OF LINEAR ULTRASONIC MOTORS

    Oana CHIVU

    2013-05-01

    The present paper is concerned with the main modeling elements, produced by means of the finite element method, of linear ultrasonic motors. Hence, first the model is designed, and then modal and harmonic analyses are carried out with a view to outlining the main outcomes.

  16. Linear and Nonlinear Career Models: Metaphors, Paradigms, and Ideologies.

    Buzzanell, Patrice M.; Goldzwig, Steven R.

    1991-01-01

    Examines the linear or bureaucratic career models (dominant in career research, metaphors, paradigms, and ideologies) which maintain career myths of flexibility and individualized routes to success in organizations incapable of offering such versatility. Describes nonlinear career models which offer suggestive metaphors for re-visioning careers…

  17. Study of Piezoelectric Vibration Energy Harvester with non-linear conditioning circuit using an integrated model

    Manzoor, Ali; Rafique, Sajid; Usman Iftikhar, Muhammad; Mahmood Ul Hassan, Khalid; Nasir, Ali

    2017-08-01

    A piezoelectric vibration energy harvester (PVEH) consists of a cantilever bimorph with piezoelectric layers bonded to its top and bottom, which can harvest power from vibrations and feed it to low-power wireless sensor nodes through a power conditioning circuit. In this paper, a non-linear conditioning circuit, consisting of a full-bridge rectifier followed by a buck-boost converter, is employed to investigate the issues on the electrical side of the energy harvesting system. An integrated mathematical model of the complete electromechanical system has been developed. Previously, researchers have studied PVEH with sophisticated piezo-beam models but employed simplistic linear circuits, such as a resistor, as the electrical load. In contrast, other researchers have worked on more complex non-linear circuits but with over-simplified piezo-beam models. Such models neglect different aspects of the system which result from complex interactions of its electrical and mechanical subsystems. In this work, the authors have integrated the distributed-parameter model of the piezo-beam presented in the literature with a real-world non-linear electrical load. The developed integrated model is then employed to analyse the stability of the complete energy harvesting system. This work provides a more realistic and useful electromechanical model having a realistic non-linear electrical load, unlike the simplistic linear circuit elements employed by many researchers.

  18. Low-energy limit of the extended Linear Sigma Model

    Divotgey, Florian [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Kovacs, Peter [Wigner Research Center for Physics, Hungarian Academy of Sciences, Institute for Particle and Nuclear Physics, Budapest (Hungary); GSI Helmholtzzentrum fuer Schwerionenforschung, ExtreMe Matter Institute, Darmstadt (Germany); Giacosa, Francesco [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Jan-Kochanowski University, Institute of Physics, Kielce (Poland); Rischke, Dirk H. [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); University of Science and Technology of China, Interdisciplinary Center for Theoretical Study and Department of Modern Physics, Hefei, Anhui (China)

    2018-01-15

    The extended Linear Sigma Model is an effective hadronic model based on the linear realization of chiral symmetry SU(N_f)_L x SU(N_f)_R, with (pseudo)scalar and (axial-)vector mesons as degrees of freedom. In this paper, we study the low-energy limit of the extended Linear Sigma Model (eLSM) for N_f flavors by integrating out all fields except for the pions, the (pseudo-)Nambu-Goldstone bosons of chiral symmetry breaking. The resulting low-energy effective action is identical to Chiral Perturbation Theory (ChPT) after choosing a representative for the coset space generated by chiral symmetry breaking and expanding it in powers of (derivatives of) the pion fields. The tree-level values of the coupling constants of the effective low-energy action agree remarkably well with those of ChPT. (orig.)

  19. Linear Power-Flow Models in Multiphase Distribution Networks: Preprint

    Bernstein, Andrey; Dall' Anese, Emiliano

    2017-05-26

    This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus-voltages, line-currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally-affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.

  20. NUMERICAL MODELLING AS NON-DESTRUCTIVE METHOD FOR THE ANALYSES AND DIAGNOSIS OF STONE STRUCTURES: MODELS AND POSSIBILITIES

    Nataša Štambuk-Cvitanović

    1999-12-01

    Given the need for analysis, diagnosis and preservation of valuable existing stone masonry structures and ancient monuments in today's European urban cores, numerical modelling has become an efficient tool for investigating structural behaviour. It should be supported by experimentally obtained input data and taken as part of a general combined approach that includes non-destructive techniques on the structure or on a model of it. For structures or details that may require more complex analyses, three numerical models based on the finite element technique are suggested: (1) a standard linear model; (2) a linear model with contact (interface) elements; and (3) a non-linear elasto-plastic and orthotropic model. The applicability of these models depends upon the accuracy of the approach or the type of problem, and will be presented on some characteristic examples.

  1. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
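
    A small sketch of the fixed-knot case is shown below: truncated power terms make the time trend piecewise linear while staying continuous at the knots, and the whole construction drops into a standard mixed-model fit. The knot locations, file name and columns (subject_id, day, log_viral_load) are hypothetical, and the explicit side-condition/reparameterization machinery of the paper is not reproduced here.

```python
# Fixed-knot regression spline (truncated power basis) inside a linear mixed
# model; knots, data file and column names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("viral_load.csv")      # hypothetical longitudinal data set
knots = [14.0, 56.0]                    # days where the linear segments join (assumed)

# Each truncated power term (day - knot)_+ adds a change of slope at its knot
# while keeping the fitted curve continuous.
for i, k in enumerate(knots, start=1):
    df[f"tp{i}"] = np.clip(df["day"] - k, 0.0, None)

model = smf.mixedlm("log_viral_load ~ day + tp1 + tp2",
                    data=df, groups=df["subject_id"],
                    re_formula="~day")   # random intercept and slope per subject
print(model.fit(reml=True).summary())
```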

  2. Modelling and measurement of a moving magnet linear compressor performance

    Liang, Kun; Stone, Richard; Davies, Gareth; Dadd, Mike; Bailey, Paul

    2014-01-01

    A novel moving magnet linear compressor with clearance seals and flexure bearings has been designed and constructed. It is suitable for a refrigeration system with a compact heat exchanger, such as would be needed for CPU cooling. The performance of the compressor has been experimentally evaluated with nitrogen and a mathematical model has been developed to evaluate the performance of the linear compressor. The results from the compressor model and the measurements have been compared in terms of cylinder pressure, the ‘P–V’ loop, stroke, mass flow rate and shaft power. The cylinder pressure was not measured directly but was derived from the compressor dynamics and the motor magnetic force characteristics. The comparisons indicate that the compressor model is well validated and can be used to study the performance of this type of compressor, to help with design optimization and the identification of key parameters affecting the system transients. The electrical and thermodynamic losses were also investigated, particularly for the design point (stroke of 13 mm and pressure ratio of 3.0), since a full understanding of these can lead to an increase in compressor efficiency. - Highlights: • Model predictions of the performance of a novel moving magnet linear compressor. • Prototype linear compressor performance measurements using nitrogen. • Reconstruction of P–V loops using a model of the dynamics and electromagnetics. • Close agreement between the model and measurements for the P–V loops. • The design point motor efficiency was 74%, with potential improvements identified

  3. The minimal linear σ model for the Goldstone Higgs

    Feruglio, F.; Gavela, M.B.; Kanshin, K.; Machado, P.A.N.; Rigolin, S.; Saa, S.

    2016-01-01

    In the context of the minimal SO(5) linear σ-model, a complete renormalizable Lagrangian, including gauge bosons and fermions, is considered, with the symmetry softly broken to SO(4). The scalar sector describes both the electroweak Higgs doublet and the singlet σ. Varying the σ mass would allow one to sweep from the regime of perturbative ultraviolet completion to the non-linear regime assumed in models in which the Higgs particle is a low-energy remnant of some strong dynamics. We analyze the phenomenological implications and constraints from precision observables and LHC data. Furthermore, we derive the d≤6 effective Lagrangian in the limit of heavy exotic fermions.

  4. A variational formulation for linear models in coupled dynamic thermoelasticity

    Feijoo, R.A.; Moura, C.A. de.

    1981-07-01

    A variational formulation for linear models in coupled dynamic thermoelasticity, which quite naturally motivates the design of a numerical scheme for the problem, is studied. When linked to regularization or penalization techniques, this algorithm may be applied to more general models, namely those that consider non-linear constraints associated with variational inequalities. The basic postulates of Mechanics and Thermodynamics, as well as some well-known mathematical techniques, are described. A thorough description of the algorithm implementation with the finite-element method is also provided. Proofs of existence and uniqueness of solutions and of convergence of the approximations are presented, and some numerical results are exhibited. (Author) [pt

  5. Comparison of linear, mixed integer and non-linear programming methods in energy system dispatch modelling

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2014-01-01

    In the paper, three frequently used operation optimisation methods are examined with respect to their impact on operation management of the combined utility technologies for electric power and DH (district heating) of eastern Denmark. The investigation focusses on individual plant operation...... differences and differences between the solution found by each optimisation method. One of the investigated approaches utilises LP (linear programming) for optimisation, one uses LP with binary operation constraints, while the third approach uses NLP (non-linear programming). The LP model is used...... as a benchmark, as this type is frequently used, and has the lowest amount of constraints of the three. A comparison of the optimised operation of a number of units shows significant differences between the three methods. Compared to the reference, the use of binary integer variables, increases operation...

  6. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

  7. Functional linear models for association analysis of quantitative traits.

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY

  8. Approximate reduction of linear population models governed by stochastic differential equations: application to multiregional models.

    Sanz, Luis; Alonso, Juan Antonio

    2017-12-01

    In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to transform a complex system involving many coupled variables and in which there are processes with different time scales, by a simpler reduced model with a fewer number of 'global' variables, in such a way that the dynamics of the former can be approximated by that of the latter. In our model we contemplate a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with a lesser number of variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced system. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the rate growth of the populations in each patch is affected by additive noise.

  9. Practical likelihood analysis for spatial generalized linear mixed models

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap are, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...

  10. Stochastic modeling of mode interactions via linear parabolized stability equations

    Ran, Wei; Zare, Armin; Hack, M. J. Philipp; Jovanovic, Mihailo

    2017-11-01

    Low-complexity approximations of the Navier-Stokes equations have been widely used in the analysis of wall-bounded shear flows. In particular, the parabolized stability equations (PSE) and Floquet theory have been employed to capture the evolution of primary and secondary instabilities in spatially-evolving flows. We augment linear PSE with Floquet analysis to formally treat modal interactions and the evolution of secondary instabilities in the transitional boundary layer via a linear progression. To this end, we leverage Floquet theory by incorporating the primary instability into the base flow and accounting for different harmonics in the flow state. A stochastic forcing is introduced into the resulting linear dynamics to model the effect of nonlinear interactions on the evolution of modes. We examine the H-type transition scenario to demonstrate how our approach can be used to model nonlinear effects and capture the growth of the fundamental and subharmonic modes observed in direct numerical simulations and experiments.

  11. Cross-beam energy transfer: On the accuracy of linear stationary models in the linear kinetic regime

    Debayle, A.; Masson-Laborde, P.-E.; Ruyer, C.; Casanova, M.; Loiseau, P.

    2018-05-01

    We present an extensive numerical study by means of particle-in-cell simulations of the energy transfer that occurs during the crossing of two laser beams. In the linear regime, when ions are not trapped in the potential well induced by the laser interference pattern, a very good agreement is obtained with a simple linear stationary model, provided the laser intensity is sufficiently smooth. These comparisons include different plasma compositions to cover the strong and weak Landau damping regimes as well as the multispecies case. The correct evaluation of the linear Landau damping at the phase velocity imposed by the laser interference pattern is essential to estimate the energy transfer rate between the laser beams, once the stationary regime is reached. The transient evolution obtained in kinetic simulations is also analysed by means of a full analytical formula that includes 3D beam energy exchange coupled with the ion acoustic wave response. Specific attention is paid to the energy transfer when the laser presents small-scale inhomogeneities. In particular, the energy transfer is reduced when the laser inhomogeneities are comparable with the Landau damping characteristic length of the ion acoustic wave.

  12. Linear modeling of possible mechanisms for Parkinson tremor generation

    Lohnberg, P.

    1978-01-01

    The power of Parkinson tremor is expressed in terms of possibly changed frequency response functions between relevant variables in the neuromuscular system. The derivation starts out from a linear loopless equivalent model of mechanisms for general tremor generation. Hypothetical changes in this

  13. Current algebra of classical non-linear sigma models

    Forger, M.; Laartz, J.; Schaeper, U.

    1992-01-01

    The current algebra of classical non-linear sigma models on arbitrary Riemannian manifolds is analyzed. It is found that introducing, in addition to the Noether current j μ associated with the global symmetry of the theory, a composite scalar field j, the algebra closes under Poisson brackets. (orig.)

  14. Mathematical modelling and linear stability analysis of laser fusion cutting

    Hermanns, Torsten; Schulz, Wolfgang; Vossen, Georg; Thombansen, Ulrich

    2016-01-01

    A model for laser fusion cutting is presented and investigated by linear stability analysis in order to study the tendency for dynamic behavior and subsequent ripple formation. The result is a so-called stability function that describes the correlation between the setting values of the process and the process's amount of dynamic behavior.

  15. Non-linear sigma models probing the string structure

    Abdalla, E.

    1987-01-01

    The introduction of a term depending on the extrinsic curvature to the string action, and related non-linear sigma models defined on a symmetric space SO(D)/SO(2) x SO(d-2), is discussed. Couplings to fermions are also treated. (author) [pt

  16. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  17. Penalized Estimation in Large-Scale Generalized Linear Array Models

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  18. Expressions for linearized perturbations in ideal-fluid cosmological models

    Ratra, B.

    1988-01-01

    We present closed-form solutions of the relativistic linear perturbation equations (in synchronous gauge) that govern the evolution of inhomogeneities in homogeneous, spatially flat, ideal-fluid, cosmological models. These expressions, which are valid for irregularities on any scale, allow one to analytically interpolate between the known approximate solutions which are valid at early times and at late times

  19. S-AMP for non-linear observation models

    Cakmak, Burak; Winther, Ole; Fleury, Bernard H.

    2015-01-01

    Recently we presented the S-AMP approach, an extension of approximate message passing (AMP), to be able to handle general invariant matrix ensembles. In this contribution we extend S-AMP to non-linear observation models. We obtain generalized AMP (GAMP) as the special case when the measurement...

  20. Plane answers to complex questions the theory of linear models

    Christensen, Ronald

    1987-01-01

    This book was written to rigorously illustrate the practical application of the projective approach to linear models. To some, this may seem contradictory. I contend that it is possible to be both rigorous and illustrative and that it is possible to use the projective approach in practical applications. Therefore, unlike many other books on linear models, the use of projections and subspaces does not stop after the general theory. They are used wherever I could figure out how to do it. Solving normal equations and using calculus (outside of maximum likelihood theory) are anathema to me. This is because I do not believe that they contribute to the understanding of linear models. I have similar feelings about the use of side conditions. Such topics are mentioned when appropriate and thenceforward avoided like the plague. On the other side of the coin, I just as strenuously reject teaching linear models with a coordinate free approach. Although Joe Eaton assures me that the issues in complicated problems freq...
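
    As a small illustration of the projective viewpoint the book advocates, the sketch below computes fitted values as the orthogonal projection of y onto the column space of the design matrix; the data are synthetic and the use of numpy is an assumption.

```python
# OLS via projection: fitted values are P y, where P = X (X'X)^{-1} X' is the hat matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # design matrix with intercept
y = 2.0 + 3.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

P = X @ np.linalg.pinv(X)          # for full column rank X, pinv(X) = (X'X)^{-1} X'
y_hat = P @ y                      # projection of y onto the column space of X
residuals = y - y_hat              # residuals lie in the orthogonal complement

print(np.allclose(X.T @ residuals, 0, atol=1e-8))   # orthogonality check: True
```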

  1. A simulation model of a coordinated decentralized linear supply chain

    Ashayeri, Jalal; Cannella, S.; Lopez Campos, M.; Miranda, P.A.

    2015-01-01

    This paper presents a simulation-based study of a coordinated, decentralized linear supply chain (SC) system. In the proposed model, any supply tier considers its successors as part of its inventory system and generates replenishment orders on the basis of its partners’ operational information. We

  2. Mathematical modelling and linear stability analysis of laser fusion cutting

    Hermanns, Torsten; Schulz, Wolfgang [RWTH Aachen University, Chair for Nonlinear Dynamics, Steinbachstr. 15, 52047 Aachen (Germany); Vossen, Georg [Niederrhein University of Applied Sciences, Chair for Applied Mathematics and Numerical Simulations, Reinarzstr. 49, 47805 Krefeld (Germany); Thombansen, Ulrich [RWTH Aachen University, Chair for Laser Technology, Steinbachstr. 15, 52047 Aachen (Germany)

    2016-06-08

    A model for laser fusion cutting is presented and investigated by linear stability analysis in order to study the tendency for dynamic behavior and subsequent ripple formation. The result is a so-called stability function that describes the correlation between the setting values of the process and the process's amount of dynamic behavior.

  3. Performances Of Estimators Of Linear Models With Autocorrelated ...

    The performances of five estimators of linear models with Autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite is quite similar to the properties of the estimators when the sample size is infinite although ...

  4. Performances of estimators of linear auto-correlated error model ...

    The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) compares favourably with the Generalized least Squares (GLS) estimators in ...
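
    The two records above compare estimators of linear models with autocorrelated errors. A hedged sketch of such a comparison for an AR(1) disturbance, using statsmodels' GLSAR as a stand-in for the estimators studied (the data and the AR coefficient are hypothetical), follows.

```python
# Compare OLS with feasible GLS (GLSAR) when the errors follow an AR(1) process.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, rho = 200, 0.7
x = np.cumsum(rng.normal(size=n))            # slowly varying regressor
e = np.zeros(n)
for t in range(1, n):                        # AR(1) disturbances
    e[t] = rho * e[t - 1] + rng.normal()
y = 1.0 + 0.5 * x + e

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
gls = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10)   # AR(1) feasible GLS

print("OLS slope:", ols.params[1], "GLSAR slope:", gls.params[1])
```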

  5. A non-linear dissipative model of magnetism

    Durand, P.; Paidarová, Ivana

    2010-01-01

    Roč. 89, č. 6 (2010), s. 67004 ISSN 1286-4854 R&D Projects: GA AV ČR IAA100400501 Institutional research plan: CEZ:AV0Z40400503 Keywords : non-linear dissipative model of magnetism * thermodynamics * physical chemistry Subject RIV: CF - Physical ; Theoretical Chemistry http://epljournal.edpsciences.org/

  6. Modeling and verifying non-linearities in heterodyne displacement interferometry

    Cosijns, S.J.A.G.; Haitjema, H.; Schellekens, P.H.J.

    2002-01-01

    The non-linearities in a heterodyne laser interferometer system occurring from the phase measurement system of the interferometer and from non-ideal polarization effects of the optics are modeled into one analytical expression which includes the initial polarization state of the laser source, the

  7. Generalized linear longitudinal mixed models with linear covariance structure and multiplicative random effects

    Holst, René; Jørgensen, Bent

    2015-01-01

    The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish.

  8. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States); Cizek, J. [Nuclear Research Institute, Prague, (Czech Republic)

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  9. Identifiability Results for Several Classes of Linear Compartment Models.

    Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa

    2015-08-01

    Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.

  10. Finite element modeling of nanotube structures linear and non-linear models

    Awang, Mokhtar; Muhammad, Ibrahim Dauda

    2016-01-01

    This book presents a new approach to modeling carbon structures such as graphene and carbon nanotubes using finite element methods, and addresses the latest advances in numerical studies for these materials. Based on the available findings, the book develops an effective finite element approach for modeling the structure and the deformation of graphene-based materials. Further, the modeling process for single-walled and multi-walled carbon nanotubes is demonstrated in detail.

  11. Linear Dynamics Model for Steam Cooled Fast Power Reactors

    Vollmer, H

    1968-04-15

    A linear analytical dynamic model is developed for steam cooled fast power reactors. All main components of such a plant are investigated on a general though relatively simple basis. The model is distributed in those parts concerning the core but lumped as to the external plant components. Coolant is considered as compressible and treated by the actual steam law. Combined use of analogue and digital computer seems most attractive.

  12. Deterministic operations research models and methods in linear optimization

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components to problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research...
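
    As a minimal example of the kind of linear optimization problem the book addresses, the sketch below solves a small two-variable linear program with scipy's linprog; the problem data and the choice of solver are illustrative assumptions, not material from the book.

```python
# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
from scipy.optimize import linprog

res = linprog(c=[-3, -2],                 # linprog minimizes, so negate the objective
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                    # optimal (x, y) = (4, 0), maximized objective 12
```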

  13. One-loop dimensional reduction of the linear σ model

    Malbouisson, A.P.C.; Silva-Neto, M.B.; Svaiter, N.F.

    1997-05-01

    We perform the dimensional reduction of the linear σ model at one-loop level. The reduced effective theory, obtained from the integration over the nonzero Matsubara frequencies, is exhibited. Thermal mass and coupling constant renormalization constants are given, as well as the thermal renormalization group which controls the dependence of the counterterms on the temperature. We also recover, for the reduced theory, the vacuum instability of the model for large N. (author)

  14. Artificial Neural Network versus Linear Models Forecasting Doha Stock Market

    Yousif, Adil; Elfaki, Faiz

    2017-12-01

    The purpose of this study is to determine the instability of the Doha stock market and to develop forecasting models. Linear time series models are used and compared with a nonlinear Artificial Neural Network (ANN), namely the Multilayer Perceptron (MLP) technique. The aim is to establish the most useful model based on daily and monthly data collected from the Qatar exchange for the period from January 2007 to January 2015. Models are proposed for the general index of the Qatar stock exchange and for use in several other sectors. With the help of these models, the Doha stock market index and various sectors were predicted. The study was conducted using several time series techniques to analyze the data trend. After applying several models, such as the quadratic trend model, the double exponential smoothing model, and ARIMA, it was concluded that ARIMA (2,2) was the most suitable linear model for the daily general index. However, the ANN model was found to be more accurate than the time series models.
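
    A hedged sketch of fitting a linear ARIMA model to an index-like series with statsmodels is shown below; the synthetic series and the (2, 1, 2) order are assumptions for illustration only, not the order or data reported in the study.

```python
# Fit an ARIMA model to a hypothetical daily index series and forecast ahead.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
index = pd.Series(100 + np.cumsum(rng.normal(size=500)))   # placeholder index levels

model = ARIMA(index, order=(2, 1, 2)).fit()
forecast = model.forecast(steps=10)                         # 10-step-ahead forecast
print(model.aic, forecast.iloc[-1])
```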

  15. A linearized dispersion relation for orthorhombic pseudo-acoustic modeling

    Song, Xiaolei; Alkhalifah, Tariq Ali

    2012-01-01

    Wavefield extrapolation in acoustic orthorhombic anisotropic media suffers from wave-mode coupling and stability limitations in the parameter range. We introduce a linearized form of the dispersion relation for acoustic orthorhombic media to model acoustic wavefields. We apply the lowrank approximation approach to handle the corresponding space-wavenumber mixed-domain operator. Numerical experiments show that the proposed wavefield extrapolator is accurate and practically free of dispersions. Further, there is no coupling of qSv and qP waves, because we use the analytical dispersion relation. No constraints on Thomsen's parameters are required for stability. The linearized expression may provide useful application for parameter estimation in orthorhombic media.

  16. Development of ITER 3D neutronics model and nuclear analyses

    Zeng, Q.; Zheng, S.; Lu, L.; Li, Y.; Ding, A.; Hu, H.; Wu, Y.

    2007-01-01

    ITER nuclear analyses rely on calculations with three-dimensional (3D) Monte Carlo codes, e.g. the widely-used MCNP. However, continuous changes in the design of the components require that the 3D neutronics model for nuclear analyses be updated. Nevertheless, modeling a complex geometry with MCNP by hand is a very time-consuming task. An efficient way forward is to develop CAD-based interface codes for automatic conversion from CAD models to MCNP input files. Based on the latest CAD model and the available interface codes, two approaches to updating the 3D neutronics model have been discussed by the ITER IT (International Team): The first is to start with the existing MCNP model 'Brand' and update it through a combination of direct modification of the MCNP input file and generation of models for some components directly from the CAD data; the second is to start from the full CAD model, make the necessary simplifications, and generate the MCNP model with one of the interface codes. MCAM, an advanced CAD-based MCNP interface code developed by the FDS Team in China, has been successfully applied to update the ITER 3D neutronics model following the above two approaches. The Brand model has been updated by generating portions of the geometry from the newest CAD model with MCAM. MCAM has also successfully performed conversion to an MCNP neutronics model from a full ITER CAD model which was simplified and issued by ITER IT to benchmark the above interface codes. Based on the two updated 3D neutronics models, the related nuclear analyses are performed. This paper presents the status of ITER 3D modeling using MCAM and its nuclear analyses, as well as a brief introduction of an advanced version of MCAM. (authors)

  17. Non-linear sigma model on the fuzzy supersphere

    Kurkcuoglu, Seckin

    2004-01-01

    In this note we develop fuzzy versions of the supersymmetric non-linear sigma model on the supersphere S^(2,2). In hep-th/0212133 Bott projectors have been used to obtain the fuzzy CP^1 model. Our approach utilizes the use of supersymmetric extensions of these projectors. Here we obtain these (super)-projectors and quantize them in a fashion similar to the one given in hep-th/0212133. We discuss the interpretation of the resulting model as a finite dimensional matrix model. (author)

  18. Optimal difference-based estimation for partially linear models

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
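
    As a rough illustration of the idea, the sketch below applies the simplest first-order difference sequence to estimate the linear component of a partially linear model; this plain order-1 sequence is only one member of the family discussed above, and the data are synthetic.

```python
# Difference-based estimation of beta in y = x*beta + f(t) + eps.
import numpy as np

rng = np.random.default_rng(3)
n = 500
t = np.sort(rng.uniform(0, 1, n))
x = rng.normal(size=n)
f = np.sin(2 * np.pi * t)                       # smooth nonparametric component
y = 1.5 * x + f + rng.normal(scale=0.3, size=n)

# first differences (after sorting by t) remove the smooth f up to small terms
dy, dx = np.diff(y), np.diff(x)
beta_hat = np.sum(dx * dy) / np.sum(dx * dx)    # OLS slope without intercept
print(beta_hat)                                  # close to the true value 1.5
```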

  19. Modeling and analysis of linear hyperbolic systems of balance laws

    Bartecki, Krzysztof

    2016-01-01

    This monograph focuses on the mathematical modeling of distributed parameter systems in which mass/energy transport or wave propagation phenomena occur and which are described by partial differential equations of hyperbolic type. The case of linear (or linearized) 2 x 2 hyperbolic systems of balance laws is considered, i.e., systems described by two coupled linear partial differential equations with two variables representing physical quantities, depending on both time and one-dimensional spatial variable. Based on practical examples of a double-pipe heat exchanger and a transportation pipeline, two typical configurations of boundary input signals are analyzed: collocated, wherein both signals affect the system at the same spatial point, and anti-collocated, in which the input signals are applied to the two different end points of the system. The results of this book emerge from the practical experience of the author gained during his studies conducted in the experimental installation of a heat exchange cente...

  20. Optimal difference-based estimation for partially linear models

    Zhou, Yuejin

    2017-12-16

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.

  1. A penalized framework for distributed lag non-linear models.

    Gasparrini, Antonio; Scheipl, Fabian; Armstrong, Ben; Kenward, Michael G

    2017-09-01

    Distributed lag non-linear models (DLNMs) are a modelling tool for describing potentially non-linear and delayed dependencies. Here, we illustrate an extension of the DLNM framework through the use of penalized splines within generalized additive models (GAM). This extension offers built-in model selection procedures and the possibility of accommodating assumptions on the shape of the lag structure through specific penalties. In addition, this framework includes, as special cases, simpler models previously proposed for linear relationships (DLMs). Alternative versions of penalized DLNMs are compared with each other and with the standard unpenalized version in a simulation study. Results show that this penalized extension to the DLNM class provides greater flexibility and improved inferential properties. The framework exploits recent theoretical developments of GAMs and is implemented using efficient routines within freely available software. Real-data applications are illustrated through two reproducible examples in time series and survival analysis. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.

  2. General mirror pairs for gauged linear sigma models

    Aspinwall, Paul S.; Plesser, M. Ronen [Departments of Mathematics and Physics, Duke University,Box 90320, Durham, NC 27708-0320 (United States)

    2015-11-05

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  3. General mirror pairs for gauged linear sigma models

    Aspinwall, Paul S.; Plesser, M. Ronen

    2015-01-01

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  4. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    CHAMBERS,ROBERT S.; REEDY JR.,EARL DAVID; LO,CHI S.; ADOLF,DOUGLAS B.; GUESS,TOMMY R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanisms of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K{sub a} and the applicable K{sub a} calibration relationship has been determined for both fully bonded and

  5. Linear models for joint association and linkage QTL mapping

    Fernando Rohan L

    2009-09-01

    Background: Populational linkage disequilibrium and within-family linkage are commonly used for QTL mapping and marker assisted selection. The combination of both results in more robust and accurate locations of the QTL, but models proposed so far have been either single marker, complex in practice or well fit to a particular family structure. Results: We herein present linear model theory to come up with additive effects of the QTL alleles in any member of a general pedigree, conditional to observed markers and pedigree, accounting for possible linkage disequilibrium among QTLs and markers. The model is based on association analysis in the founders; further, the additive effect of the QTLs transmitted to the descendants is a weighted (by the probabilities of transmission) average of the substitution effects of founders' haplotypes. The model allows for non-complete linkage disequilibrium between QTLs and markers in the founders. Two submodels are presented: a simple and easy to implement Haley-Knott type regression for half-sib families, and a general mixed (variance component) model for general pedigrees. The model can use information from all markers. The performance of the regression method is compared by simulation with a more complex IBD method by Meuwissen and Goddard. Numerical examples are provided. Conclusion: The linear model theory provides a useful framework for QTL mapping with dense marker maps. Results show similar accuracies but a bias of the IBD method towards the center of the region. Computations for the linear regression model are extremely simple, in contrast with IBD methods. Extensions of the model to genomic selection and multi-QTL mapping are straightforward.

  6. Sensitivity and uncertainty analyses for performance assessment modeling

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
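
    A toy illustration of the two approaches contrasted above, assuming a scalar model g(a, b) = a·exp(b): first-order Taylor propagation of the input variances versus plain Monte Carlo simulation; the input means and standard deviations are hypothetical.

```python
import numpy as np

mu_a, sd_a = 2.0, 0.1
mu_b, sd_b = 0.5, 0.05

# deterministic: var(g) ~= (dg/da)^2 var(a) + (dg/db)^2 var(b), derivatives at the means
dg_da = np.exp(mu_b)
dg_db = mu_a * np.exp(mu_b)
var_taylor = dg_da**2 * sd_a**2 + dg_db**2 * sd_b**2

# statistical: sample the inputs and propagate them through the model
rng = np.random.default_rng(4)
a = rng.normal(mu_a, sd_a, 100_000)
b = rng.normal(mu_b, sd_b, 100_000)
var_mc = np.var(a * np.exp(b))

print(var_taylor, var_mc)   # the two estimates should be close for this mild nonlinearity
```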

  7. Linear models of income patterns in consumer demand for foods and evaluation of its elasticity

    Pavel Syrovátka

    2005-01-01

    The paper is focused on the use of linear constructions for developing Engel demand models in the field of food-consumer demand. In the theoretical part of the paper, linear approximations of these demand models are analysed on the basis of linear interpolation. In the same part of the text, the hyperbolic elasticity function is defined for the linear Engel model, and its behaviour and properties are investigated according to the values of the intercept and the slope parameter of the original linear Engel model. The theoretical findings were then tested using real data from the Czech Statistical Office. The developed linear Engel model was explicitly dynamised, because the available data formed a time series. With respect to the two-variable definition of the hyperbolic function in the theoretical part of the text, the dynamic model of Engel demand for food was transformed into a form with a parametric intercept: re*_t = A_t + 0.0946 · rm*_t, where the values of the intercept term are defined as A_t = 1773.0973 + 9.3064 · t − 0.3023 · t²; (t = 1, 2, ..., 32). The value of A_t in the parametric linear model of Engel consumer demand for food was always positive during the observed period (1995–2002). Thus, the hyperbolic elasticity function yields elasticity coefficients from the interval η_t ∈ ⟨+0; +1). Within the quantitative analysis of Engel demand for food in the Czech Republic during the given time period, it was found that the income elasticity of food expenditures of the average Czech household ranged between +0.4080 and +0.4511. Czech-household demand for food is thus income inelastic, with normal income reactions.
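
    The hyperbolic elasticity function of a linear Engel curve e = A + b·m can be sketched as below; the intercept, slope and income values are illustrative stand-ins, not the published Czech estimates.

```python
# Income elasticity of a linear Engel curve e = A + b*m is (de/dm)*(m/e) = b*m / (A + b*m).
def engel_elasticity(A: float, b: float, m: float) -> float:
    """Hyperbolic elasticity function implied by the linear Engel model e = A + b*m."""
    return b * m / (A + b * m)

# with a positive intercept A the elasticity stays in (0, 1): income-inelastic demand
for m in (10_000, 20_000, 40_000):
    print(m, round(engel_elasticity(A=1800.0, b=0.0946, m=m), 3))
```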

  8. A Graphical User Interface to Generalized Linear Models in MATLAB

    Peter Dunn

    1999-07-01

    Generalized linear models unite a wide variety of statistical models in a common theoretical framework. This paper discusses GLMLAB, software that enables such models to be fitted in the popular mathematical package MATLAB. It provides a graphical user interface to the powerful MATLAB computational engine to produce a program that is easy to use but with many features, including offsets, prior weights and user-defined distributions and link functions. MATLAB's graphical capacities are also utilized in providing a number of simple residual diagnostic plots.

  9. MAGDM linear-programming models with distinct uncertain preference structures.

    Xu, Zeshui S; Chen, Jian

    2008-10-01

    Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.

  10. Forecasting the EMU inflation rate: Linear econometric vs. non-linear computational models using genetic neural fuzzy systems

    Kooths, Stefan; Mitze, Timo Friedel; Ringhut, Eric

    2004-01-01

    This paper compares the predictive power of linear econometric and non-linear computational models for forecasting the inflation rate in the European Monetary Union (EMU). Various models of both types are developed using different monetary and real activity indicators. They are compared according...

  11. Modelling of Asphalt Concrete Stiffness in the Linear Viscoelastic Region

    Mazurek, Grzegorz; Iwański, Marek

    2017-10-01

    Stiffness modulus is a fundamental parameter used in the modelling of the viscoelastic behaviour of bituminous mixtures. On the basis of the master curve in the linear viscoelasticity range, the mechanical properties of asphalt concrete at different loading times and temperatures can be predicted. This paper discusses the construction of master curves using rheological mathematical models, i.e. the sigmoidal function model (MEPDG), the fractional model, and Bahia and co-workers' model, in comparison to the results from mechanistic rheological models, i.e. the generalized Huet-Sayegh model, the generalized Maxwell model and the Burgers model. For the purposes of this analysis, the reference asphalt concrete mix (denoted as AC16W), intended for the binder course layer and for traffic category KR3 (5×10⁵ ...), was tested in the controlled strain mode. The fixed strain level was set at 25 με to guarantee that the stiffness modulus of the asphalt concrete would be tested in the linear viscoelasticity range. The master curve was formed using the time-temperature superposition principle (TTSP). The stiffness modulus of the asphalt concrete was determined at temperatures of 10°C, 20°C and 40°C and at loading times (frequencies) of 0.1, 0.3, 1, 3, 10 and 20 Hz. The model parameters were fitted to the rheological models using original programs based on the nonlinear least squares method. All the rheological models under analysis were found to be capable of predicting changes in the stiffness modulus of the reference asphalt concrete to satisfactory accuracy. In the cases of the fractional model and the generalized Maxwell model, the accuracy depends on the number of elements in series. The best fit was registered for Bahia and co-workers' model, the generalized Maxwell model and the fractional model. As for predicting the phase angle parameter, the largest discrepancies between experimental and modelled results were obtained using the fractional model. Except for the Burgers model, the model matching quality was
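
    As an illustration of the master-curve fitting step, the sketch below fits an MEPDG-type sigmoidal function to synthetic stiffness data by nonlinear least squares; the functional form is the standard sigmoid log|E*| = δ + α / (1 + exp(β + γ·log t_r)), and all numbers are placeholders rather than the AC16W measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(log_tr, delta, alpha, beta, gamma):
    """MEPDG sigmoidal master-curve model for log stiffness vs. log reduced time."""
    return delta + alpha / (1.0 + np.exp(beta + gamma * log_tr))

log_tr = np.linspace(-5, 5, 40)                              # log reduced time
true = sigmoid(log_tr, 1.0, 3.5, -0.8, 0.6)
rng = np.random.default_rng(5)
log_E = true + rng.normal(scale=0.02, size=log_tr.size)      # noisy "measurements"

params, _ = curve_fit(sigmoid, log_tr, log_E, p0=[1.0, 3.0, 0.0, 0.5])
print(params)                                                 # recovered (delta, alpha, beta, gamma)
```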

  12. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    Irincheeva, Irina

    2012-08-03

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  13. Linear Model for Optimal Distributed Generation Size Predication

    Ahmed Al Ameri

    2017-01-01

    This article presents a linear model predicting the optimal size of Distributed Generation (DG) that minimizes power loss. The method is based fundamentally on the strong coupling between active power and voltage angle as well as between reactive power and voltage magnitude. The paper proposes a simplified method to calculate the total power losses in an electrical grid for different distributed generation sizes and locations. The method has been implemented and tested on several IEEE bus test systems. The results show that the proposed method is capable of predicting the approximate optimal size of DG when compared with precise calculations. Linearizing the complex model gives good results and reduces the processing time required. The acceptable accuracy with less time and memory required can help the grid operator to assess power systems with large-scale integration of distributed generation.

  14. A non-linear model of economic production processes

    Ponzi, A.; Yasutomi, A.; Kaneko, K.

    2003-06-01

    We present a new two phase model of economic production processes which is a non-linear dynamical version of von Neumann's neoclassical model of production, including a market price-setting phase as well as a production phase. The rate of an economic production process is observed, for the first time, to depend on the minimum of its input supplies. This creates highly non-linear supply and demand dynamics. By numerical simulation, production networks are shown to become unstable when the ratio of different products to total processes increases. This provides some insight into observed stability of competitive capitalist economies in comparison to monopolistic economies. Capitalist economies are also shown to have low unemployment.

  15. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.

    2012-01-01

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  16. NON-LINEAR FINITE ELEMENT MODELING OF DEEP DRAWING PROCESS

    Hasan YILDIZ

    2004-03-01

    The deep drawing process is one of the main procedures used in different branches of industry. Finding numerical solutions for the determination of the mechanical behaviour of this process saves time and money. For die surfaces with complex geometries, it is hard to determine the effects of the parameters of sheet metal forming. Some of these parameters are wrinkling, tearing, the flow of the thin sheet metal in the die and the thickness change. However, the most difficult one is the determination of material properties during plastic deformation. In this study, the effects of all these parameters are analyzed before producing the dies. The explicit non-linear finite element method is chosen for the analysis. The numerical results obtained for non-linear material and contact models are compared with experiments. Good agreement between the numerical and the experimental results is obtained. The results obtained for the models are given in detail.

  17. Dynamic generalized linear models for monitoring endemic diseases

    Lopes Antunes, Ana Carolina; Jensen, Dan; Hisham Beshara Halasa, Tariq

    2016-01-01

    The objective was to use a Dynamic Generalized Linear Model (DGLM) based on a binomial distribution with a linear trend, for monitoring the PRRS (Porcine Reproductive and Respiratory Syndrome) sero-prevalence in Danish swine herds. The DGLM was described and its performance for monitoring control and eradication programmes based on changes in PRRS sero-prevalence was explored. Results showed a declining trend in PRRS sero-prevalence between 2007 and 2014, suggesting that Danish herds are slowly eradicating PRRS. The simulation study demonstrated the flexibility of DGLMs in adapting to changes in trends in sero-prevalence. Based on this, it was possible to detect variations in the growth model component. This study is a proof-of-concept, demonstrating the use of DGLMs for monitoring endemic diseases. In addition, the principles stated might be useful in general research on monitoring and surveillance...

  18. Estimation and Inference for Very Large Linear Mixed Effects Models

    Gao, K.; Owen, A. B.

    2016-01-01

    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are N observations. Such problems arise in any setting where the underlying factors satisfy a many to many relationship (instead of a nested one) and in electronic commerce applications, the N can be quite large. Methods that do not account for the correlation structure can...

  19. Using Quartile-Quartile Lines as Linear Models

    Gordon, Sheldon P.

    2015-01-01

    This article introduces the notion of the quartile-quartile line as an alternative to the regression line and the median-median line to produce a linear model based on a set of data. It is based on using the first and third quartiles of a set of (x, y) data. Dynamic spreadsheets are used as exploratory tools to compare the different approaches and…
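
    One plausible reading of the construction (an assumption, since the article itself is only summarized here) passes a line through (Q1 of x, Q1 of y) and (Q3 of x, Q3 of y):

```python
import numpy as np

def quartile_quartile_line(x, y):
    """Slope and intercept of the line through the first- and third-quartile points."""
    x1, x3 = np.percentile(x, [25, 75])
    y1, y3 = np.percentile(y, [25, 75])
    slope = (y3 - y1) / (x3 - x1)
    intercept = y1 - slope * x1
    return slope, intercept

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=30)
print(quartile_quartile_line(x, y))     # roughly (2, 1) for this noisy linear data
```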

  20. NON-LINEAR MODELING OF THE RHIC INTERACTION REGIONS

    TOMAS, R.; FISCHER, W.; JAIN, A.; LUO, Y.; PILAT, F.

    2004-01-01

    For RHIC's collision lattices the dominant sources of transverse non-linearities are located in the interaction regions. The field quality is available for most of the magnets in the interaction regions from the magnetic measurements, or from extrapolations of these measurements. We discuss the implementation of these measurements in the MADX models of the Blue and the Yellow rings and their impact on beam stability

  1. Electromagnetic axial anomaly in a generalized linear sigma model

    Fariborz, Amir H.; Jora, Renata

    2017-06-01

    We construct the electromagnetic anomaly effective term for a generalized linear sigma model with two chiral nonets, one with a quark-antiquark structure, the other one with a four-quark content. We compute in the leading order of this framework the decays into two photons of six pseudoscalars: π0(137), π0(1300), η(547), η(958), η(1295) and η(1760). Our results agree well with the available experimental data.

  2. Analysing the temporal dynamics of model performance for hydrological models

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or

  3. Comparison of Linear Prediction Models for Audio Signals

    2009-03-01

    While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be only appropriate when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.
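
    A quick numerical check of the claim that a noiseless sum of p sinusoids is perfectly predicted by an all-pole model of order 2p (here p = 2, order 4); the signal and the least-squares route to the predictor coefficients are illustrative choices.

```python
import numpy as np

n = 200
t = np.arange(n)
s = np.sin(0.3 * t) + 0.5 * np.sin(1.1 * t + 0.2)     # two sinusoids, no noise

order = 4
# build the linear prediction problem s[k] ~= sum_j a_j * s[k - j]
X = np.column_stack([s[order - j:n - j] for j in range(1, order + 1)])
target = s[order:]
a, *_ = np.linalg.lstsq(X, target, rcond=None)

residual = target - X @ a
print(np.max(np.abs(residual)))    # numerically ~0: the order-4 predictor is exact
```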

  4. A quasi-linear gyrokinetic transport model for tokamak plasmas

    Casati, A.

    2009-10-01

    After a presentation of some basics around nuclear fusion, this research thesis introduces the framework of the tokamak strategy to deal with confinement, hence the main plasma instabilities which are responsible for turbulent transport of energy and matter in such a system. The author also briefly introduces the two principal plasma representations, the fluid and the kinetic ones. He explains why the gyro-kinetic approach has been preferred. A tokamak relevant case is presented in order to highlight the relevance of a correct accounting of the kinetic wave-particle resonance. He discusses the issue of the quasi-linear response. Firstly, the derivation of the model, called QuaLiKiz, and its underlying hypotheses to get the energy and the particle turbulent flux are presented. Secondly, the validity of the quasi-linear response is verified against the nonlinear gyro-kinetic simulations. The saturation model that is assumed in QuaLiKiz, is presented and discussed. Then, the author qualifies the global outcomes of QuaLiKiz. Both the quasi-linear energy and the particle flux are compared to the expectations from the nonlinear simulations, across a wide scan of tokamak relevant parameters. Therefore, the coupling of QuaLiKiz within the integrated transport solver CRONOS is presented: this procedure allows the time-dependent transport problem to be solved, hence the direct application of the model to the experiment. The first preliminary results regarding the experimental analysis are finally discussed

  5. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.

  6. Linear theory for filtering nonlinear multiscale systems with model error.

    Berry, Tyrus; Harlim, John

    2014-07-08

    In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online , as part of a filtering
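
    As a toy analogue of the optimal linear filter discussed above, the sketch below runs a scalar Kalman filter on a discrete-time linear Gaussian model; it is not the multiscale reduced filter of the paper, and all model constants are hypothetical.

```python
# Scalar Kalman filter for x_{k+1} = a x_k + w_k, y_k = x_k + v_k.
import numpy as np

rng = np.random.default_rng(7)
a, q, r, n = 0.9, 0.1, 0.5, 300          # dynamics, process var, obs var, length

x = np.zeros(n); y = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.normal(scale=np.sqrt(q))
    y[k] = x[k] + rng.normal(scale=np.sqrt(r))

m, P = 0.0, 1.0                          # filter mean and variance
est = np.zeros(n)
for k in range(1, n):
    m_pred, P_pred = a * m, a * a * P + q        # predict
    K = P_pred / (P_pred + r)                    # Kalman gain
    m = m_pred + K * (y[k] - m_pred)             # update
    P = (1 - K) * P_pred
    est[k] = m

print(np.mean((est - x) ** 2), np.mean((y - x) ** 2))   # filter MSE < raw observation MSE
```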

  7. Technical note: A linear model for predicting δ¹³C_protein.

    Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M

    2015-08-01

    Development of a model for the prediction of δ¹³C_protein from δ¹³C_collagen and Δ¹³C_ap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled diet data facilitated the development of a mathematical model for predicting δ¹³C_protein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ¹³C_co and Δ¹³C_ap-co). Regression analysis resulted in a two-term linear model (δ¹³C_protein (‰) = 0.78 × δ¹³C_co − 0.58 × Δ¹³C_ap-co − 4.7), possessing a high R-value of 0.93 (r² = 0.86, P …) for the analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
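
    The reported two-term regression can be wrapped as a small helper as below; the input values are made up and only illustrate the arithmetic.

```python
def predict_d13c_protein(d13c_collagen: float, delta13c_ap_co: float) -> float:
    """delta13C_protein (per mil) = 0.78 * delta13C_collagen - 0.58 * Delta13C_ap-co - 4.7"""
    return 0.78 * d13c_collagen - 0.58 * delta13c_ap_co - 4.7

print(predict_d13c_protein(-19.0, 4.5))   # -22.13 per mil for these made-up inputs
```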

  8. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with fine spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.

  9. The development and validation of a numerical integration method for non-linear viscoelastic modeling

    Ramo, Nicole L.; Puttlitz, Christian M.

    2018-01-01

    Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558

  10. Neutron stars in non-linear coupling models

    Taurines, Andre R.; Vasconcellos, Cesar A.Z.; Malheiro, Manuel; Chiapparini, Marcelo

    2001-01-01

    We present a class of relativistic models for nuclear matter and neutron stars which exhibits a parameterization, through mathematical constants, of the non-linear meson-baryon couplings. For appropriate choices of the parameters, it recovers current QHD models found in the literature: the Walecka, ZM and ZM3 models. We have found that the ZM3 model predicts a very small maximum neutron star mass, ∼ 0.72 M_sun. A strong similarity between the results of ZM-like models and those with exponential couplings is noted. Finally, we discuss the very intense scalar condensates found in the interior of neutron stars which may lead to negative effective masses. (author)

  11. Neutron stars in non-linear coupling models

    Taurines, Andre R.; Vasconcellos, Cesar A.Z. [Rio Grande do Sul Univ., Porto Alegre, RS (Brazil); Malheiro, Manuel [Universidade Federal Fluminense, Niteroi, RJ (Brazil); Chiapparini, Marcelo [Universidade do Estado, Rio de Janeiro, RJ (Brazil)

    2001-07-01

    We present a class of relativistic models for nuclear matter and neutron stars which exhibits a parameterization, through mathematical constants, of the non-linear meson-baryon couplings. For appropriate choices of the parameters, it recovers current QHD models found in the literature: Walecka, ZM and ZM3 models. We have found that the ZM3 model predicts a very small maximum neutron star mass, ∼ 0.72 M_sun. A strong similarity between the results of ZM-like models and those with exponential couplings is noted. Finally, we discuss the very intense scalar condensates found in the interior of neutron stars which may lead to negative effective masses. (author)

  12. Modelling of Rotational Capacity in Reinforced Linear Elements

    Hestbech, Lars; Hagsten, Lars German; Fisker, Jakob

    2011-01-01

    The Capacity Design Method forms the basis of several seismic design codes. This design philosophy allows plastic deformations in order to decrease seismic demands in structures. However, these plastic deformations must be localized in certain zones where ductility requirements can be documented in terms of the rotational capacity of the plastic hinges. The documentation of ductility can be a difficult task, as the modelling of rotational capacity in plastic hinges of frames is not fully developed. On the basis of the Theory of Plasticity, a model is developed to determine the rotational capacity of plastic hinges in linear reinforced concrete elements. The model takes several important parameters into account, and empirical values are avoided, which is considered an advantage compared to previous models. Furthermore, the model includes force variations in the reinforcement due to moment distributions and shear as well...

  13. Comparison of linear and non-linear models for the adsorption of fluoride onto geo-material: limonite.

    Sahin, Rubina; Tapadia, Kavita

    2015-01-01

    The three widely used isotherms Langmuir, Freundlich and Temkin were examined in an experiment using fluoride (F⁻) ion adsorption on a geo-material (limonite) at four different temperatures by linear and non-linear models. A comparison of linear and non-linear regression models is given for selecting the optimum isotherm for the experimental results. The coefficient of determination, r², was used to select the best theoretical isotherm. The four Langmuir linear equations (1, 2, 3, and 4) are discussed. Langmuir isotherm parameters obtained from the four Langmuir linear equations using the linear model differed, but they were the same when using the non-linear model. Langmuir-2 is one of the linear forms, and it had the highest coefficient of determination (r² = 0.99) compared to the other Langmuir linear equations (1, 3 and 4) in linear form, whereas, for the non-linear model, Langmuir-4 fitted best among all the isotherms because it had the highest coefficient of determination (r² = 0.99). The results showed that the non-linear model may be a better way to obtain the parameters. In the present work, the thermodynamic parameters show that the adsorption of fluoride onto limonite is spontaneous (ΔG < 0). Scanning electron microscope and X-ray diffraction images also confirm the adsorption of F⁻ ion onto limonite. The isotherm and kinetic study reveals that limonite can be used as an adsorbent for fluoride removal. In the future, this could be developed into a large-scale fluoride-removal technology using limonite, which is cost-effective, eco-friendly and easily available in the study area.
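
    A minimal sketch of the comparison discussed above, using synthetic adsorption data rather than the paper's measurements: the Langmuir parameters are recovered once by non-linear least squares on the original equation and once from the "Langmuir-2" linearisation (1/q versus 1/C).

```python
import numpy as np
from scipy.optimize import curve_fit

# Langmuir isotherm: q = qmax*K*C / (1 + K*C)
def langmuir(C, qmax, K):
    return qmax * K * C / (1.0 + K * C)

rng = np.random.default_rng(0)
C = np.linspace(0.5, 20.0, 12)                      # equilibrium concentration
q = langmuir(C, qmax=4.0, K=0.3) * (1 + 0.03 * rng.standard_normal(C.size))

# Non-linear least squares on the original equation
(qmax_nl, K_nl), _ = curve_fit(langmuir, C, q, p0=[1.0, 0.1])

# "Langmuir-2" linearisation: 1/q = (1/(K*qmax)) * (1/C) + 1/qmax
slope, intercept = np.polyfit(1.0 / C, 1.0 / q, 1)
qmax_lin, K_lin = 1.0 / intercept, intercept / slope

print(f"non-linear fit: qmax={qmax_nl:.2f}, K={K_nl:.2f}")
print(f"linearised fit: qmax={qmax_lin:.2f}, K={K_lin:.2f}")
```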

  14. Network Traffic Monitoring Using Poisson Dynamic Linear Models

    Merl, D. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-05-09

    In this article, we discuss an approach for network forensics using a class of nonstationary Poisson processes with embedded dynamic linear models. As a modeling strategy, the Poisson DLM (PoDLM) provides a very flexible framework for specifying structured effects that may influence the evolution of the underlying Poisson rate parameter, including diurnal and weekly usage patterns. We develop a novel particle learning algorithm for online smoothing and prediction for the PoDLM, and demonstrate the suitability of the approach to real-time deployment settings via a new application to computer network traffic monitoring.
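
    The particle learning algorithm itself is not reproduced here; the following sketch only illustrates the assumed model structure — Poisson counts whose log-rate combines a slowly drifting level with a diurnal harmonic — together with a crude stand-in anomaly score.

```python
import numpy as np

# Hourly packet counts y_t ~ Poisson(lambda_t), where the log-rate follows a
# slowly drifting level plus a diurnal harmonic (illustrative, synthetic data).
rng = np.random.default_rng(1)
hours = np.arange(24 * 7)                              # one week of hourly bins

level = np.cumsum(0.02 * rng.standard_normal(hours.size)) + np.log(50.0)
diurnal = 0.6 * np.sin(2 * np.pi * hours / 24.0)       # daily usage pattern
lam = np.exp(level + diurnal)                          # Poisson rate
y = rng.poisson(lam)

# A crude online anomaly score: deviation from the count observed at the same
# hour on the previous day (a stand-in for proper sequential filtering).
pred = np.r_[y[:24], y[:-24]]
score = (y - pred) / np.sqrt(np.maximum(pred, 1))
print("largest standardized deviations:", np.sort(score)[-3:])
```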

  15. On the chiral phase transition in the linear sigma model

    Tran Huu Phat; Nguyen Tuan Anh; Le Viet Hoa

    2003-01-01

    The Cornwall-Jackiw-Tomboulis (CJT) effective action for composite operators at finite temperature is used to investigate the chiral phase transition within the framework of the linear sigma model as the low-energy effective model of quantum chromodynamics (QCD). A new renormalization prescription for the CJT effective action in the Hartree-Fock (HF) approximation is proposed. A numerical study, which incorporates both thermal and quantum effects, shows that in this approximation the phase transition is of first order. However, taking the contribution of higher-loop diagrams into account leaves the order of the phase transition unchanged. (author)

  16. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    Liang, Faming; Song, Qifan; Yu, Kai

    2013-01-01

    criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening

  17. Use of flow models to analyse loss of coolant accidents

    Pinet, Bernard

    1978-01-01

    This article summarises current work on developing the use of flow models to analyse loss-of-coolant accidents in pressurized-water plants. This work is being done jointly, in the context of the LOCA Technical Committee, by the CEA, EDF and FRAMATOME. The construction of the flow model is very closely based on theoretical studies of the two-fluid model. The laws of transfer at the interface and at the wall are tested experimentally. The representativity of the model then has to be checked in experiments involving several elementary physical phenomena [fr]

  18. State space model extraction of thermohydraulic systems – Part I: A linear graph approach

    Uren, K.R.; Schoor, G. van

    2013-01-01

    Thermohydraulic simulation codes are increasingly making use of graphical design interfaces. The user can quickly and easily design a thermohydraulic system by placing symbols on the screen resembling system components. These components can then be connected to form a system representation. Such system models may then be used to obtain detailed simulations of the physical system. Usually such simulation models are too complex and not ideal for control system design. Therefore, a need exists for automated techniques to extract lumped-parameter models useful for control system design. The goal of this first paper, in a two-part series, is to propose a method that utilises a graphical representation of a thermohydraulic system, and a lumped-parameter modelling approach, to extract state space models. In this methodology each physical domain of the thermohydraulic system is represented by a linear graph. These linear graphs capture the interaction between all components within and across energy domains – hydraulic, thermal and mechanical. These linear graphs are analysed using a graph-theoretic approach to derive reduced-order state space models. These models capture the dominant dynamics of the thermohydraulic system and are ideal for control system design purposes. The proposed state space model extraction method is demonstrated by considering a U-tube system. A non-linear state space model is extracted representing both the hydraulic and thermal domain dynamics of the system. The simulated state space model is compared with a Flownex® model of the U-tube. Flownex® is a validated systems thermal-fluid simulation software package. - Highlights: • A state space model extraction methodology based on graph-theoretic concepts. • An energy-based approach to consider multi-domain systems in a common framework. • Allows extraction of transparent (white-box) state space models automatically. • Reduced-order models containing only independent state

  19. Application of linearized model to the stability analysis of the pressurized water reactor

    Li Haipeng; Huang Xiaojin; Zhang Liangju

    2008-01-01

    A Linear Time-Invariant model of the Pressurized Water Reactor is formulated through linearization of the nonlinear model. Simulation results show that the linearized model agrees well with the nonlinear model under small perturbations. Based upon Lyapunov's First Method, the linearized model is applied to the stability analysis of the Pressurized Water Reactor. The calculation results show that the linearization-based approach to stability analysis is both convenient and feasible. (authors)
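
    As a generic illustration of the procedure (a made-up two-state system, not the reactor model), the sketch below linearises a nonlinear model about its equilibrium and applies Lyapunov's first method by checking the eigenvalues of the Jacobian.

```python
import numpy as np
from scipy.optimize import fsolve

# A made-up two-state system: linearise dx/dt = f(x, u) about an equilibrium
# and apply Lyapunov's first method by inspecting the eigenvalues of A = df/dx.
def f(x, u):
    x1, x2 = x
    return np.array([-2.0 * x1 + 0.5 * x1 * x2 + u,
                     -0.5 * x2 - x1 * x1 + 1.0])

def jacobian(f, x0, u0, h=1e-6):
    """Central-difference Jacobian of f with respect to the state."""
    n = len(x0)
    A = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = h
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2.0 * h)
    return A

x_eq = fsolve(lambda x: f(x, 0.0), x0=[0.0, 1.0])   # equilibrium for u = 0
A = jacobian(f, x_eq, 0.0)
eigs = np.linalg.eigvals(A)
print("equilibrium:", x_eq)
print("eigenvalues:", eigs, "-> stable" if np.all(eigs.real < 0) else "-> not stable")
```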

  20. The Overgeneralization of Linear Models among University Students' Mathematical Productions: A Long-Term Study

    Esteley, Cristina B.; Villarreal, Monica E.; Alagia, Humberto R.

    2010-01-01

    Over the past several years, we have been exploring and researching a phenomenon that occurs among undergraduate students that we called extension of linear models to non-linear contexts or overgeneralization of linear models. This phenomenon appears when some students use linear representations in situations that are non-linear. In a first phase,…

  1. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems

    Mardare, Radu Iulian; Ihekwaba, Adoha

    2007-01-01

    A. Ihekwaba, R. Mardare. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems. Case study: NFkB system. In Proc. of International Conference of Computational Methods in Sciences and Engineering (ICCMSE), American Institute of Physics, AIP Proceedings, N 2...

  2. A Linear Viscoelastic Model Calibration of Sylgard 184.

    Long, Kevin Nicholas; Brown, Judith Alice

    2017-04-01

    We calibrate a linear thermoviscoelastic model for solid Sylgard 184 (90-10 formulation), a lightly cross-linked, highly flexible isotropic elastomer for use both in Sierra / Solid Mechanics via the Universal Polymer Model as well as in Sierra / Structural Dynamics (Salinas) for use as an isotropic viscoelastic material. Material inputs for the calibration in both codes are provided. The frequency domain master curve of oscillatory shear was obtained from a report from Los Alamos National Laboratory (LANL). However, because the form of that data is different from the constitutive models in Sierra, we also present the mapping of the LANL data onto Sandia’s constitutive models. Finally, blind predictions of cyclic tension and compression out to moderate strains of 40 and 20% respectively are compared with Sandia’s legacy cure schedule material. Although the strain rate of the data is unknown, the linear thermoviscoelastic model accurately predicts the experiments out to moderate strains for the slower strain rates, which is consistent with the expectation that quasistatic test procedures were likely followed. This good agreement comes despite the different cure schedules between the Sandia and LANL data.

  3. Predicting Madura cattle growth curve using non-linear model

    Widyas, N.; Prastowo, S.; Widi, T. S. M.; Baliarti, E.

    2018-03-01

    Madura cattle are an Indonesian native breed. It is a composite breed that has undergone hundreds of years of selection and domestication to reach its present remarkable uniformity. Crossbreeding has reached the isle of Madura, and the Madrasin, a cross between Madura cows and Limousine semen, has emerged. This paper aimed to compare the growth curves of the Madrasin and one type of pure Madura cow, the common Madura cattle (Madura), using non-linear models. Madura cattle are kept traditionally, thus reliable records are hardly available. Data were collected from smallholder farmers in Madura. Cows from different age classes (5 years) were observed, and body measurements (chest girth, body length and wither height) were taken. In total, 63 Madura and 120 Madrasin records were obtained. A linear model was built with cattle sub-populations and age as explanatory variables. Body weights were estimated based on the chest girth. Growth curves were built using logistic regression. Results showed that, within the same age, Madrasin have a significantly larger body compared to Madura (p < 0.05). Logistic models fit better for the Madura and Madrasin cattle data; the estimated MSEs for these models were 39.09 and 759.28, with prediction accuracies of 99 and 92% for Madura and Madrasin, respectively. Prediction of the growth curve using the logistic regression model performed well for both types of Madura cattle. However, attempts to administer accurate data on Madura cattle are necessary to better characterize and study these cattle.
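
    A minimal sketch of the growth-curve step with synthetic body-weight data (the field records are not reproduced here): a logistic curve is fitted by non-linear least squares and its mean squared error reported.

```python
import numpy as np
from scipy.optimize import curve_fit

# Logistic growth curve W(age) = A / (1 + b*exp(-k*age)), fitted per
# sub-population; the data below are synthetic and purely illustrative.
def logistic(age, A, b, k):
    return A / (1.0 + b * np.exp(-k * age))

rng = np.random.default_rng(2)
age = np.repeat(np.arange(1, 6), 12).astype(float)             # age classes, years
weight = logistic(age, A=320.0, b=4.0, k=0.9) + 15.0 * rng.standard_normal(age.size)

(A, b, k), _ = curve_fit(logistic, age, weight, p0=[300.0, 3.0, 1.0])
mse = np.mean((weight - logistic(age, A, b, k)) ** 2)
print(f"A={A:.1f} kg, b={b:.2f}, k={k:.2f}, MSE={mse:.1f}")
```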

  4. A non-linear model of information seeking behaviour

    Allen E. Foster

    2005-01-01

    The results of a qualitative, naturalistic study of information seeking behaviour are reported in this paper. The study applied the methods recommended by Lincoln and Guba for maximising credibility, transferability, dependability, and confirmability in data collection and analysis. Sampling combined purposive and snowball methods, and led to a final sample of 45 inter-disciplinary researchers from the University of Sheffield. In-depth semi-structured interviews were used to elicit detailed examples of information seeking. Coding of interview transcripts took place in multiple iterations over time and used Atlas-ti software to support the process. The results of the study are represented in a non-linear Model of Information Seeking Behaviour. The model describes three core processes (Opening, Orientation, and Consolidation) and three levels of contextual interaction (Internal Context, External Context, and Cognitive Approach), each composed of several individual activities and attributes. The interactivity and shifts described by the model show information seeking to be non-linear, dynamic, holistic, and flowing. The paper concludes by describing the whole model of behaviours as analogous to an artist's palette, in which activities remain available throughout information seeking. A summary of key implications of the model and directions for further research are included.

  5. Effect Displays in R for Generalised Linear Models

    John Fox

    2003-07-01

    This paper describes the implementation in R of a method for tabular or graphical display of terms in a complex generalised linear model. By complex, I mean a model that contains terms related by marginality or hierarchy, such as polynomial terms, or main effects and interactions. I call these tables or graphs effect displays. Effect displays are constructed by identifying high-order terms in a generalised linear model. Fitted values under the model are computed for each such term. The lower-order "relatives" of a high-order term (e.g., main effects marginal to an interaction) are absorbed into the term, allowing the predictors appearing in the high-order term to range over their values. The values of other predictors are fixed at typical values: for example, a covariate could be fixed at its mean or median, a factor at its proportional distribution in the data, or to equal proportions in its several levels. Variations of effect displays are also described, including representation of terms higher-order to any appearing in the model.
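
    The sketch below illustrates the underlying idea in Python rather than in R: fit a model with an interaction, then compute fitted values while the predictors of the high-order term range over their values and the remaining covariate is fixed at a typical value (its mean).

```python
import numpy as np

# Fit a linear model with an interaction, then build an "effect display" for
# the high-order term x1:x2, holding the other covariate z at its mean.
rng = np.random.default_rng(3)
n = 200
x1, x2, z = rng.normal(size=(3, n))
y = 1.0 + 2.0 * x1 - 1.0 * x2 + 0.5 * x1 * x2 + 0.3 * z \
    + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x1, x2, x1 * x2, z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Effect display grid: x1 and x2 range over their observed values, z is fixed.
g1, g2 = np.meshgrid(np.linspace(x1.min(), x1.max(), 5),
                     np.linspace(x2.min(), x2.max(), 5))
G = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                     (g1 * g2).ravel(), np.full(g1.size, z.mean())])
effect = (G @ beta).reshape(g1.shape)
print(np.round(effect, 2))        # tabular effect display for the x1:x2 term
```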

  6. Global numerical modeling of magnetized plasma in a linear device

    Magnussen, Michael Løiten

    Understanding the turbulent transport in the plasma edge of fusion devices is of utmost importance in order to make precise predictions for future fusion devices. The plasma turbulence observed in linear devices shares many important features with the turbulence observed in the edge of fusion devices, and is easier to diagnose due to lower temperatures and better access to the plasma. In order to gain greater insight into this complex turbulent behaviour, numerical simulations of plasma in a linear device are performed in this thesis. Here, a three-dimensional drift-fluid model is derived, with simulations performed at different ionization levels, using a simple model for plasma interaction with neutrals. It is found that the steady state and the saturated state of the system bifurcate when the neutral interaction dominates the electron-ion collisions.

  7. Predicting birth weight with conditionally linear transformation models.

    Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten

    2016-12-01

    Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.

  8. Wavefront Sensing for WFIRST with a Linear Optical Model

    Jurling, Alden S.; Content, David A.

    2012-01-01

    In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.

  9. Multiple Linear Regression Model for Estimating the Price of a ...

    Ghana Mining Journal ... In the modeling, the Ordinary Least Squares (OLS) normality assumption, which could introduce errors in the statistical analyses, was dealt with by log transformation of the data, ensuring the data is normally ... The resultant MLRM is: Ŷ_i = x_i'(X'X)^-1 X'Y, where X is the sample data matrix.
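
    A short illustration of the quoted estimator on made-up data (the journal's data set is not reproduced): the fitted values follow from β̂ = (X'X)⁻¹X'Y, computed here by solving the normal equations.

```python
import numpy as np

# Ordinary least squares via the normal equations on illustrative data.
rng = np.random.default_rng(4)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])   # design matrix (with intercept)
beta_true = np.array([2.0, 1.5, -0.7, 0.3])
Y = X @ beta_true + rng.normal(scale=0.2, size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)   # (X'X)^{-1} X'Y without forming the inverse
Y_hat = X @ beta_hat                            # fitted values Y_hat_i = x_i' beta_hat
print("beta_hat:", np.round(beta_hat, 3))
```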

  10. A linearized dispersion relation for orthorhombic pseudo-acoustic modeling

    Song, Xiaolei

    2012-11-04

    Wavefield extrapolation in acoustic orthorhombic anisotropic media suffers from wave-mode coupling and stability limitations in the parameter range. We introduce a linearized form of the dispersion relation for acoustic orthorhombic media to model acoustic wavefields. We apply the low-rank approximation approach to handle the corresponding space-wavenumber mixed-domain operator. Numerical experiments show that the proposed wavefield extrapolator is accurate and practically free of dispersions. Further, there is no coupling of qSv and qP waves, because we use the analytical dispersion relation. No constraints on Thomsen's parameters are required for stability. The linearized expression may provide useful application for parameter estimation in orthorhombic media.

  11. A componential model of human interaction with graphs: 1. Linear regression modeling

    Gillan, Douglas J.; Lewis, Robert

    1994-01-01

    Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes-searching for indicators, encoding the value of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding; and (2) that the type of graph and user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications in the MA-P model, alternative models, and design implications from the MA-P model.

  12. Linearized vector radiative transfer model MCC++ for a spherical atmosphere

    Postylyakov, O.V.

    2004-01-01

    Application of radiative transfer models has shown that optical remote sensing requires extra characteristics of radiance field in addition to the radiance intensity itself. Simulation of spectral measurements, analysis of retrieval errors and development of retrieval algorithms are in need of derivatives of radiance with respect to atmospheric constituents under investigation. The presented vector spherical radiative transfer model MCC++ was linearized, which allows the calculation of derivatives of all elements of the Stokes vector with respect to the volume absorption coefficient simultaneously with radiance calculation. The model MCC++ employs Monte Carlo algorithm for radiative transfer simulation and takes into account aerosol and molecular scattering, gas and aerosol absorption, and Lambertian surface albedo. The model treats a spherically symmetrical atmosphere. Relation of the estimated derivatives with other forms of radiance derivatives: the weighting functions used in gas retrieval and the air mass factors used in the DOAS retrieval algorithms, is obtained. Validation of the model against other radiative models is overviewed. The computing time of the intensity for the MCC++ model is about that for radiative models treating sphericity of the atmosphere approximately and is significantly shorter than that for the full spherical models used in the comparisons. The simultaneous calculation of all derivatives (i.e. with respect to absorption in all model atmosphere layers) and the intensity is only 1.2-2 times longer than the calculation of the intensity only

  13. Effective connectivity between superior temporal gyrus and Heschl's gyrus during white noise listening: linear versus non-linear models.

    Hamid, Ka; Yusoff, An; Rahman, Mza; Mohamad, M; Hamid, Aia

    2012-04-01

    This fMRI study concerns modelling the effective connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in human primary auditory cortices. MATERIALS & METHODS: Ten healthy male participants were required to listen to white noise stimuli during functional magnetic resonance imaging (fMRI) scans. Statistical parametric mapping (SPM) was used to generate individual and group brain activation maps. For input region determination, two intrinsic connectivity models comprising bilateral HG and STG were constructed using dynamic causal modelling (DCM). The models were estimated and inferred using DCM, while Bayesian Model Selection (BMS) for group studies was used for model comparison and selection. Based on the winning model, six linear and six non-linear causal models were derived and were again estimated, inferred, and compared to obtain a model that best represents the effective connectivity between HG and the STG, balancing accuracy and complexity. Group results indicated significant asymmetrical activation. Model comparison results showed strong evidence of STG as the input centre. The winning model is preferred by 6 out of 10 participants. The results were supported by BMS results for group studies with the expected posterior probability, r = 0.7830, and exceedance probability, ϕ = 0.9823. One-sample t-tests performed on connection values obtained from the winning model indicated that the valid connections for the winning model are the unidirectional parallel connections from STG to bilateral HG. Model comparison between linear and non-linear models using BMS prefers the non-linear connection (r = 0.9160, ϕ = 1.000), in which the connectivity between STG and the ipsi- and contralateral HG is gated by the activity in STG itself. We are able to demonstrate that the effective connectivity between HG and STG while listening to white noise for the respective participants can be explained by a non-linear dynamic causal model.

  14. Exactly soluble two-state quantum models with linear couplings

    Torosov, B T; Vitanov, N V

    2008-01-01

    A class of exact analytic solutions of the time-dependent Schroedinger equation is presented for a two-state quantum system coherently driven by a nonresonant external field. The coupling is a linear function of time with a finite duration and the detuning is constant. Four special models are considered in detail, namely the shark, double-shark, tent and zigzag models. The exact solution is derived by rotation of the Landau-Zener propagator at an angle of π/4 and is expressed in terms of Weber's parabolic cylinder function. Approximations for the transition probabilities are derived for all four models by using the asymptotics of the Weber function; these approximations demonstrate various effects of physical interest for each model

  15. Linear models for multivariate, time series, and spatial data

    Christensen, Ronald

    1991-01-01

    This is a companion volume to Plane Answers to Complex Questions: The Theory of Linear Models. It consists of six additional chapters written in the same spirit as the last six chapters of the earlier book. Brief introductions are given to topics related to linear model theory. No attempt is made to give a comprehensive treatment of the topics. Such an effort would be futile. Each chapter is on a topic so broad that an in-depth discussion would require a book-length treatment. People need to impose structure on the world in order to understand it. There is a limit to the number of unrelated facts that anyone can remember. If ideas can be put within a broad, sophisticatedly simple structure, not only are they easier to remember but often new insights become available. In fact, sophisticatedly simple models of the world may be the only ones that work. I have often heard Arnold Zellner say that, to the best of his knowledge, this is true in econometrics. The process of modeling is fundamental to understand...

  16. SVM models for analysing the headstreams of mine water inrush

    Yan Zhi-gang; Du Pei-jun; Guo Da-zhi [China University of Science and Technology, Xuzhou (China). School of Environmental Science and Spatial Informatics

    2007-08-15

    The support vector machine (SVM) model was introduced to analyse the headstream of water inrush in a coal mine. The SVM model, based on a hydrogeochemical method, was constructed for recognising two kinds of headstreams, and the H-SVMs model was constructed for recognising multiple headstreams. The SVM method was applied to analyse the conditions of two mixed headstreams, and the value of the SVM decision function was investigated as a means of denoting the hydrogeochemical abnormality. The experimental results show that the SVM is based on a strict mathematical theory, has a simple structure and a good overall performance. Moreover, the parameter W in the decision function can describe the weights of the discrimination indices of the headstream of water inrush. The value of the decision function can denote hydrogeochemical abnormality, which is significant in the prevention of water inrush in a coal mine. 9 refs., 1 fig., 7 tabs.
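
    A minimal sketch with invented hydrogeochemical indices (not the mine data): a linear-kernel SVM separates two headstreams, exposes the weight vector W, and returns a decision-function value for a mixed sample.

```python
import numpy as np
from sklearn.svm import SVC

# Two-class SVM on six illustrative discrimination indices (e.g. ion
# concentrations) for two water sources; the decision-function value acts
# as the hydrogeochemical "abnormality" measure for mixed samples.
rng = np.random.default_rng(5)
src_a = rng.normal(loc=[1.0, 0.4, 2.0, 0.3, 0.8, 1.5], scale=0.1, size=(30, 6))
src_b = rng.normal(loc=[0.2, 1.2, 0.5, 1.1, 0.3, 0.6], scale=0.1, size=(30, 6))
X = np.vstack([src_a, src_b])
y = np.r_[np.zeros(30), np.ones(30)]

clf = SVC(kernel="linear").fit(X, y)
mixed_sample = 0.5 * src_a.mean(axis=0) + 0.5 * src_b.mean(axis=0)
print("weights W:", np.round(clf.coef_[0], 2))
print("decision value of mixed sample:", clf.decision_function([mixed_sample]))
```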

  17. Tip-tilt disturbance model identification based on non-linear least squares fitting for Linear Quadratic Gaussian control

    Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing

    2018-05-01

    We propose a method to identify a tip-tilt disturbance model for Linear Quadratic Gaussian control. This identification method, based on the Levenberg-Marquardt algorithm, requires little prior information and no auxiliary system, and it is convenient for identifying the tip-tilt disturbance model on-line for real-time control. The method makes it easy for Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of Linear Quadratic Gaussian control combined with this tip-tilt disturbance model identification method is verified by experimental data, replayed in simulation.
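
    A rough sketch of the identification step under simplifying assumptions (a single damped-sinusoid vibration component, which is not necessarily the authors' parameterisation), fitted with the Levenberg-Marquardt algorithm via scipy.

```python
import numpy as np
from scipy.optimize import least_squares

# Identify one vibration component of a tip-tilt disturbance, modelled as a
# damped sinusoid, from a short record (synthetic data, coarse initial guess).
def model(p, t):
    amp, freq, decay, phase = p
    return amp * np.exp(-decay * t) * np.sin(2.0 * np.pi * freq * t + phase)

rng = np.random.default_rng(6)
t = np.arange(0.0, 0.3, 1e-3)                       # 1 kHz sampling, 0.3 s record
y = model([0.8, 35.0, 1.5, 0.4], t) + 0.02 * rng.standard_normal(t.size)

fit = least_squares(lambda p: model(p, t) - y,
                    x0=[1.0, 34.0, 1.0, 0.0], method="lm")
print("identified [amplitude, frequency (Hz), decay, phase]:", np.round(fit.x, 2))
```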

  18. Vocational Teachers and Professionalism - A Model Based on Empirical Analyses

    Duch, Henriette Skjærbæk; Andreasen, Karen E

    Several theorists have developed models to illustrate the processes of adult learning and professional development (e.g. Illeris, Argyris, Engeström, Wahlgren & Aarkorg, Kolb and Wenger). Models can sometimes be criticized ... emphasis on the adult employee, the organization, its surroundings as well as other contextual factors. Our concern is adult vocational teachers attending a pedagogical course and teaching at vocational colleges. The aim of the paper is to discuss different models and develop a model concerning teachers ... at vocational colleges based on empirical data in a specific context, a vocational teacher-training course in Denmark. By offering a basis and concepts for the analysis of practice, such a model is meant to support the development of vocational teachers' professionalism in courses and in organizational contexts ...

  19. Modelling non-linear effects of dark energy

    Bose, Benjamin; Baldi, Marco; Pourtsidou, Alkistis

    2018-04-01

    We investigate the capabilities of perturbation theory in capturing non-linear effects of dark energy. We test constant and evolving w models, as well as models involving momentum exchange between dark energy and dark matter. Specifically, we compare perturbative predictions at 1-loop level against N-body results for four non-standard equations of state as well as varying degrees of momentum exchange between dark energy and dark matter. The interaction is modelled phenomenologically using a time dependent drag term in the Euler equation. We make comparisons at the level of the matter power spectrum and the redshift space monopole and quadrupole. The multipoles are modelled using the Taruya, Nishimichi and Saito (TNS) redshift space spectrum. We find perturbation theory does very well in capturing non-linear effects coming from dark sector interaction. We isolate and quantify the 1-loop contribution coming from the interaction and from the non-standard equation of state. We find the interaction parameter ξ amplifies scale dependent signatures in the range of scales considered. Non-standard equations of state also give scale dependent signatures within this same regime. In redshift space the match with N-body is improved at smaller scales by the addition of the TNS free parameter σv. To quantify the importance of modelling the interaction, we create mock data sets for varying values of ξ using perturbation theory. This data is given errors typical of Stage IV surveys. We then perform a likelihood analysis using the first two multipoles on these sets and a ξ=0 modelling, ignoring the interaction. We find the fiducial growth parameter f is generally recovered even for very large values of ξ both at z=0.5 and z=1. The ξ=0 modelling is most biased in its estimation of f for the phantom w=‑1.1 case.

  20. Performance of neutron kinetics models for ADS transient analyses

    Rineiski, A.; Maschek, W.; Rimpault, G.

    2002-01-01

    Within the framework of the SIMMER code development, neutron kinetics models for simulating transients and hypothetical accidents in advanced reactor systems, in particular in Accelerator Driven Systems (ADSs), have been developed at FZK/IKET in cooperation with CE Cadarache. SIMMER is a fluid-dynamics/thermal-hydraulics code, coupled with a structure model and a space-, time- and energy-dependent neutronics module for analyzing transients and accidents. The advanced kinetics models have also been implemented into KIN3D, a module of the VARIANT/TGV code (stand-alone neutron kinetics) for broadening application and for testing and benchmarking. In the paper, a short review of the SIMMER and KIN3D neutron kinetics models is given. Some typical transients related to ADS perturbations are analyzed. The general models of SIMMER and KIN3D are compared with more simple techniques developed in the context of this work to get a better understanding of the specifics of transients in subcritical systems and to estimate the performance of different kinetics options. These comparisons may also help in elaborating new kinetics models and extending existing computation tools for ADS transient analyses. The traditional point-kinetics model may give rather inaccurate transient reaction rate distributions in an ADS even if the material configuration does not change significantly. This inaccuracy is not related to the problem of choosing a 'right' weighting function: the point-kinetics model with any weighting function cannot take into account pronounced flux shape variations related to possible significant changes in the criticality level or to fast beam trips. To improve the accuracy of the point-kinetics option for slow transients, we have introduced a correction factor technique. The related analyses give a better understanding of 'long-timescale' kinetics phenomena in the subcritical domain and help to evaluate the performance of the quasi-static scheme in a particular case. One

  1. Dynamics of heart rate variability analysed through nonlinear and linear dynamics is already impaired in young type 1 diabetic subjects.

    Souza, Naiara M; Giacon, Thais R; Pacagnelli, Francis L; Barbosa, Marianne P C R; Valenti, Vitor E; Vanderlei, Luiz C M

    2016-10-01

    Autonomic diabetic neuropathy is one of the most common complications of type 1 diabetes mellitus, and studies using heart rate variability to investigate these individuals have shown inconclusive results regarding autonomic nervous system activation. Aims: To investigate the dynamics of heart rate in young subjects with type 1 diabetes mellitus through nonlinear and linear methods of heart rate variability. We evaluated 20 subjects with type 1 diabetes mellitus and 23 healthy control subjects. We obtained the following nonlinear indices from the recurrence plot: recurrence rate (REC), determinism (DET), and Shannon entropy (ES), and we analysed indices in the frequency (LF and HF in ms² and normalised units - nu - and the LF/HF ratio) and time domains (SDNN and RMSSD), through analysis of 1000 R-R intervals, captured by a heart rate monitor. There were reduced values (p<0.05) for individuals with type 1 diabetes mellitus compared with healthy subjects in the following indices: DET, REC, ES, RMSSD, SDNN, LF (ms²), and HF (ms²). In relation to the recurrence plot, subjects with type 1 diabetes mellitus demonstrated lower recurrence and greater variation in their plot, inter-group and intra-group, respectively. Young subjects with type 1 diabetes mellitus have autonomic nervous system behaviour that tends to randomness compared with healthy young subjects. Moreover, this behaviour is related to reduced sympathetic and parasympathetic activity of the autonomic nervous system.
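
    For the time-domain indices mentioned above, the computation is straightforward; the sketch below derives SDNN and RMSSD from a synthetic series of 1000 R-R intervals (not patient data).

```python
import numpy as np

# Time-domain heart rate variability indices from 1000 R-R intervals (ms).
rng = np.random.default_rng(7)
rr = 800.0 + np.cumsum(rng.normal(scale=2.0, size=1000))       # slow drift of the R-R series
rr += 25.0 * np.sin(2 * np.pi * np.arange(1000) / 4.0)         # respiratory-like modulation

sdnn = np.std(rr, ddof=1)                        # overall variability (ms)
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))       # beat-to-beat (vagally mediated) variability (ms)
print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
```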

  2. Spatial generalised linear mixed models based on distances.

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and a useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture among them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with maximum normalised-difference vegetation index and the standard deviation of normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  3. Linear system identification via backward-time observer models

    Juang, Jer-Nan; Phan, Minh

    1993-01-01

    This paper presents an algorithm to identify a state-space model of a linear system using a backward-time approach. The procedure consists of three basic steps. First, the Markov parameters of a backward-time observer are computed from experimental input-output data. Second, the backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) from which a backward-time state-space model is realized using the Eigensystem Realization Algorithm. Third, the obtained backward-time state space model is converted to the usual forward-time representation. Stochastic properties of this approach will be discussed. Experimental results are given to illustrate when and to what extent this concept works.

  4. Linear mixing model applied to AVHRR LAC data

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 micron channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
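
    A minimal sketch of a constrained least squares unmixing step with illustrative endmember spectra (not the AVHRR calibration): non-negativity is enforced with NNLS and the sum-to-one constraint is approximated by a heavily weighted extra equation.

```python
import numpy as np
from scipy.optimize import nnls

# Solve for vegetation, soil and shade fractions of one pixel with
# non-negativity and an (approximately enforced) sum-to-one constraint.
endmembers = np.array([        # rows: 3 reflective channels, cols: veg, soil, shade
    [0.05, 0.18, 0.02],
    [0.45, 0.30, 0.03],
    [0.25, 0.35, 0.02],
])
true_frac = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_frac + 0.002 * np.random.default_rng(8).standard_normal(3)

w = 100.0                                    # weight of the sum-to-one row
A = np.vstack([endmembers, w * np.ones((1, 3))])
b = np.append(pixel, w * 1.0)
frac, _ = nnls(A, b)
print("estimated fractions (veg, soil, shade):", np.round(frac, 3))
```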

  5. Accelerating transient simulation of linear reduced order models.

    Thornquist, Heidi K.; Mei, Ting; Keiter, Eric Richard; Bond, Brad

    2011-10-01

    Model order reduction (MOR) techniques have been used to facilitate the analysis of dynamical systems for many years. Although existing model reduction techniques are capable of providing huge speedups in the frequency domain analysis (i.e. AC response) of linear systems, such speedups are often not obtained when performing transient analysis on the systems, particularly when coupled with other circuit components. Reduced system size, which is the ostensible goal of MOR methods, is often insufficient to improve transient simulation speed on realistic circuit problems. It can be shown that making the correct reduced order model (ROM) implementation choices is crucial to the practical application of MOR methods. In this report we investigate methods for accelerating the simulation of circuits containing ROM blocks using the circuit simulator Xyce.

  6. Behavioral modeling of the dominant dynamics in input-output transfer of linear(ized) circuits

    Beelen, T.G.J.; Maten, ter E.J.W.; Sihaloho, H.J.; Eijndhoven, van S.J.L.

    2010-01-01

    We present a powerful procedure for determining both the dominant dynamics of the input-output transfer and the corresponding most influential circuit parameters of a linear(ized) circuit. The procedure consists of several steps in which a specific (sub)problem is solved and its solution is used in

  7. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  8. Non Linear Modelling and Control of Hydraulic Actuators

    B. Šulc

    2002-01-01

    This paper deals with non-linear modelling and control of a differential hydraulic actuator. The nonlinear state space equations are derived from basic physical laws. They are more powerful than the transfer function in the case of linear models, and they allow the application of an object-oriented approach in simulation programs. The effects of all friction forces (static, Coulomb and viscous) have been modelled, and many phenomena that are usually neglected are taken into account, e.g., the static term of friction, the leakage between the two chambers and the external space. Proportional Differential (PD) and Fuzzy Logic Controllers (FLC) have been applied in order to make a comparison by means of simulation. Simulation is performed using Matlab/Simulink, and some of the results are compared graphically. The FLC is tuned in such a way that it produces a constant control signal close to its maximum (or minimum), where possible. In the case of PD control the occurrence of peaks cannot be avoided. These peaks produce a very high velocity that oversteps the allowed values.

  9. Modeling Pan Evaporation for Kuwait by Multiple Linear Regression

    Almedeij, Jaber

    2012-01-01

    Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements of substantial continuity coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. Multiple linear regression technique is used with a procedure of variable selection for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results that are in a reasonable agreement with observation values. PMID:23226984

  10. A linear model for flow over complex terrain

    Frank, H P [Risoe National Lab., Wind Energy and Atmospheric Physics Dept., Roskilde (Denmark)

    1999-03-01

    A linear flow model similar to WAsP or LINCOM has been developed. Major differences are an isentropic temperature equation which allows internal gravity waves, and vertical advection of the shear of the mean flow. The importance of these effects is illustrated by examples. Resource maps are calculated from a distribution of geostrophic winds and stratification for Pyhaetunturi Fell in northern Finland and Acqua Spruzza in Italy. Stratification becomes important if the inverse Froude number formulated with the width of the hill becomes of order one or greater. (au) EU-JOULE-3. 16 refs.

  11. Linear-quadratic model predictions for tumor control probability

    Yaes, R.J.

    1987-01-01

    Sigmoid dose-response curves for tumor control are calculated from the linear-quadratic model parameters α and β, obtained from human epidermoid carcinoma cell lines, and are much steeper than the clinical dose-response curves for head and neck cancers. One possible explanation is the presence of small radiation-resistant clones arising from mutations in an initially homogeneous tumor. Using the mutation theory of Delbruck and Luria and of Goldie and Coldman, the authors discuss the implications of such radiation-resistant clones for clinical radiation therapy
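
    A short worked example of the linear-quadratic model with Poisson statistics, using illustrative parameter values rather than the cell-line estimates: survival after a total dose D delivered in d = 2 Gy fractions is S(D) = exp[-(α + βd)D], and the tumour control probability is TCP = exp(-N·S(D)).

```python
import numpy as np

# Illustrative LQ parameters and Poisson tumour control probability.
alpha, beta = 0.35, 0.035        # Gy^-1, Gy^-2
d = 2.0                          # dose per fraction, Gy
N = 1e8                          # initial number of clonogenic cells

for D in np.linspace(35.0, 65.0, 7):                 # total dose, Gy
    S = np.exp(-(alpha + beta * d) * D)              # LQ surviving fraction
    TCP = np.exp(-N * S)                             # Poisson control probability
    print(f"D = {D:4.0f} Gy  ->  surviving fraction = {S:.2e},  TCP = {TCP:.3f}")
```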

  12. Inventory model using bayesian dynamic linear model for demand forecasting

    Marisol Valencia-Cárdenas

    2014-12-01

    An important factor in the manufacturing process is the inventory management of finished products. Industry is constantly looking for better alternatives for establishing an adequate plan of production and stored quantities at optimal cost, obtaining quantities over a time horizon so that the resources and logistics needed to distribute products on time can be defined in advance. The total absence of historical data, which many statistical forecasting models require, demands the search for other kinds of accurate techniques. This work presents an alternative that not only permits forecasting in an adjusted way, but also provides optimal quantities to produce and store at an optimal cost, using Bayesian statistics. The proposal is illustrated with real data. Keywords: Bayesian statistics, optimization, inventory model, Bayesian dynamic linear model.

  13. On the analysis of clonogenic survival data: Statistical alternatives to the linear-quadratic model

    Unkel, Steffen; Belka, Claus; Lauber, Kirsten

    2016-01-01

    the extraction of scores of radioresistance, which displayed significant correlations with the estimated parameters of the regression models. Undoubtedly, LQ regression is a robust method for the analysis of clonogenic survival data. Nevertheless, alternative approaches including non-linear regression and multivariate techniques such as cluster analysis and principal component analysis represent versatile tools for the extraction of parameters and/or scores of the cellular response towards ionizing irradiation with a more intuitive biological interpretation. The latter are highly informative for correlation analyses with other types of data, including functional genomics data that are increasingly being generated

  14. Estimating mass of σ-meson and study on application of the linear σ-model

    Ding Yibing; Li Xin; Li Xueqian; Liu Xiang; Shen Hong; Shen Pengnian; Wang Guoli; Zeng Xiaoqiang

    2004-01-01

    Whether the σ-meson (f_0(600)) exists as a real particle is a long-standing problem in both particle physics and nuclear physics. In this work, we analyse the deuteron binding energy in the linear σ-model and, by fitting the data, we are able to determine the range of m_σ and also investigate the applicability of the linear σ-model for the interaction between hadrons in the energy region of MeVs. Our result shows that the best fit to the data of the deuteron binding energy and others advocates a narrow range for the σ-meson mass of 520 ≤ m_σ ≤ 580 MeV, and the concrete values depend on input parameters such as the couplings. Inversely, by fitting the experimental data, one can set constraints on the couplings and the other relevant phenomenological parameters in the model

  15. Phenomenology of non-minimal supersymmetric models at linear colliders

    Porto, Stefano

    2015-06-01

    The focus of this thesis is on the phenomenology of several non-minimal supersymmetric models in the context of future linear colliders (LCs). Extensions of the minimal supersymmetric Standard Model (MSSM) may accommodate the observed Higgs boson mass at about 125 GeV in a more natural way than the MSSM, with a richer phenomenology. We consider both F-term extensions of the MSSM, as for instance the non-minimal supersymmetric Standard Model (NMSSM), as well as D-term extensions arising at low energies from gauge-extended supersymmetric models. The NMSSM offers a solution to the μ-problem with an additional gauge singlet supermultiplet. The enlarged neutralino sector of the NMSSM can be accurately studied at a LC and used to distinguish the model from the MSSM. We show that the power of the polarised beams of a LC can be exploited to reconstruct the neutralino and chargino sector and eventually distinguish the NMSSM, even considering challenging scenarios that resemble the MSSM. Non-decoupling D-term extensions of the MSSM can raise the tree-level Higgs mass with respect to the MSSM. This is done through additional contributions to the Higgs quartic potential, effectively generated by an extended gauge group. We study how this can happen and we show how these additional non-decoupling D-terms affect the SM-like Higgs boson couplings to fermions and gauge bosons. We estimate how the deviations from the SM couplings can be spotted at the Large Hadron Collider (LHC) and at the International Linear Collider (ILC), showing how the ILC would be suitable for the model identification. Since our results prove that a linear collider is a fundamental machine for studying supersymmetry phenomenology at a high level of precision, we argue that a thorough comprehension of the physics at the interaction point (IP) of a LC is also needed. Therefore, we finally consider the possibility of observing intense electromagnetic field effects and nonlinear quantum electrodynamics

  16. Model-based Recursive Partitioning for Subgroup Analyses

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-01-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulated challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by...

  17. Non-Linear Slosh Damping Model Development and Validation

    Yang, H. Q.; West, Jeff

    2015-01-01

    Propellant tank slosh dynamics are typically represented by a mechanical spring-mass-damper model. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control (GN&C) analysis. For a partially-filled smooth-wall propellant tank, the critical damping based on classical empirical correlation is as low as 0.05%. Due to this low value of damping, propellant slosh is a potential source of disturbance critical to the stability of launch and space vehicles. It is postulated that the commonly quoted slosh damping is valid only in the linear regime where the slosh amplitude is small. With the increase of slosh amplitude, the critical damping value should also increase. If this nonlinearity can be verified and validated, the slosh stability margin can be significantly improved, and the level of conservatism maintained in the GN&C analysis can be lessened. The purpose of this study is to explore and quantify the dependence of slosh damping on slosh amplitude. Accurately predicting the extremely low damping value of a smooth-wall tank is very challenging for any Computational Fluid Dynamics (CFD) tool. One must resolve thin boundary layers near the wall and limit numerical damping to a minimum. This computational study demonstrates that with proper grid resolution, CFD can indeed accurately predict the low-damping physics of smooth walls in the linear regime. Comparisons of extracted damping values with experimental data for different tank sizes show very good agreement. Numerical simulations confirm that slosh damping is indeed a function of slosh amplitude. When the slosh amplitude is low, the damping ratio is essentially constant, which is consistent with the empirical correlation. Once the amplitude reaches a critical value, the damping ratio becomes a linearly increasing function of the slosh amplitude. A follow-on experiment validated the developed nonlinear damping relationship. This discovery can

  18. Non linear permanent magnets modelling with the finite element method

    Chavanne, J.; Meunier, G.; Sabonnadiere, J.C.

    1989-01-01

    In order to perform the calculation of permanent magnets with the finite element method, it is necessary to take into account the anisotropic behaviour of hard magnetic materials (ferrites, NdFeB, SmCo5). In linear cases, the permeability of permanent magnets is a tensor, fully described by the permeabilities parallel and perpendicular to the easy axis of the magnet. In non-linear cases, the model uses a texture function which represents the distribution of the local easy axes of the crystallites of the magnet. This function allows a good representation of the angular dependence of the coercive field of the magnet. As a result, it is possible to express the magnetic induction B and the tensor as functions of the field and the texture parameter. This model has been implemented in the software FLUX3D, where the tensor is used for the Newton-Raphson procedure. 3D demagnetization of a ferrite magnet by a NdFeB magnet is a suitable representative example. The results obtained for an ideally oriented ferrite magnet and for a real one, using a measured texture parameter, are analyzed

  19. Analyses of Lattice Traffic Flow Model on a Gradient Highway

    Gupta Arvind Kumar; Redhu Poonam; Sharma Sapna

    2014-01-01

    The optimal current difference lattice hydrodynamic model is extended to investigate the traffic flow dynamics on a unidirectional single-lane gradient highway. The effect of slope on an uphill/downhill highway is examined through linear stability analysis, and it is shown that the slope significantly affects the stability region on the phase diagram. Using nonlinear stability analysis, the Burgers, Korteweg-de Vries (KdV) and modified Korteweg-de Vries (mKdV) equations are derived in the stable, metastable and unstable regions, respectively. The effect of the reaction coefficient is examined, and it is concluded that it plays an important role in suppressing traffic jams on a gradient highway. The theoretical findings have been verified through numerical simulation, which confirms that the slope on a gradient highway significantly influences the traffic dynamics and that traffic jams can be suppressed efficiently by considering the optimal current difference effect in the new lattice model.

  20. Linear versus quadratic portfolio optimization model with transaction cost

    Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah

    2014-06-01

    Optimization models have become one of the decision-making tools in investment. Hence, it is always a big challenge for investors to select the best model that could fulfill their goal in investment with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocation and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models, respectively. The application of these models has been proven to be significant and popular among others. However, transaction cost has been debated as one of the important aspects that should be considered for portfolio reallocation, as portfolio return could be significantly reduced when transaction cost is taken into consideration. Therefore, recognizing the importance of considering transaction cost when calculating portfolio return, we formulate this paper by using data from Shariah compliant securities listed in Bursa Malaysia. It is expected that results from this paper will effectively justify the advantage of one model over the other and shed some light in the quest to find the best decision-making tool in investment for individual investors.
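
    For readers unfamiliar with the linear (Maximin-type) formulation, the following hedged sketch maximises the worst-case period return of a long-only portfolio with scipy.optimize.linprog; the return matrix is synthetic and the transaction-cost term discussed in the record is omitted for brevity.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        T, n = 24, 5                           # periods, assets (illustrative)
        R = rng.normal(0.01, 0.05, (T, n))     # synthetic period returns

        # Decision variables: x = [w_1, ..., w_n, z]; maximise z = worst period return.
        c = np.zeros(n + 1)
        c[-1] = -1.0                           # linprog minimises, so minimise -z

        # z <= R_t . w for every period t  ->  -R_t . w + z <= 0
        A_ub = np.hstack([-R, np.ones((T, 1))])
        b_ub = np.zeros(T)

        A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])  # weights sum to one
        b_eq = np.array([1.0])

        bounds = [(0.0, None)] * n + [(None, None)]            # long-only, z free
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        print("weights:", np.round(res.x[:n], 3), "worst-case return:", round(res.x[-1], 4))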

  1. Probabilistic model of ligaments and tendons: Quasistatic linear stretching

    Bontempi, M.

    2009-03-01

    Ligaments and tendons have a significant role in the musculoskeletal system and are frequently subjected to injury. This study presents a model of collagen fibers, based on the study of a statistical distribution of fibers when they are subjected to quasistatic linear stretching. With respect to other methodologies, this model is able to describe the behavior of the bundle using fewer ad hoc hypotheses and is able to describe all the quasistatic stretch-load responses of the bundle, including the yield and failure regions described in the literature. It has two other important results: the first is that it is able to correlate the mechanical behavior of the bundle with its internal structure, and it suggests a methodology to deduce the fiber population distribution directly from the tensile-test data. The second is that it can follow the evolution of the fiber structure during stretching, making it possible to study the internal adaptation of fibers in physiological and pathological conditions.

  2. Linear mixing model applied to coarse resolution satellite data

    Holben, Brent N.; Shimabukuro, Yosio E.

    1992-01-01

    A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System data is applied to the coarse resolution NOAA Advanced Very High Resolution Radiometer satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the constrained least squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and with the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of applying the unmixing techniques to coarse resolution data for global studies.
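
    A minimal sketch of the constrained least squares idea, assuming made-up endmember spectra and a made-up pixel reflectance: non-negative fractions that approximately sum to one are obtained with a standard augmented non-negative least-squares trick.

        import numpy as np
        from scipy.optimize import nnls

        # Columns = endmember spectra (e.g. vegetation, soil, shade) in 3 bands (assumed numbers).
        E = np.array([[0.05, 0.25, 0.02],
                      [0.45, 0.30, 0.03],
                      [0.10, 0.35, 0.04]])
        pixel = np.array([0.12, 0.31, 0.19])    # observed reflectance in the same bands

        delta = 100.0                            # weight enforcing the sum-to-one constraint
        E_aug = np.vstack([E, delta * np.ones(E.shape[1])])
        y_aug = np.append(pixel, delta)

        fractions, residual = nnls(E_aug, y_aug)
        print("endmember fractions:", np.round(fractions, 3), "sum:", round(fractions.sum(), 3))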

  3. Relating Cohesive Zone Model to Linear Elastic Fracture Mechanics

    Wang, John T.

    2010-01-01

    The conditions required for a cohesive zone model (CZM) to predict a failure load of a cracked structure similar to that obtained by a linear elastic fracture mechanics (LEFM) analysis are investigated in this paper. This study clarifies why many different phenomenological cohesive laws can produce similar fracture predictions. Analytical results for five cohesive zone models are obtained, using five different cohesive laws that have the same cohesive work rate (CWR, the area under the traction-separation curve) but different maximum tractions. The effect of the maximum traction on the predicted cohesive zone length and the remote applied load at fracture is presented. Similar to the small-scale yielding condition required for an LEFM analysis to be valid, the cohesive zone length also needs to be much smaller than the crack length. This is a necessary condition for a CZM to obtain a fracture prediction equivalent to an LEFM result.

  4. Locally supersymmetric D=3 non-linear sigma models

    Wit, B. de; Tollsten, A.K.; Nicolai, H.

    1993-01-01

    We study non-linear sigma models with N local supersymmetries in three space-time dimensions. For N=1 and 2 the target space of these models is Riemannian or Kähler, respectively. All N>2 theories are associated with Einstein spaces. For N=3 the target space is quaternionic, while for N=4 it generally decomposes into two separate quaternionic spaces, associated with inequivalent supermultiplets. For N=5, 6, 8 there is a unique (symmetric) space for any given number of supermultiplets. Beyond that there are only theories based on a single supermultiplet for N=9, 10, 12 and 16, associated with coset spaces with the exceptional isometry groups F_4(-20), E_6(-14), E_7(-5) and E_8(+8), respectively. For N=3 and N ≥ 5 the D=2 theories obtained by dimensional reduction are two-loop finite. (orig.)

  5. Synthetic Domain Theory and Models of Linear Abadi & Plotkin Logic

    Møgelberg, Rasmus Ejlers; Birkedal, Lars; Rosolini, Guiseppe

    2008-01-01

    Plotkin suggested using a polymorphic dual intuitionistic/linear type theory (PILLY) as a metalanguage for parametric polymorphism and recursion. In recent work the first two authors and R.L. Petersen have defined a notion of parametric LAPL-structure, which are models of PILLY, in which one can reason using parametricity and, for example, solve a large class of domain equations, as suggested by Plotkin. In this paper, we show how an interpretation of a strict version of Bierman, Pitts and Russo's language Lily into synthetic domain theory presented by Simpson and Rosolini gives rise to a parametric LAPL-structure. This adds to the evidence that the notion of LAPL-structure is a general notion, suitable for treating many different parametric models, and it provides formal proofs of consequences of parametricity expected to hold for the interpretation. Finally, we show how these results...

  6. Methodology and Applications in Non-linear Model-based Geostatistics

    Christensen, Ole Fredslund

    that are approximately Gaussian. Parameter estimation and prediction for the transformed Gaussian model is studied. In some cases a transformation cannot possibly render the data Gaussian. A methodology for analysing such data was introduced by Diggle, Tawn and Moyeed (1998): the generalised linear spatial model... priors for Bayesian inference is discussed. Procedures for parameter estimation and prediction are studied. Theoretical properties of Markov chain Monte Carlo algorithms are investigated, and different algorithms are compared. In addition, the thesis contains a manual for an R-package, geoRglmm, which...

  7. Modeling hard clinical end-point data in economic analyses.

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are

  8. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the multiplication of a vector by a matrix is reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
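
    As a generic illustration of preconditioned conjugate gradient iteration, the sketch below is a textbook Jacobi-preconditioned loop on a small symmetric positive-definite system; it is not the record's three-step iteration-on-data scheme, which avoids forming the coefficient matrix explicitly.

        import numpy as np

        def pcg(A, b, tol=1e-10, max_iter=1000):
            M_inv = 1.0 / np.diag(A)            # Jacobi preconditioner
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD example
        b = np.array([1.0, 2.0])
        print(pcg(A, b))                          # ~ [0.0909, 0.6364]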

  9. Computational Modelling and Optimal Control of Ebola Virus Disease with non-Linear Incidence Rate

    Takaidza, I.; Makinde, O. D.; Okosun, O. K.

    2017-03-01

    The 2014 Ebola outbreak in West Africa has exposed the need to connect modellers and those with relevant data as pivotal to better understanding of how the disease spreads and quantifying the effects of possible interventions. In this paper, we model and analyse the Ebola virus disease with a non-linear incidence rate. The epidemic model created is used to describe how the Ebola virus could potentially evolve in a population. We perform an uncertainty analysis of the basic reproductive number R0 to quantify its sensitivity to other disease-related parameters. We also analyse the sensitivity of the final epidemic size to the time-dependent control interventions (education, vaccination, quarantine and safe handling) and provide the cost-effective combination of the interventions.
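
    A hedged sketch of an SIR-type system with a saturated (non-linear) incidence rate beta*S*I/(1 + alpha*I); the compartment structure and parameter values are illustrative assumptions, not the paper's model.

        from scipy.integrate import solve_ivp

        beta, alpha, gamma = 0.3, 0.1, 0.1      # transmission, saturation, recovery (assumed)

        def rhs(t, y):
            S, I, R = y
            incidence = beta * S * I / (1.0 + alpha * I)   # non-linear incidence term
            return [-incidence, incidence - gamma * I, gamma * I]

        sol = solve_ivp(rhs, (0, 160), [0.99, 0.01, 0.0])
        print("final epidemic size (fraction recovered):", round(sol.y[2, -1], 3))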

  10. Direction of Effects in Multiple Linear Regression Models.

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
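
    The core idea can be illustrated with a toy simulation: compare the skewness (third central moment) of residuals from the two competing regressions y on x and x on y. This sketch is only illustrative and does not implement the paper's inference procedures (D'Agostino, skewness difference, or bootstrap tests).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        x = rng.exponential(1.0, 5000)           # skewed "true" predictor
        y = 0.5 * x + rng.normal(0.0, 1.0, 5000)

        def residual_skew(outcome, predictor):
            slope, intercept = np.polyfit(predictor, outcome, 1)
            residuals = outcome - (intercept + slope * predictor)
            return stats.skew(residuals)

        print("skew of residuals, y on x:", round(residual_skew(y, x), 3))   # near zero
        print("skew of residuals, x on y:", round(residual_skew(x, y), 3))   # clearly non-zero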

  11. Linear model applied to the evaluation of pharmaceutical stability data

    Renato Cesar Souza

    2013-09-01

    Full Text Available The expiry date on the packaging of a product gives the consumer confidence that the product will retain its identity, content, quality and purity throughout its period of validity. In the pharmaceutical industry, this term is defined on the basis of stability data obtained during product registration. Accordingly, this work aims to apply linear regression, following the guideline ICH Q1E (2003), to evaluate some aspects of a product undergoing registration in Brazil. For this purpose, the evaluation was carried out with the development center of a multinational company in Brazil, using samples of three different batches containing two active ingredients in two different packages. Based on the preliminary results obtained, it was possible to observe the different degradation tendencies of the product in the two packages and the relationship between the variables studied, adding knowledge so that new linear models can be applied and developed for other products.

  12. Fourth standard model family neutrino at future linear colliders

    Ciftci, A.K.; Ciftci, R.; Sultansoy, S.

    2005-01-01

    It is known that flavor democracy favors the existence of the fourth standard model (SM) family. In order to give nonzero masses to the first three-family fermions, flavor democracy has to be slightly broken. A parametrization for democracy breaking, which gives the correct values for the fundamental fermion masses and, at the same time, predicts quark and lepton Cabibbo-Kobayashi-Maskawa (CKM) matrices in good agreement with the experimental data, is proposed. The pair production of the fourth SM family Dirac (ν4) and Majorana (N1) neutrinos at future linear colliders with √s = 500 GeV, 1 TeV, and 3 TeV is considered. The cross section for the process e+e- → ν4ν4 (N1N1) and the branching ratios for possible decay modes of both neutrinos are determined. The decays of the fourth family neutrinos into muon channels (ν4(N1) → μ±W∓) provide the cleanest signature at e+e- colliders. Meanwhile, in our parametrization this channel is dominant. W bosons produced in decays of the fourth family neutrinos will be seen in the detector as either di-jets or isolated leptons. As an example, we consider the production of 200 GeV mass fourth family neutrinos at √s = 500 GeV linear colliders, taking into account di-muon plus four-jet events as signatures

  13. Influence of the void fraction in the linear reactivity model

    Castillo, J.A.; Ramirez, J.R.; Alonso, G.

    2003-01-01

    The linear reactivity model allows multicycle analyses of pressurized water reactors to be carried out in a simple and quick way. In boiling water reactors, however, the void fraction varies axially, from 0% of voids at the bottom of the fuel assemblies to approximately 70% of voids at their exit. It is therefore very important to determine the average void fraction during different stages of the reactor operation in order to predict the burnup of the assemblies appropriately through the slope of the linear reactivity model. In this work, the power profile is followed for different burnup steps of a typical operating cycle of a boiling water reactor. From these profiles, an algorithm is built that determines the void profile and, in this way, its average value. The results are compared against those reported by the CM-PRESTO code, which uses another method to carry out this calculation. Finally, the range in which the average void fraction lies during a typical cycle is determined, and an estimate is made of the impact that using this value would have on the prediction of the reactivity produced by the fuel assemblies. (Author)
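
    A minimal sketch of the axial averaging step, assuming a made-up void profile rising from 0% at the inlet to about 70% at the exit; the average void fraction is obtained by trapezoidal integration over the normalised height.

        import numpy as np

        z = np.linspace(0.0, 1.0, 25)                 # normalised axial position
        void = 0.70 * (1.0 - np.exp(-4.0 * z))        # illustrative axial void profile
        average_void = np.trapz(void, z) / (z[-1] - z[0])
        print("average void fraction:", round(average_void, 3))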

  14. Modified Hyperspheres Algorithm to Trace Homotopy Curves of Nonlinear Circuits Composed by Piecewise Linear Modelled Devices

    H. Vazquez-Leal

    2014-01-01

    Full Text Available We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves in order to replace the multidimensional interpolation and fine-tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight-line equation.

  15. Vector generalized linear and additive models with an implementation in R

    Yee, Thomas W

    2015-01-01

    This book presents a statistical framework that expands generalized linear models (GLMs) for regression modelling. The framework shared in this book allows analyses based on many semi-traditional applied statistics models to be performed as a coherent whole. This is possible through the approximately half-a-dozen major classes of statistical models included in the book and the software infrastructure component, which makes the models easily operable.    The book’s methodology and accompanying software (the extensive VGAM R package) are directed at these limitations, and this is the first time the methodology and software are covered comprehensively in one volume. Since their advent in 1972, GLMs have unified important distributions under a single umbrella with enormous implications. The demands of practical data analysis, however, require a flexibility that GLMs do not have. Data-driven GLMs, in the form of generalized additive models (GAMs), are also largely confined to the exponential family. This book ...

  16. Characteristics and Properties of a Simple Linear Regression Model

    Kowal Robert

    2016-12-01

    Full Text Available A simple linear regression model is one of the pillars of classic econometrics. Despite the passage of time, it continues to raise interest both from the theoretical side and from the application side. One of the many fundamental questions in the model concerns determining derivative characteristics and studying the properties existing in their scope; this paper refers to the first of these aspects. The literature of the subject provides several classic solutions in that regard. In the paper, a completely new design is proposed, based on the direct application of variance and its properties, resulting from the non-correlation of certain estimators with the mean, within the scope of which some fundamental dependencies of the model characteristics are obtained in a much more compact manner. The apparatus allows for a simple and uniform demonstration of multiple dependencies and fundamental properties in the model, and it does so in an intuitive manner. The results were obtained in a classic, traditional area, where everything, as it might seem, has already been thoroughly studied and discovered.
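
    For reference, the standard textbook characteristics of the simple linear regression model y_i = b0 + b1*x_i + e_i can be written compactly in variance/covariance form (these are classical identities, not the new derivation proposed in the paper):

        % OLS estimators and fit quality in variance/covariance form
        \[
          \hat{\beta}_1 = \frac{\operatorname{Cov}(x,y)}{\operatorname{Var}(x)}, \qquad
          \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}, \qquad
          R^2 = \frac{\operatorname{Var}(\hat{y})}{\operatorname{Var}(y)}
              = 1 - \frac{\operatorname{Var}(e)}{\operatorname{Var}(y)} .
        \]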

  17. A simple non-linear model of immune response

    Gutnikov, Sergei; Melnikov, Yuri

    2003-01-01

    It is still unknown why the adaptive immune response in the natural immune system, based on clonal proliferation of lymphocytes, requires interaction of at least two different cell types with the same antigen. We present a simple mathematical model illustrating that a system with separate types of cells for antigen recognition and pathogen destruction provides more robust adaptive immunity than a system where just one cell type is responsible for both recognition and destruction. The model is over-simplified, as we did not have the intention of describing the natural immune system. However, our model provides a tool for testing the proposed approach through qualitative analysis of the immune system dynamics in order to construct more sophisticated models of the immune systems that exist in living nature. It also opens a possibility to explore specific features of highly non-linear dynamics in nature-inspired computational paradigms like artificial immune systems and immunocomputing. We expect this paper to be of interest not only for mathematicians but also for biologists; therefore we have made an effort to explain the mathematics in sufficient detail for readers without a professional mathematical background

  18. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  19. Individual and Collective Analyses of the Genesis of Student Reasoning Regarding the Invertible Matrix Theorem in Linear Algebra

    Wawro, Megan Jean

    2011-01-01

    In this study, I considered the development of mathematical meaning related to the Invertible Matrix Theorem (IMT) for both a classroom community and an individual student over time. In this particular linear algebra course, the IMT was a core theorem in that it connected many concepts fundamental to linear algebra through the notion of…

  20. DISTING: A web application for fast algorithmic computation of alternative indistinguishable linear compartmental models.

    Davidson, Natalie R; Godfrey, Keith R; Alquaddoomi, Faisal; Nola, David; DiStefano, Joseph J

    2017-05-01

    We describe and illustrate use of DISTING, a novel web application for computing alternative structurally identifiable linear compartmental models that are input-output indistinguishable from a postulated linear compartmental model. Several computer packages are available for analysing the structural identifiability of such models, but DISTING is the first to be made available for assessing indistinguishability. The computational algorithms embedded in DISTING are based on advanced versions of established geometric and algebraic properties of linear compartmental models, embedded in a user-friendly graphic model user interface. Novel computational tools greatly speed up the overall procedure. These include algorithms for Jacobian matrix reduction, submatrix rank reduction, and parallelization of candidate rank computations in symbolic matrix analysis. The application of DISTING to three postulated models with respectively two, three and four compartments is given. The 2-compartment example is used to illustrate the indistinguishability problem; the original (unidentifiable) model is found to have two structurally identifiable models that are indistinguishable from it. The 3-compartment example has three structurally identifiable indistinguishable models. It is found from DISTING that the four-compartment example has five structurally identifiable models indistinguishable from the original postulated model. This example shows that care is needed when dealing with models that have two or more compartments which are neither perturbed nor observed, because the numbering of these compartments may be arbitrary. DISTING is universally and freely available via the Internet. It is easy to use and circumvents tedious and complicated algebraic analysis previously done by hand. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. A Non-linear Stochastic Model for an Office Building with Air Infiltration

    Thavlov, Anders; Madsen, Henrik

    2015-01-01

    This paper presents a non-linear heat dynamic model for a multi-room office building with air infiltration. Several linear and non-linear models, with and without air infiltration, are investigated and compared. The models are formulated using stochastic differential equations and the model...
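
    A hedged single-room sketch of the kind of stochastic heat-dynamics model described, simulated with an Euler-Maruyama scheme for dT = ((Ta - T)/(R*C) + Ph/C) dt + sigma dW; all parameters are illustrative assumptions, and the multi-room structure and air-infiltration term of the paper are not modelled.

        import numpy as np

        rng = np.random.default_rng(6)
        R, C, sigma = 5.0, 2.0, 0.05            # thermal resistance, capacity, noise (assumed)
        dt, n_steps = 0.1, 1000
        T_ambient, P_heat = 5.0, 3.0            # ambient temperature and heater power (assumed)

        T = np.empty(n_steps)
        T[0] = 20.0
        for k in range(n_steps - 1):
            drift = (T_ambient - T[k]) / (R * C) + P_heat / C
            T[k + 1] = T[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

        print("steady-state mean ~", round(T[-200:].mean(), 2), "(expected ~", T_ambient + R * P_heat, ")")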

  2. Distributing Correlation Coefficients of Linear Structure-Activity/Property Models

    Sorana D. BOLBOACA

    2011-12-01

    Full Text Available Quantitative structure-activity/property relationships are mathematical relationships linking chemical structure and activity/property in a quantitative manner. These in silico approaches are frequently used to reduce animal testing and risk-assessment, as well as to increase time- and cost-effectiveness in the characterization and identification of active compounds. The aim of our study was to investigate the distribution pattern of the correlation coefficients associated with simple linear relationships linking the compounds' structure with their activities. A set of the most common ordnance compounds found at naval facilities, with a limited data set of toxicities on the aquatic ecosystem and a set of seven properties, was studied. Statistically significant models were selected and investigated. The probability density function of the correlation coefficients was investigated using a series of possible continuous distribution laws. Almost 48% of the correlation coefficients proved to fit the Beta distribution, 40% the Generalized Pareto distribution, and 12% the Pert distribution.

  3. Modeling and analysis of linearized wheel-rail contact dynamics

    Soomro, Z.

    2014-01-01

    The dynamics of railway vehicles are nonlinear and depend upon several factors, including vehicle speed, normal load and adhesion level. The presence of contaminants on the railway track makes them unpredictable too. Therefore, in order to develop an effective control strategy it is important to analyze the effect of each factor on the dynamic response thoroughly. In this paper a linearized model of a railway wheel-set is developed and is later analyzed by varying the speed and adhesion level while keeping the normal load constant. A wheel-set is the wheel-axle assembly of a railroad car. Patch contact is the study of the deformation of solids that touch each other at one or more points. (author)

  4. Human visual modeling and image deconvolution by linear filtering

    Larminat, P. de; Barba, D.; Gerber, R.; Ronsin, J.

    1978-01-01

    The problem is the numerical restoration of images degraded by passing through a known and spatially invariant linear system and by the addition of stationary noise. We propose an improvement of the Wiener filter to allow the restoration of such images. This improvement reduces the important drawbacks of the classical Wiener filter: the voluminous data processing and the lack of consideration of the characteristics of vision, which condition the observer's perception of the restored image. In the first paragraph, we describe the structure of the visual detection system and a modelling method for this system. In the second paragraph we explain a restoration method by Wiener filtering that takes the visual properties into account and that can be adapted to the local properties of the image. Then the results obtained on TV images and scintigrams (images obtained by a gamma camera) are commented upon [fr]
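
    A minimal sketch of classical Wiener deconvolution in the frequency domain for a 1-D signal blurred by a known kernel plus noise; the visual-model weighting and local adaptation proposed in the record are not included, and the noise-to-signal ratio is an assumed constant.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 256
        signal = np.zeros(n); signal[100:140] = 1.0          # simple test signal
        kernel = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
        kernel /= kernel.sum()

        H = np.fft.fft(np.fft.ifftshift(kernel))             # transfer function of the blur
        blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))
        observed = blurred + rng.normal(0.0, 0.01, n)

        nsr = 1e-3                                            # assumed noise-to-signal power ratio
        wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)          # classical Wiener filter
        restored = np.real(np.fft.ifft(np.fft.fft(observed) * wiener))
        print("rms error blurred: ", round(np.sqrt(np.mean((observed - signal) ** 2)), 4))
        print("rms error restored:", round(np.sqrt(np.mean((restored - signal) ** 2)), 4))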

  5. Convergence diagnostics for Eigenvalue problems with linear regression model

    Shi, Bo; Petrovic, Bojan

    2011-01-01

    Although the Monte Carlo method has been extensively used for criticality/eigenvalue problems, a reliable, robust, and efficient convergence diagnostics method is still desired. Most methods are based on integral parameters (multiplication factor, entropy) and either condense the local distribution information into a single value (e.g., entropy) or even disregard it. We propose to employ the detailed cycle-by-cycle local flux evolution, obtained by using the mesh tally mechanism, to assess the source and flux convergence. By applying a linear regression model to each individual mesh in a mesh tally for convergence diagnostics, a global convergence criterion can be obtained. We exemplify this method on two problems and obtain promising diagnostics results. (author)
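
    The per-mesh idea can be sketched as follows: fit a straight line to the cycle-by-cycle tally of each mesh cell and flag cells whose fitted slope is large relative to its standard error. The tally data and the acceptance criterion below are illustrative assumptions, not the authors' criterion.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        cycles = np.arange(50)
        # Simulated tallies for 4 mesh cells: two converged (flat), two still drifting.
        tallies = np.array([1.0 + 0.01 * rng.standard_normal(50),
                            2.0 + 0.01 * rng.standard_normal(50),
                            1.5 + 0.004 * cycles + 0.01 * rng.standard_normal(50),
                            0.8 - 0.003 * cycles + 0.01 * rng.standard_normal(50)])

        for i, y in enumerate(tallies):
            fit = stats.linregress(cycles, y)
            drifting = abs(fit.slope) > 2.0 * fit.stderr      # crude per-cell criterion
            print(f"cell {i}: slope = {fit.slope:+.4f} +/- {fit.stderr:.4f}  drifting = {drifting}")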

  6. A Dynamic Linear Modeling Approach to Public Policy Change

    Loftis, Matthew; Mortensen, Peter Bjerre

    2017-01-01

    Theories of public policy change, despite their differences, converge on one point of strong agreement. The relationship between policy and its causes can and does change over time. This consensus yields numerous empirical implications, but our standard analytical tools are inadequate for testing them. As a result, the dynamic and transformative relationships predicted by policy theories have been left largely unexplored in time-series analysis of public policy. This paper introduces dynamic linear modeling (DLM) as a useful statistical tool for exploring time-varying relationships in public policy. The paper offers a detailed exposition of the DLM approach and illustrates its usefulness with a time series analysis of U.S. defense policy from 1957-2010. The results point the way for a new attention to dynamics in the policy process and the paper concludes with a discussion of how...

  7. Baryon and meson phenomenology in the extended Linear Sigma Model

    Giacosa, Francesco; Habersetzer, Anja; Teilab, Khaled; Eshraim, Walaa; Divotgey, Florian; Olbrich, Lisa; Gallas, Susanna; Wolkanowski, Thomas; Janowski, Stanislaus; Heinz, Achim; Deinet, Werner; Rischke, Dirk H. [Institute for Theoretical Physics, J. W. Goethe University, Max-von-Laue-Str. 1, 60438 Frankfurt am Main (Germany); Kovacs, Peter; Wolf, Gyuri [Institute for Particle and Nuclear Physics, Wigner Research Center for Physics, Hungarian Academy of Sciences, H-1525 Budapest (Hungary); Parganlija, Denis [Institute for Theoretical Physics, Vienna University of Technology, Wiedner Hauptstr. 8-10, A-1040 Vienna (Austria)

    2014-07-01

    The vacuum phenomenology obtained within the so-called extended Linear Sigma Model (eLSM) is presented. The eLSM Lagrangian is constructed by including from the very beginning vector and axial-vector d.o.f., and by requiring dilatation invariance and chiral symmetry. After a general introduction of the approach, particular attention is devoted to the latest results. In the mesonic sector the strong decays of the scalar and the pseudoscalar glueballs, the weak decays of the tau lepton into vector and axial-vector mesons, and the description of masses and decays of charmed mesons are shown. In the baryonic sector the omega production in proton-proton scattering and the inclusion of baryons with strangeness are described.

  8. A 1024 channel analyser of model FH 465

    Tang Cunxun

    1988-01-01

    The FH 465 is a renewed version of the model FH 451 1024-channel analyser. Besides the simple operation and fine display featured by the earlier model, the core memory has been replaced by semiconductor memory and the level of integration has been improved. The use of the low-power 74LS devices widely used around the world has not only greatly decreased the cost, but also allows data to be easily exchanged with Apple-II, Great Wall-0520-CH or IBM-PC/XT microcomputers. The operating principle, main specifications and test results are described

  9. Non Abelian T-duality in Gauged Linear Sigma Models

    Bizet, Nana Cabo; Martínez-Merino, Aldo; Zayas, Leopoldo A. Pando; Santos-Silva, Roberto

    2018-04-01

    Abelian T-duality in Gauged Linear Sigma Models (GLSM) forms the basis of the physical understanding of Mirror Symmetry as presented by Hori and Vafa. We consider an alternative formulation of Abelian T-duality on GLSMs as a gauging of a global U(1) symmetry with the addition of appropriate Lagrange multipliers. For GLSMs with Abelian gauge groups and without superpotential we reproduce the dual models introduced by Hori and Vafa. We extend the construction to formulate non-Abelian T-duality on GLSMs with global non-Abelian symmetries. The equations of motion that lead to the dual model are obtained for a general group; they depend in general on semi-chiral superfields, while for cases such as SU(2) they depend on twisted chiral superfields. We solve the equations of motion for an SU(2) gauged group with a choice of a particular Lie algebra direction of the vector superfield. This direction covers a non-Abelian sector that can be described by a family of Abelian dualities. The dual model Lagrangian depends on twisted chiral superfields and a twisted superpotential is generated. We explore some non-perturbative aspects by making an Ansatz for the instanton corrections in the dual theories. We verify that the effective potential for the U(1) field strength in a fixed configuration on the original theory matches that of the dual theory. Imposing restrictions on the vector superfield, more general non-Abelian dual models are obtained. We analyze the dual models via the geometry of their susy vacua.

  10. A comparison of linear interpolation models for iterative CT reconstruction.

    Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric

    2016-12-01

    Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but the significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects

  11. Applications of one-dimensional models in simplified inelastic analyses

    Kamal, S.A.; Chern, J.M.; Pai, D.H.

    1980-01-01

    This paper presents an approximate inelastic analysis based on geometric simplification with emphasis on its applicability, modeling, and the method of defining the loading conditions. Two problems are investigated: a one-dimensional axisymmetric model of generalized plane strain thick-walled cylinder is applied to the primary sodium inlet nozzle of the Clinch River Breeder Reactor Intermediate Heat Exchanger (CRBRP-IHX), and a finite cylindrical shell is used to simulate the branch shell forging (Y) junction. The results are then compared with the available detailed inelastic analyses under cyclic loading conditions in terms of creep and fatigue damages and inelastic ratchetting strains per the ASME Code Case N-47 requirements. In both problems, the one-dimensional simulation is able to trace the detailed stress-strain response. The quantitative comparison is good for the nozzle, but less satisfactory for the Y junction. Refinements are suggested to further improve the simulation

  12. Capturing spike variability in noisy Izhikevich neurons using point process generalized linear models

    Østergaard, Jacob; Kramer, Mark A.; Eden, Uri T.

    2018-01-01

    current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured. ... are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input...
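
    A hedged sketch of a point-process GLM with multiplicative spike-history influences: a Poisson regression of a binned spike train on a stimulus covariate and lagged spike counts. The spike train is simulated from a simple rate model rather than an Izhikevich neuron, and all coefficients are illustrative.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        T, lags = 20000, 3
        stimulus = rng.normal(0.0, 1.0, T)

        # Simulate spikes with a soft refractory effect through negative history weights (assumed).
        spikes = np.zeros(T, dtype=int)
        hist_w = np.array([-1.0, -0.5, -0.25])
        for t in range(T):
            h = sum(hist_w[k] * spikes[t - 1 - k] for k in range(lags) if t - 1 - k >= 0)
            rate = np.exp(-2.5 + 0.8 * stimulus[t] + h)
            spikes[t] = rng.poisson(rate)

        # Design matrix: intercept, stimulus, and the three lagged spike counts.
        X = np.column_stack([stimulus] + [np.roll(spikes, k + 1) for k in range(lags)])
        X[:lags, 1:] = 0                                   # zero out wrapped-around history
        X = sm.add_constant(X)
        glm = sm.GLM(spikes, X, family=sm.families.Poisson()).fit()
        print(glm.params)    # roughly [-2.5, 0.8, -1.0, -0.5, -0.25] up to estimation noise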

  13. A generalized linear factor model approach to the hierarchical framework for responses and response times.

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-05-01

    We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it enables all well-developed modelling tools and extensions that come with these methods. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.

  14. A non-equilibrium neutral model for analysing cultural change.

    Kandler, Anne; Shennan, Stephen

    2013-08-07

    Neutral evolution is a frequently used model to analyse changes in frequencies of cultural variants over time. Variants are chosen to be copied according to their relative frequency and new variants are introduced by a process of random mutation. Here we present a non-equilibrium neutral model which accounts for temporally varying population sizes and mutation rates and makes it possible to analyse the cultural system under consideration at any point in time. This framework gives an indication whether observed changes in the frequency distributions of a set of cultural variants between two time points are consistent with the random copying hypothesis. We find that the likelihood of the existence of the observed assemblage at the end of the considered time period (expressed by the probability of the observed number of cultural variants present in the population during the whole period under neutral evolution) is a powerful indicator of departures from neutrality. Further, we study the effects of frequency-dependent selection on the evolutionary trajectories and present a case study of change in the decoration of pottery in early Neolithic Central Europe. Based on the framework developed we show that neutral evolution is not an adequate description of the observed changes in frequency. Copyright © 2013 Elsevier Ltd. All rights reserved.
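
    A minimal sketch of a non-equilibrium neutral (random copying) model: each generation, individuals copy a variant in proportion to its current frequency and innovate a new variant with probability mu(t), while the population size changes over time. The trajectories of population size and mutation rate are illustrative assumptions, not the paper's case study.

        import numpy as np

        rng = np.random.default_rng(5)

        def neutral_step(population, n_next, mu, next_label):
            """One generation of random copying with innovation."""
            copied = rng.choice(population, size=n_next, replace=True)
            innovate = rng.random(n_next) < mu
            new_labels = np.arange(next_label, next_label + innovate.sum())
            copied[innovate] = new_labels
            return copied, next_label + innovate.sum()

        pop = np.zeros(200, dtype=int)                  # start with a single variant
        label = 1
        sizes = np.linspace(200, 600, 100).astype(int)  # growing population (assumed)
        mus = np.linspace(0.02, 0.005, 100)             # declining innovation rate (assumed)
        for n_next, mu in zip(sizes, mus):
            pop, label = neutral_step(pop, n_next, mu, label)

        variants, counts = np.unique(pop, return_counts=True)
        print("variants present:", len(variants), "most common share:", round(counts.max() / len(pop), 3))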

  15. Linear models for sound from supersonic reacting mixing layers

    Chary, P. Shivakanth; Samanta, Arnab

    2016-12-01

    We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how these radiate to the far field is uncertain, and this is our focus. Keeping the flow compressibility fixed, the outer modes are realized by biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, similar to what happens in nonlinear calculations, achieved here via solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with lesser spreading of the mixing layer, when compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture, which is shown to yield a pronounced effect on the slow mode radiation by reducing its modal growth.

  16. Linear programming model can explain respiration of fermentation products

    Möller, Philip; Liu, Xiaochen; Schuster, Stefan

    2018-01-01

    Many differentiated cells rely primarily on mitochondrial oxidative phosphorylation for generating energy in the form of ATP needed for cellular metabolism. In contrast, most tumor cells instead rely on aerobic glycolysis leading to lactate to about the same extent as on respiration. Warburg found that cancer cells, even in the presence of sufficient oxygen to support oxidative phosphorylation, tend to ferment glucose or other energy sources into lactate, which is an inefficient way to generate ATP. This effect also occurs in striated muscle cells, activated lymphocytes and microglia, endothelial cells and several mammalian cell types, a phenomenon termed the “Warburg effect”. The effect is paradoxical at first glance because the ATP production rate of aerobic glycolysis is much slower than that of respiration, and the energy demands would be better met by pure oxidative phosphorylation. We tackle this question by building a minimal model including three combined reactions. The new aspect in extension to earlier models is that we take into account the possible uptake and oxidation of the fermentation products. We examine the case where the cell can allocate protein to several enzymes in a varying distribution and model this by a linear programming problem in which the objective is to maximize the ATP production rate under different combinations of constraints on enzymes. Depending on the cost of reactions and the limitation of the substrates, this leads to pure respiration, pure fermentation, or a mixture of respiration and fermentation. The model predicts that fermentation products are only oxidized when glucose is scarce or its uptake is severely limited. PMID:29415045
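
    A hedged toy version of the linear programming idea: allocate a fixed protein budget over fermentation and respiration fluxes to maximise the ATP production rate under a glucose-uptake limit. Yields, enzyme costs and limits are rough illustrative numbers, not the paper's parameters; with the numbers below the optimum is a mixture of respiration and fermentation.

        import numpy as np
        from scipy.optimize import linprog

        atp_yield = np.array([2.0, 30.0])       # ATP per glucose: fermentation, respiration
        enzyme_cost = np.array([1.0, 20.0])     # protein needed per unit flux (assumed)
        protein_budget = 10.0
        glucose_uptake_max = 5.0

        c = -atp_yield                          # linprog minimises, so negate for maximisation
        A_ub = np.vstack([enzyme_cost,          # protein (enzyme allocation) constraint
                          np.ones(2)])          # glucose uptake constraint
        b_ub = np.array([protein_budget, glucose_uptake_max])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        ferm, resp = res.x
        print(f"fermentation flux {ferm:.2f}, respiration flux {resp:.2f}, ATP rate {atp_yield @ res.x:.1f}")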

  17. Linear programming model can explain respiration of fermentation products.

    Möller, Philip; Liu, Xiaochen; Schuster, Stefan; Boley, Daniel

    2018-01-01

    Many differentiated cells rely primarily on mitochondrial oxidative phosphorylation for generating energy in the form of ATP needed for cellular metabolism. In contrast, most tumor cells instead rely on aerobic glycolysis leading to lactate to about the same extent as on respiration. Warburg found that cancer cells, even in the presence of sufficient oxygen to support oxidative phosphorylation, tend to ferment glucose or other energy sources into lactate, which is an inefficient way to generate ATP. This effect also occurs in striated muscle cells, activated lymphocytes and microglia, endothelial cells and several mammalian cell types, a phenomenon termed the "Warburg effect". The effect is paradoxical at first glance because the ATP production rate of aerobic glycolysis is much slower than that of respiration, and the energy demands would be better met by pure oxidative phosphorylation. We tackle this question by building a minimal model including three combined reactions. The new aspect in extension to earlier models is that we take into account the possible uptake and oxidation of the fermentation products. We examine the case where the cell can allocate protein to several enzymes in a varying distribution and model this by a linear programming problem in which the objective is to maximize the ATP production rate under different combinations of constraints on enzymes. Depending on the cost of reactions and the limitation of the substrates, this leads to pure respiration, pure fermentation, or a mixture of respiration and fermentation. The model predicts that fermentation products are only oxidized when glucose is scarce or its uptake is severely limited.

  18. Transport coefficients from SU(3) Polyakov linear-σ model

    Tawfik, A.; Diab, A.

    2015-01-01

    In the mean field approximation, the grand potential of the SU(3) Polyakov linear-σ model (PLSM) is analyzed for the order parameters of the light and strange chiral phase-transitions, σ_l and σ_s, respectively, and for the deconfinement order parameters φ and φ*. Furthermore, the subtracted condensate Δ_{l,s} and the chiral order parameters M_b are compared with lattice QCD calculations. By using the dynamical quasiparticle model (DQPM), which can be considered as a system of noninteracting massive quasiparticles, we have evaluated the decay width and the relaxation time of quarks and gluons. In the framework of the LSM and with Polyakov loop corrections included, the interaction measure Δ/T^4, the specific heat c_v and the speed of sound squared c_s^2 have been determined, as well as the temperature dependence of the normalized quark number density n_q/T^3 and the quark number susceptibilities χ_q/T^2 at various values of the baryon chemical potential. The electric and heat conductivities, σ_e and κ, and the bulk and shear viscosities normalized to the thermal entropy, ζ/s and η/s, are compared with available results of lattice QCD calculations.

  19. Generalized Functional Linear Models With Semiparametric Single-Index Interactions

    Li, Yehua

    2010-06-01

    We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.

  20. Sparse linear models: Variational approximate inference and Bayesian experimental design

    Seeger, Matthias W

    2009-01-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been given to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  1. Sparse linear models: Variational approximate inference and Bayesian experimental design

    Seeger, Matthias W [Saarland University and Max Planck Institute for Informatics, Campus E1.4, 66123 Saarbruecken (Germany)

    2009-12-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been given to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  2. Generalized Functional Linear Models With Semiparametric Single-Index Interactions

    Li, Yehua; Wang, Naisyin; Carroll, Raymond J.

    2010-01-01

    We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.

  3. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  4. Stochastic linear hybrid systems: Modeling, estimation, and application

    Seah, Chze Eng

    Hybrid systems are dynamical systems which have interacting continuous state and discrete state (or mode). Accurate modeling and state estimation of hybrid systems are important in many applications. We propose a hybrid system model, known as the Stochastic Linear Hybrid System (SLHS), to describe hybrid systems with stochastic linear system dynamics in each mode and stochastic continuous-state-dependent mode transitions. We then develop a hybrid estimation algorithm, called the State-Dependent-Transition Hybrid Estimation (SDTHE) algorithm, to estimate the continuous state and discrete state of the SLHS from noisy measurements. It is shown that the SDTHE algorithm is more accurate or more computationally efficient than existing hybrid estimation algorithms. Next, we develop a performance analysis algorithm to evaluate the performance of the SDTHE algorithm in a given operating scenario. We also investigate sufficient conditions for the stability of the SDTHE algorithm. The proposed SLHS model and SDTHE algorithm are illustrated to be useful in several applications. In Air Traffic Control (ATC), to facilitate implementations of new efficient operational concepts, accurate modeling and estimation of aircraft trajectories are needed. In ATC, an aircraft's trajectory can be divided into a number of flight modes. Furthermore, as the aircraft is required to follow a given flight plan or clearance, its flight mode transitions are dependent of its continuous state. However, the flight mode transitions are also stochastic due to navigation uncertainties or unknown pilot intents. Thus, we develop an aircraft dynamics model in ATC based on the SLHS. The SDTHE algorithm is then used in aircraft tracking applications to estimate the positions/velocities of aircraft and their flight modes accurately. Next, we develop an aircraft conformance monitoring algorithm to detect any deviations of aircraft trajectories in ATC that might compromise safety. In this application, the SLHS

  5. Identification of an Equivalent Linear Model for a Non-Linear Time-Variant RC-Structure

    Kirkegaard, Poul Henning; Andersen, P.; Brincker, Rune

This paper considers estimation of the maximum softening for a RC-structure subjected to earthquake excitation. The so-called Maximum Softening damage indicator relates the global damage state of the RC-structure to the relative decrease of the fundamental eigenfrequency in an equivalent linear ... are investigated and compared with ARMAX models used on a running window. The techniques are evaluated using simulated data generated by the non-linear finite element program SARCOF modeling a 10-storey 3-bay concrete structure subjected to amplitude modulated Gaussian white noise filtered through a Kanai ...

  6. Second-order kinetic model for the sorption of cadmium onto tree fern: a comparison of linear and non-linear methods.

    Ho, Yuh-Shan

    2006-01-01

    A comparison was made of the linear least-squares method and a trial-and-error non-linear method of the widely used pseudo-second-order kinetic model for the sorption of cadmium onto ground-up tree fern. Four pseudo-second-order kinetic linear equations are discussed. Kinetic parameters obtained from the four kinetic linear equations using the linear method differed but they were the same when using the non-linear method. A type 1 pseudo-second-order linear kinetic model has the highest coefficient of determination. Results show that the non-linear method may be a better way to obtain the desired parameters.
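    The linear-versus-non-linear comparison described above is easy to reproduce in outline. The sketch below fits the pseudo-second-order model q_t = qe^2*k*t / (1 + qe*k*t) both through the type 1 linearisation (t/q_t regressed on t) and by direct non-linear least squares; the time and uptake values are invented for illustration and are not the cadmium/tree-fern data from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sorption data (hypothetical, not from the paper): time [min], uptake q_t [mg/g]
t = np.array([5.0, 10, 20, 30, 60, 90, 120, 180])
q = np.array([2.1, 3.4, 4.8, 5.6, 6.7, 7.1, 7.3, 7.5])

def pseudo_second_order(t, qe, k):
    """q_t = qe^2 * k * t / (1 + qe * k * t)"""
    return qe**2 * k * t / (1.0 + qe * k * t)

# (a) Type 1 linearisation: t/q = 1/(k*qe^2) + t/qe, fitted by ordinary least squares
slope, intercept = np.polyfit(t, t / q, 1)
qe_lin = 1.0 / slope
k_lin = 1.0 / (intercept * qe_lin**2)

# (b) Direct non-linear least-squares fit of the same model
(qe_nl, k_nl), _ = curve_fit(pseudo_second_order, t, q, p0=[q.max(), 0.01])

print(f"linear fit:    qe = {qe_lin:.2f} mg/g, k = {k_lin:.4f} g/(mg*min)")
print(f"nonlinear fit: qe = {qe_nl:.2f} mg/g, k = {k_nl:.4f} g/(mg*min)")
```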

  7. Multi-state models: metapopulation and life history analyses

    Arnason, A. N.

    2004-06-01

Full Text Available Multi–state models are designed to describe populations that move among a fixed set of categorical states. The obvious application is to population interchange among geographic locations such as breeding sites or feeding areas (e.g., Hestbeck et al., 1991; Blums et al., 2003; Cam et al., 2004) but they are increasingly used to address important questions of evolutionary biology and life history strategies (Nichols & Kendall, 1995). In these applications, the states include life history stages such as breeding states. The multi–state models, by permitting estimation of stage–specific survival and transition rates, can help assess trade–offs between life history mechanisms (e.g. Yoccoz et al., 2000). These trade–offs are also important in meta–population analyses where, for example, the pre– and post–breeding rates of transfer among sub–populations can be analysed in terms of target colony distance, density, and other covariates (e.g., Lebreton et al. 2003; Breton et al., in review). Further examples of the use of multi–state models in analysing dispersal and life–history trade–offs can be found in the session on Migration and Dispersal. In this session, we concentrate on applications that did not involve dispersal. These applications fall in two main categories: those that address life history questions using stage categories, and a more technical use of multi–state models to address problems arising from the violation of mark–recapture assumptions leading to the potential for seriously biased predictions or misleading insights from the models. Our plenary paper, by William Kendall (Kendall, 2004), gives an overview of the use of Multi–state Mark–Recapture (MSMR) models to address two such violations. The first is the occurrence of unobservable states that can arise, for example, from temporary emigration or by incomplete sampling coverage of a target population. Such states can also occur for life history reasons, such

  8. Linear least squares compartmental-model-independent parameter identification in PET

    Thie, J.A.; Smith, G.T.; Hubner, K.F.

    1997-01-01

A simplified approach involving linear-regression straight-line parameter fitting of dynamic scan data is developed for both specific and nonspecific models. Where compartmental-model topologies apply, the measured activity may be expressed in terms of: its integrals, plasma activity and plasma integrals -- all in a linear expression with macroparameters as coefficients. Multiple linear regression, as in spreadsheet software, determines parameters for best data fits. Positron emission tomography (PET)-acquired gray-matter images in a dynamic scan are analyzed: both by this method and by traditional iterative nonlinear least squares. Both patient and simulated data were used. Regression and traditional methods are in expected agreement. Monte-Carlo simulations evaluate parameter standard deviations, due to data noise, and much smaller noise-induced biases. Unique straight-line graphical displays permit visualizing data influences on various macroparameters as changes in slopes. Advantages of regression fitting are: simplicity, speed, ease of implementation in spreadsheet software, avoiding risks of convergence failures or false solutions in iterative least squares, and providing various visualizations of the uptake process by straight line graphical displays. Multiparameter model-independent analyses on lesser understood systems are also made possible
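    As a rough illustration of the regression reformulation described above, the sketch below simulates a one-tissue compartment model (an assumption made here for simplicity, not necessarily the topology used in the paper) and recovers its parameters by regressing tissue activity on the integrals of plasma and tissue activity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dynamic PET data (illustrative only), one-tissue compartment model:
#   dCt/dt = K1*Cp - k2*Ct   =>   Ct(t) = K1*int(Cp) - k2*int(Ct)
t = np.linspace(0.25, 60, 240)          # minutes
dt = t[1] - t[0]
cp = 6.0 * t * np.exp(-0.3 * t) + 0.2   # plasma input function
K1_true, k2_true = 0.15, 0.08

ct = np.zeros_like(t)
for i in range(1, len(t)):              # simple Euler integration of the ODE
    ct[i] = ct[i - 1] + dt * (K1_true * cp[i - 1] - k2_true * ct[i - 1])
ct_noisy = ct + rng.normal(0, 0.01, ct.shape)

# Linear-regression reformulation: regress Ct on [int(Cp), int(Ct)]; coefficients are K1 and -k2
cp_int = np.cumsum(cp) * dt
ct_int = np.cumsum(ct_noisy) * dt
X = np.column_stack([cp_int, ct_int])
(K1_hat, neg_k2_hat), *_ = np.linalg.lstsq(X, ct_noisy, rcond=None)
print(f"K1 ~ {K1_hat:.3f} (true {K1_true}), k2 ~ {-neg_k2_hat:.3f} (true {k2_true})")
```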

  9. An LP-model to analyse economic and ecological sustainability on Dutch dairy farms: model presentation and application for experimental farm "de Marke"

    Calker, van K.J.; Berentsen, P.B.M.; Boer, de I.J.M.; Giesen, G.W.J.; Huirne, R.B.M.

    2004-01-01

    Farm level modelling can be used to determine how farm management adjustments and environmental policy affect different sustainability indicators. In this paper indicators were included in a dairy farm LP (linear programming)-model to analyse the effects of environmental policy and management

  10. Model-Based Recursive Partitioning for Subgroup Analyses.

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-05-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulated challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by predictive factors. The method starts with a model for the overall treatment effect as defined for the primary analysis in the study protocol and uses measures for detecting parameter instabilities in this treatment effect. The procedure produces a segmented model with differential treatment parameters corresponding to each patient subgroup. The subgroups are linked to predictive factors by means of a decision tree. The method is applied to the search for subgroups of patients suffering from amyotrophic lateral sclerosis that differ with respect to their Riluzole treatment effect, the only currently approved drug for this disease.

  11. Behavioral and macro modeling using piecewise linear techniques

    Kruiskamp, M.W.; Leenaerts, D.M.W.; Antao, B.

    1998-01-01

    In this paper we will demonstrate that most digital, analog as well as behavioral components can be described using piecewise linear approximations of their real behavior. This leads to several advantages from the viewpoint of simulation. We will also give a method to store the resulting linear

  12. Simultaneous Balancing and Model Reduction of Switched Linear Systems

    Monshizadeh, Nima; Trentelman, Hendrikus; Camlibel, M.K.

    2011-01-01

In this paper, first, balanced truncation of linear systems is revisited. Then, simultaneous balancing of multiple linear systems is investigated. Necessary and sufficient conditions are introduced to identify the case where simultaneous balancing is possible. The validity of these conditions is not limited to a certain type of balancing, and they are applicable for different types of balancing corresponding to different equations, like Lyapunov or Riccati equations.

  13. Genomic prediction based on data from three layer lines using non-linear regression models.

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional
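    A minimal sketch of the kind of comparison described, contrasting a linear kernel with a non-linear RBF kernel on marker data, is given below; it uses scikit-learn's KernelRidge as a stand-in for GBLUP and the kernel-learning models, and the genotypes and phenotypes are simulated, not the layer-line data.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)

# Synthetic SNP genotypes (0/1/2) and phenotypes for a toy "training line" (illustrative only)
n_train, n_valid, n_snp = 1000, 240, 500
X = rng.integers(0, 3, size=(n_train + n_valid, n_snp)).astype(float)
beta = rng.normal(0, 0.1, n_snp)
y = X @ beta + rng.normal(0, 1.0, n_train + n_valid)

X_tr, X_va = X[:n_train], X[n_train:]
y_tr, y_va = y[:n_train], y[n_train:]

# Linear kernel ridge regression (similar in spirit to GBLUP/SNP-BLUP)
linear = KernelRidge(kernel="linear", alpha=10.0).fit(X_tr, y_tr)
# Non-linear RBF kernel model, as one of the kernel-learning alternatives
rbf = KernelRidge(kernel="rbf", gamma=1.0 / n_snp, alpha=10.0).fit(X_tr, y_tr)

# "Accuracy" as the correlation between observed and predicted phenotypes
for name, model in [("linear", linear), ("rbf", rbf)]:
    acc = np.corrcoef(y_va, model.predict(X_va))[0, 1]
    print(f"{name:6s} kernel accuracy: {acc:.3f}")
```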

  14. Comparison of the linear bias models in the light of the Dark Energy Survey

    Papageorgiou, A.; Basilakos, S.; Plionis, M.

    2018-05-01

    The evolution of the linear and scale independent bias, based on the most popular dark matter bias models within the Λ cold dark matter (ΛCDM) cosmology, is confronted to that of the Dark Energy Survey (DES) luminous red galaxies (LRGs). Applying a χ2 minimization procedure between models and data, we find that all the considered linear bias models reproduce well the LRG bias data. The differences among the bias models are absorbed in the predicted mass of the dark-matter halo in which LRGs live and which ranges between ˜6 × 1012 and 1.4 × 1013 h-1 M⊙, for the different bias models. Similar results, reaching however a maximum value of ˜2 × 1013 h-1 M⊙, are found by confronting the SDSS (2SLAQ) Large Red Galaxies clustering with theoretical clustering models, which also include the evolution of bias. This later analysis also provides a value of Ωm = 0.30 ± 0.01, which is in excellent agreement with recent joint analyses of different cosmological probes and the reanalysis of the Planck data.
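    The χ2 confrontation of a bias model with measured bias values can be sketched generically as below; the one-parameter passive-evolution bias model and the data points are placeholders chosen for illustration, not the specific models or DES measurements analysed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative bias "measurements" b(z) with errors (not the DES LRG data)
z = np.array([0.3, 0.45, 0.6, 0.75, 0.9])
b_obs = np.array([1.55, 1.65, 1.78, 1.90, 2.05])
b_err = np.array([0.05, 0.05, 0.06, 0.06, 0.08])

def growth_factor(z):
    """Crude stand-in for the linear growth factor D(z)."""
    return 1.0 / (1.0 + z)

def bias_model(z, b0):
    """Simple passive-evolution style bias model: b(z) = 1 + (b0 - 1) / D(z)."""
    return 1.0 + (b0 - 1.0) / growth_factor(z)

def chi2(params):
    (b0,) = params
    return np.sum(((b_obs - bias_model(z, b0)) / b_err) ** 2)

res = minimize(chi2, x0=[1.2], method="Nelder-Mead")
print(f"best-fit b0 = {res.x[0]:.3f}, chi2/dof = {res.fun / (len(z) - 1):.2f}")
```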

  15. Sampled-data models for linear and nonlinear systems

    Yuz, Juan I

    2014-01-01

    Sampled-data Models for Linear and Nonlinear Systems provides a fresh new look at a subject with which many researchers may think themselves familiar. Rather than emphasising the differences between sampled-data and continuous-time systems, the authors proceed from the premise that, with modern sampling rates being as high as they are, it is becoming more appropriate to emphasise connections and similarities. The text is driven by three motives: ·      the ubiquity of computers in modern control and signal-processing equipment means that sampling of systems that really evolve continuously is unavoidable; ·      although superficially straightforward, sampling can easily produce erroneous results when not treated properly; and ·      the need for a thorough understanding of many aspects of sampling among researchers and engineers dealing with applications to which they are central. The authors tackle many misconceptions which, although appearing reasonable at first sight, are in fact either p...

  16. Dynamics of edge currents in a linearly quenched Haldane model

    Mardanya, Sougata; Bhattacharya, Utso; Agarwal, Amit; Dutta, Amit

    2018-03-01

    In a finite-time quantum quench of the Haldane model, the Chern number determining the topology of the bulk remains invariant, as long as the dynamics is unitary. Nonetheless, the corresponding boundary attribute, the edge current, displays interesting dynamics. For the case of sudden and adiabatic quenches the postquench edge current is solely determined by the initial and the final Hamiltonians, respectively. However for a finite-time (τ ) linear quench in a Haldane nanoribbon, we show that the evolution of the edge current from the sudden to the adiabatic limit is not monotonic in τ and has a turning point at a characteristic time scale τ =τ0 . For small τ , the excited states lead to a huge unidirectional surge in the edge current of both edges. On the other hand, in the limit of large τ , the edge current saturates to its expected equilibrium ground-state value. This competition between the two limits lead to the observed nonmonotonic behavior. Interestingly, τ0 seems to depend only on the Semenoff mass and the Haldane flux. A similar dynamics for the edge current is also expected in other systems with topological phases.

  17. Parameter estimation and hypothesis testing in linear models

    Koch, Karl-Rudolf

    1999-01-01

The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have been also added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...

  18. Linear multivariate evaluation models for spatial perception of soundscape.

    Deng, Zhiyong; Kang, Jian; Wang, Daiwei; Liu, Aili; Kang, Joe Zhengyu

    2015-11-01

    Soundscape is a sound environment that emphasizes the awareness of auditory perception and social or cultural understandings. The case of spatial perception is significant to soundscape. However, previous studies on the auditory spatial perception of the soundscape environment have been limited. Based on 21 native binaural-recorded soundscape samples and a set of auditory experiments for subjective spatial perception (SSP), a study of the analysis among semantic parameters, the inter-aural-cross-correlation coefficient (IACC), A-weighted-equal sound-pressure-level (L(eq)), dynamic (D), and SSP is introduced to verify the independent effect of each parameter and to re-determine some of their possible relationships. The results show that the more noisiness the audience perceived, the worse spatial awareness they received, while the closer and more directional the sound source image variations, dynamics, and numbers of sound sources in the soundscape are, the better the spatial awareness would be. Thus, the sensations of roughness, sound intensity, transient dynamic, and the values of Leq and IACC have a suitable range for better spatial perception. A better spatial awareness seems to promote the preference slightly for the audience. Finally, setting SSPs as functions of the semantic parameters and Leq-D-IACC, two linear multivariate evaluation models of subjective spatial perception are proposed.

  19. Form factors in the projected linear chiral sigma model

    Alberto, P.; Coimbra Univ.; Bochum Univ.; Ruiz Arriola, E.; Fiolhais, M.; Urbano, J.N.; Coimbra Univ.; Goeke, K.; Gruemmer, F.; Bochum Univ.

    1990-01-01

    Several nucleon form factors are computed within the framework of the linear chiral soliton model. To this end variational means and projection techniques applied to generalized hedgehog quark-boson Fock states are used. In this procedure the Goldberger-Treiman relation and a virial theorem for the pion-nucleon form factor are well fulfilled demonstrating the consistency of the treatment. Both proton and neutron charge form factors are correctly reproduced, as well as the proton magnetic one. The shapes of the neutron magnetic and of the axial form factors are good but their absolute values at the origin are too large. The slopes of all the form factors at zero momentum transfer are in good agreement with the experimental data. The pion-nucleon form factor exhibits to great extent a monopole shape with a cut-off mass of Λ=690 MeV. Electromagnetic form factors for the vertex γNΔ and the nucleon spin distribution are also evaluated and discussed. (orig.)

  20. Efficient EBE treatment of the dynamic far-field in non-linear FE soil-structure interaction analyses

    Crouch, R.S.; Bennett, T.

    2000-01-01

    This paper presents results and observations from the use of a rigorous method of treating the dynamic far-field as part of a non-linear FE analysis. The technique de-veloped by Wolf and Song (referred to as the Scaled Boundary Finite-Element Method) is incorporated into a 3-D time-domain analysis

  1. A theoretical model for analysing gender bias in medicine

    Johansson Eva E

    2009-08-01

Full Text Available Abstract During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step-theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflections on gender attitudes among students, teachers, researchers and decision-makers.

  2. A theoretical model for analysing gender bias in medicine.

    Risberg, Gunilla; Johansson, Eva E; Hamberg, Katarina

    2009-08-03

During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step-theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflections on gender attitudes among students, teachers, researchers and decision-makers.

  3. Prediction of minimum temperatures in an alpine region by linear and non-linear post-processing of meteorological models

    R. Barbiero

    2007-05-01

Full Text Available Model Output Statistics (MOS) refers to a method of post-processing the direct outputs of numerical weather prediction (NWP) models in order to reduce the biases introduced by a coarse horizontal resolution. This technique is especially useful in orographically complex regions, where large differences can be found between the NWP elevation model and the true orography. This study carries out a comparison of linear and non-linear MOS methods, aimed at the prediction of minimum temperatures in a fruit-growing region of the Italian Alps, based on the output of two different NWPs (ECMWF T511–L60 and LAMI-3). Temperature, of course, is a particularly important NWP output; among other roles it drives the local frost forecast, which is of great interest to agriculture. The mechanisms of cold air drainage, a distinctive aspect of mountain environments, are often unsatisfactorily captured by global circulation models. The simplest post-processing technique applied in this work was a correction for the mean bias, assessed at individual model grid points. We also implemented a multivariate linear regression on the output at the grid points surrounding the target area, and two non-linear models based on machine learning techniques: Neural Networks and Random Forest. We compare the performance of all these techniques on four different NWP data sets. Downscaling the temperatures clearly improved the temperature forecasts with respect to the raw NWP output, and also with respect to the basic mean bias correction. Multivariate methods generally yielded better results, but the advantage of using non-linear algorithms was small if not negligible. RF, the best performing method, was implemented on ECMWF prognostic output at 06:00 UTC over the 9 grid points surrounding the target area. Mean absolute errors in the prediction of 2 m temperature at 06:00 UTC were approximately 1.2°C, close to the natural variability inside the area itself.
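    A minimal sketch of the comparison between a multivariate linear MOS and a Random Forest post-processor, using scikit-learn on synthetic grid-point predictors rather than the ECMWF/LAMI output, is given below.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic example (not the ECMWF/LAMI data): NWP 2 m temperature at the 9 grid points
# surrounding a target station, plus a station observation with a non-linear local bias.
n = 2000
nwp = rng.normal(0, 5, size=(n, 9)) + 2.0
cold_bias = 0.15 * np.clip(-nwp[:, 4], 0, None) ** 1.3    # extra cooling on cold nights
obs = nwp.mean(axis=1) - 1.5 - cold_bias + rng.normal(0, 1.0, n)

X_tr, X_te, y_tr, y_te = train_test_split(nwp, obs, test_size=0.3, random_state=0)

models = {
    "raw NWP (centre point)": None,
    "multivariate linear MOS": LinearRegression(),
    "Random Forest MOS": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    pred = X_te[:, 4] if model is None else model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:28s} MAE = {mean_absolute_error(y_te, pred):.2f} degC")
```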

  4. Modelling and Inverse-Modelling: Experiences with O.D.E. Linear Systems in Engineering Courses

    Martinez-Luaces, Victor

    2009-01-01

    In engineering careers courses, differential equations are widely used to solve problems concerned with modelling. In particular, ordinary differential equations (O.D.E.) linear systems appear regularly in Chemical Engineering, Food Technology Engineering and Environmental Engineering courses, due to the usefulness in modelling chemical kinetics,…

  5. An improved robust model predictive control for linear parameter-varying input-output models

    Abbas, H.S.; Hanema, J.; Tóth, R.; Mohammadpour, J.; Meskin, N.

    2018-01-01

    This paper describes a new robust model predictive control (MPC) scheme to control the discrete-time linear parameter-varying input-output models subject to input and output constraints. Closed-loop asymptotic stability is guaranteed by including a quadratic terminal cost and an ellipsoidal terminal

  6. A non-linear state space approach to model groundwater fluctuations

    Berendrecht, W.L.; Heemink, A.W.; Geer, F.C. van; Gehrels, J.C.

    2006-01-01

    A non-linear state space model is developed for describing groundwater fluctuations. Non-linearity is introduced by modeling the (unobserved) degree of water saturation of the root zone. The non-linear relations are based on physical concepts describing the dependence of both the actual

  7. Half-trek criterion for generic identifiability of linear structural equation models

    Foygel, R.; Draisma, J.; Drton, M.

    2012-01-01

    A linear structural equation model relates random variables of interest and corresponding Gaussian noise terms via a linear equation system. Each such model can be represented by a mixed graph in which directed edges encode the linear equations, and bidirected edges indicate possible correlations

  8. Half-trek criterion for generic identifiability of linear structural equation models

    Foygel, R.; Draisma, J.; Drton, M.

    2011-01-01

    A linear structural equation model relates random variables of interest and corresponding Gaussian noise terms via a linear equation system. Each such model can be represented by a mixed graph in which directed edges encode the linear equations, and bidirected edges indicate possible correlations

  9. Control designs and stability analyses for Helly’s car-following model

    Rosas-Jaimes, Oscar A.; Quezada-Téllez, Luis A.; Fernández-Anaya, Guillermo

Car-following is an approach to understanding traffic behavior restricted to pairs of cars, identifying a “leader” moving in front of a “follower”, where it is assumed that the follower does not overtake the leader. From the first attempts to formulate how individual cars affect one another on a road through these models, linear differential equations were suggested by authors such as Pipes and Helly. These expressions represent such phenomena quite well, even though they have been superseded by more recent and accurate models. In this paper, however, we show that those early formulations have some properties that are not fully reported, presenting the different ways in which they can be expressed and analyzing their stability behavior. Pipes’ model can be extended to what is known as Helly’s model, which is viewed as a more precise model for emulating this microscopic approach to traffic. Once some convenient forms of expression are established, two control designs are suggested herein. These regulation schemes are complemented with their respective stability analyses, which reveal important properties with implications for real driving. Notably, these linear designs are easy to understand and to implement, and they include important features related to safety and comfort.
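    Helly's law is commonly written as a_f(t) = C1[v_lead(t) - v_foll(t)] + C2[gap(t) - D(t)] with desired spacing D(t) = alpha + beta*v_foll(t). The sketch below integrates this model for a single leader-follower pair; the gains, spacing parameters and braking scenario are arbitrary placeholders, and the reaction delay is omitted for simplicity.

```python
# Helly-type linear car-following law (reaction delay omitted for simplicity):
#   a_f = C1*(v_lead - v_foll) + C2*(gap - D),  with desired spacing D = alpha + beta*v_foll
C1, C2 = 0.5, 0.2          # illustrative feedback gains
alpha, beta = 5.0, 1.2     # illustrative desired-spacing parameters [m], [s]
dt, steps = 0.1, 600       # 60 s of simulation

x_lead, v_lead = 50.0, 15.0    # leader: position [m], speed [m/s]
x_foll, v_foll = 0.0, 20.0     # follower starts faster and closing in

for k in range(steps):
    if k * dt > 20.0:          # leader brakes to 10 m/s after 20 s
        v_lead = 10.0
    gap = x_lead - x_foll
    desired = alpha + beta * v_foll
    accel = C1 * (v_lead - v_foll) + C2 * (gap - desired)
    # Euler update of both vehicles
    x_lead += v_lead * dt
    x_foll += v_foll * dt
    v_foll = max(0.0, v_foll + accel * dt)

print(f"final gap = {x_lead - x_foll:.1f} m, follower speed = {v_foll:.1f} m/s")
```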

  10. On-line validation of linear process models using generalized likelihood ratios

    Tylee, J.L.

    1981-12-01

    A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator
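    The essence of such a validity test can be sketched as a sliding-window generalized likelihood ratio that compares residuals under the nominal linear model with residuals under a model re-estimated on the window; this is a generic first-order illustration, not the seventh-order steam-generator model of the report, and the detection threshold is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# Nominal linear (ARX-like) model: y[k] = a*y[k-1] + b*u[k-1] + noise
a_nom, b_nom, sigma = 0.8, 0.5, 0.1

# Simulated process that drifts away from the nominal operating point halfway through
n = 400
u = rng.normal(0, 1, n)
y = np.zeros(n)
for k in range(1, n):
    a_true = a_nom if k < 200 else 0.6          # process dynamics change at k = 200
    y[k] = a_true * y[k - 1] + b_nom * u[k - 1] + rng.normal(0, sigma)

def neg_log_lik(res, s):
    return 0.5 * np.sum(res**2) / s**2 + len(res) * np.log(s)

window = 50
for k in range(window + 1, n, window):
    yk = y[k - window:k]
    y1, u1 = y[k - window - 1:k - 1], u[k - window - 1:k - 1]
    res_nom = yk - (a_nom * y1 + b_nom * u1)
    # Re-estimate (a, b) on the window by least squares -> alternative hypothesis
    theta, *_ = np.linalg.lstsq(np.column_stack([y1, u1]), yk, rcond=None)
    res_alt = yk - np.column_stack([y1, u1]) @ theta
    glr = 2.0 * (neg_log_lik(res_nom, sigma) - neg_log_lik(res_alt, sigma))
    print(f"k = {k:3d}  GLR = {glr:7.1f}  {'-> re-linearise' if glr > 20 else ''}")
```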

  11. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept and slope, and serial correlation was modeled with a first order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19
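    A minimal sketch of the central modelling step, a cubic B-spline basis for age inside a linear mixed-effect model with child-specific random intercepts and slopes, is shown below using statsmodels on simulated data; the continuous-time autoregressive residual term described in the abstract is not included, and none of the numbers relate to the Peruvian cohort.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulated longitudinal heights for 50 children measured every 3 months up to 4 years
n_children = 50
ages = np.arange(0, 48.1, 3) / 12.0                         # age in years
rows = []
for cid in range(n_children):
    u0 = rng.normal(0, 2.0)          # child-specific intercept shift (cm)
    u1 = rng.normal(0, 1.0)          # child-specific slope shift (cm/year)
    for a in ages:
        mean = 50 + 30 * np.sqrt(a + 0.05)                  # nonlinear growth curve
        rows.append({"child": cid, "age": a,
                     "height": mean + u0 + u1 * a + rng.normal(0, 0.8)})
data = pd.DataFrame(rows)

# Linear mixed-effect model with a cubic regression-spline basis for age
# (random intercept and random slope per child)
model = smf.mixedlm("height ~ bs(age, df=5, degree=3)", data,
                    groups=data["child"], re_formula="~age")
fit = model.fit()
print(fit.summary())
```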

  12. Linearization effect in multifractal analysis: Insights from the Random Energy Model

    Angeletti, Florian; Mézard, Marc; Bertin, Eric; Abry, Patrice

    2011-08-01

    The analysis of the linearization effect in multifractal analysis, and hence of the estimation of moments for multifractal processes, is revisited borrowing concepts from the statistical physics of disordered systems, notably from the analysis of the so-called Random Energy Model. Considering a standard multifractal process (compound Poisson motion), chosen as a simple representative example, we show the following: (i) the existence of a critical order q∗ beyond which moments, though finite, cannot be estimated through empirical averages, irrespective of the sample size of the observation; (ii) multifractal exponents necessarily behave linearly in q, for q>q∗. Tailoring the analysis conducted for the Random Energy Model to that of Compound Poisson motion, we provide explicative and quantitative predictions for the values of q∗ and for the slope controlling the linear behavior of the multifractal exponents. These quantities are shown to be related only to the definition of the multifractal process and not to depend on the sample size of the observation. Monte Carlo simulations, conducted over a large number of large sample size realizations of compound Poisson motion, comfort and extend these analyses.

  13. A linear goal programming model for urban energy-economy-environment interaction

    Kambo, N.S.; Handa, B.R. (Indian Inst. of Tech., New Delhi (India). Dept. of Mathematics); Bose, R.K. (Tata Energy Research Inst., New Delhi (India))

    1991-01-01

    This paper provides a comprehensive and systematic analysis of energy and pollution problems interconnected with the economic structure, by using a multi-objective sectoral end-use model for addressing regional energy policy issues. The multi-objective model proposed for the study is a 'linear goal programming (LGP)' technique of analysing a 'reference energy system (RES)' in a framework within which alternative policies and technical strategies may be evaluated. The model so developed has further been tested for the city of Delhi (India) for the period 1985 - 86, and a scenario analysis has been carried out by assuming different policy options. (orig./BWJ).
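    A goal-programming problem of this kind can be posed for an ordinary LP solver by adding under- and over-achievement deviation variables for each goal and minimising their weighted sum. The toy two-activity, two-goal example below is invented purely to show the structure and has nothing to do with the Delhi reference energy system.

```python
import numpy as np
from scipy.optimize import linprog

# Two activities (e.g. two fuel/technology options), two goals (illustrative numbers only):
#   goal 1: total energy supplied ~ 100 units
#   goal 2: total emissions       ~  40 units
# Decision vector: [x1, x2, d1m, d1p, d2m, d2p]  (dm/dp = under-/over-achievement)
energy = np.array([1.0, 0.8])
emissions = np.array([0.2, 0.6])
goals = np.array([100.0, 40.0])
weights = np.array([0.0, 0.0, 5.0, 1.0, 0.0, 10.0])   # penalise energy shortfall and excess emissions

A_eq = np.array([
    np.concatenate([energy,    [1, -1, 0,  0]]),   # energy    + d1m - d1p = 100
    np.concatenate([emissions, [0,  0, 1, -1]]),   # emissions + d2m - d2p = 40
])
res = linprog(c=weights, A_eq=A_eq, b_eq=goals, bounds=[(0, None)] * 6)
x1, x2, *dev = res.x
print(f"activity levels: x1 = {x1:.1f}, x2 = {x2:.1f}; deviations = {np.round(dev, 2)}")
```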

  14. Simultaneous Balancing and Model Reduction of Switched Linear Systems

    Monshizadeh, Nima; Trentelman, Hendrikus; Camlibel, M.K.

    2011-01-01

    In this paper, first, balanced truncation of linear systems is revisited. Then, simultaneous balancing of multiple linear systems is investigated. Necessary and sufficient conditions are introduced to identify the case where simultaneous balancing is possible. The validity of these conditions is not limited to a certain type of balancing, and they are applicable for different types of balancing corresponding to different equations, like Lyapunov or Riccati equations. The results obtained are ...

  15. Developing ontological model of computational linear algebra - preliminary considerations

    Wasielewska, K.; Ganzha, M.; Paprzycki, M.; Lirkov, I.

    2013-10-01

    The aim of this paper is to propose a method for application of ontologically represented domain knowledge to support Grid users. The work is presented in the context provided by the Agents in Grid system, which aims at development of an agent-semantic infrastructure for efficient resource management in the Grid. Decision support within the system should provide functionality beyond the existing Grid middleware, specifically, help the user to choose optimal algorithm and/or resource to solve a problem from a given domain. The system assists the user in at least two situations. First, for users without in-depth knowledge about the domain, it should help them to select the method and the resource that (together) would best fit the problem to be solved (and match the available resources). Second, if the user explicitly indicates the method and the resource configuration, it should "verify" if her choice is consistent with the expert recommendations (encapsulated in the knowledge base). Furthermore, one of the goals is to simplify the use of the selected resource to execute the job; i.e., provide a user-friendly method of submitting jobs, without required technical knowledge about the Grid middleware. To achieve the mentioned goals, an adaptable method of expert knowledge representation for the decision support system has to be implemented. The selected approach is to utilize ontologies and semantic data processing, supported by multicriterial decision making. As a starting point, an area of computational linear algebra was selected to be modeled, however, the paper presents a general approach that shall be easily extendable to other domains.

  16. Symmetry conservation in the linear chiral soliton model

    Goeke, K.

    1988-01-01

The linear chiral soliton model with quark fields and elementary pion and sigma fields is solved in order to describe static properties of the nucleon and the delta resonance. To this end a Fock state of the system is constructed consisting of three valence quarks in a first orbit with a generalized hedgehog spin-flavour configuration. Coherent states are used to provide a quantum description for the mesonic parts of the total wave function. The corresponding classical pion field also exhibits a generalized hedgehog structure. In a pure mean field approximation the variation of the total energy results in the ordinary hedgehog form. In a quantized approach the generalized hedgehog baryon is projected onto states with good spin and isospin, and noticeable deviations from the simple hedgehog form appear if the relevant degrees of freedom of the wave function are varied after the projection. Various nucleon properties are calculated. These include the proton and neutron charge radii and the magnetic moment of the proton, for which good agreement with experiment is obtained. The absolute value of the neutron magnetic moment comes out too large, as do the axial vector coupling constant and the pion-nucleon-nucleon coupling constant. For the generalized hedgehog the Goldberger-Treiman relation and a corresponding virial theorem are fulfilled. Variation of the quark-meson coupling parameter g and the sigma mass m_σ shows that g_A is always at least 40% too large compared to experiment. Hence it is concluded that either the inclusion of the polarization of the Dirac sea and/or further mesons, possibly of vector character, or the consideration of intrinsic deformation is necessary. The concepts and results of the projections are compared with the semiclassical collective quantization method. 6 tabs., 14 figs., 43 refs

  17. Beta-Poisson model for single-cell RNA-seq data analyses.

    Vu, Trung Nghia; Wills, Quin F; Kalari, Krishna R; Niu, Nifang; Wang, Liewei; Rantalainen, Mattias; Pawitan, Yudi

    2016-07-15

    Single-cell RNA-sequencing technology allows detection of gene expression at the single-cell level. One typical feature of the data is a bimodality in the cellular distribution even for highly expressed genes, primarily caused by a proportion of non-expressing cells. The standard and the over-dispersed gamma-Poisson models that are commonly used in bulk-cell RNA-sequencing are not able to capture this property. We introduce a beta-Poisson mixture model that can capture the bimodality of the single-cell gene expression distribution. We further integrate the model into the generalized linear model framework in order to perform differential expression analyses. The whole analytical procedure is called BPSC. The results from several real single-cell RNA-seq datasets indicate that ∼90% of the transcripts are well characterized by the beta-Poisson model; the model-fit from BPSC is better than the fit of the standard gamma-Poisson model in > 80% of the transcripts. Moreover, in differential expression analyses of simulated and real datasets, BPSC performs well against edgeR, a conventional method widely used in bulk-cell RNA-sequencing data, and against scde and MAST, two recent methods specifically designed for single-cell RNA-seq data. An R package BPSC for model fitting and differential expression analyses of single-cell RNA-seq data is available under GPL-3 license at https://github.com/nghiavtr/BPSC CONTACT: yudi.pawitan@ki.se or mattias.rantalainen@ki.se Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Impact of sophisticated fog spray models on accident analyses

    Roblyer, S.P.; Owzarski, P.C.

    1978-01-01

    The N-Reactor confinement system release dose to the public in a postulated accident is reduced by washing the confinement atmosphere with fog sprays. This allows a low pressure release of confinement atmosphere containing fission products through filters and out an elevated stack. The current accident analysis required revision of the CORRAL code and other codes such as CONTEMPT to properly model the N Reactor confinement into a system of multiple fog-sprayed compartments. In revising these codes, more sophisticated models for the fog sprays and iodine plateout were incorporated to remove some of the conservatism of steam condensing rate, fission product washout and iodine plateout than used in previous studies. The CORRAL code, which was used to describe the transport and deposition of airborne fission products in LWR containment systems for the Rasmussen Study, was revised to describe fog spray removal of molecular iodine (I 2 ) and particulates in multiple compartments for sprays having individual characteristics of on-off times, flow rates, fall heights, and drop sizes in changing containment atmospheres. During postulated accidents, the code determined the fission product removal rates internally rather than from input decontamination factors. A discussion is given of how the calculated plateout and washout rates vary with time throughout the analysis. The results of the accident analyses indicated that more credit could be given to fission product washout and plateout. An important finding was that the release of fission products to the atmosphere and adsorption of fission products on the filters were significantly lower than previous studies had indicated

  19. Reduction of interferences in graphite furnace atomic absorption spectrometry by multiple linear regression modelling

    Grotti, Marco; Abelmoschi, Maria Luisa; Soggia, Francesco; Tiberiade, Christian; Frache, Roberto

    2000-12-01

    The multivariate effects of Na, K, Mg and Ca as nitrates on the electrothermal atomisation of manganese, cadmium and iron were studied by multiple linear regression modelling. Since the models proved to efficiently predict the effects of the considered matrix elements in a wide range of concentrations, they were applied to correct the interferences occurring in the determination of trace elements in seawater after pre-concentration of the analytes. In order to obtain a statistically significant number of samples, a large volume of the certified seawater reference materials CASS-3 and NASS-3 was treated with Chelex-100 resin; then, the chelating resin was separated from the solution, divided into several sub-samples, each of them was eluted with nitric acid and analysed by electrothermal atomic absorption spectrometry (for trace element determinations) and inductively coupled plasma optical emission spectrometry (for matrix element determinations). To minimise any other systematic error besides that due to matrix effects, accuracy of the pre-concentration step and contamination levels of the procedure were checked by inductively coupled plasma mass spectrometric measurements. Analytical results obtained by applying the multiple linear regression models were compared with those obtained with other calibration methods, such as external calibration using acid-based standards, external calibration using matrix-matched standards and the analyte addition technique. Empirical models proved to efficiently reduce interferences occurring in the analysis of real samples, allowing an improvement of accuracy better than for other calibration methods.
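    The correction strategy can be sketched as follows: fit a multiple linear regression of the relative analyte signal on the matrix-element concentrations, then divide a sample's measured signal by the factor predicted for its matrix. All concentrations and coefficients below are synthetic placeholders, not the published models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)

# Synthetic calibration design: concentrations of Na, K, Mg, Ca (mg/L) in test solutions
matrix = rng.uniform(0, 500, size=(60, 4))

# Assumed (made-up) suppression of the Cd absorbance signal by each matrix element
true_effect = np.array([-2e-4, -1e-4, -3e-4, -1.5e-4])
rel_signal = 1.0 + matrix @ true_effect + rng.normal(0, 0.01, 60)   # signal / signal_without_matrix

# Multiple linear regression model of the matrix effect
mlr = LinearRegression().fit(matrix, rel_signal)

# Correct a measurement made in a known matrix: divide by the predicted relative signal
sample_matrix = np.array([[300.0, 150.0, 80.0, 120.0]])
measured_abs = 0.205
factor = mlr.predict(sample_matrix)[0]
print(f"predicted matrix factor = {factor:.3f}, corrected absorbance = {measured_abs / factor:.3f}")
```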

  20. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
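    The equivalence exploited here rests on "exploding" each survival time into one record per piece of the piecewise-constant baseline and fitting a Poisson model with log(exposure) as offset. The sketch below shows that step with a plain Poisson GLM on simulated data; adding the log-normal frailty would additionally require a mixed Poisson model (for example a random intercept per cluster), which is omitted here, and nothing below reproduces the %PCFrailty macro.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)

# Simulated survival data: exponential baseline hazard, one binary covariate
n, hazard0, beta = 300, 0.10, 0.7
x = rng.integers(0, 2, n)
time = rng.exponential(1.0 / (hazard0 * np.exp(beta * x)))
cens = rng.exponential(15.0, n)
obs_time, event = np.minimum(time, cens), (time <= cens).astype(int)

# "Explode" the data: one row per subject and per piece of the piecewise-constant baseline
cuts = np.array([0.0, 2.0, 5.0, 10.0, np.inf])
rows = []
for i in range(n):
    for j in range(len(cuts) - 1):
        start, stop = cuts[j], min(cuts[j + 1], obs_time[i])
        if stop <= start:
            break
        rows.append({"piece": j, "x": x[i],
                     "exposure": stop - start,
                     "event": int(event[i] and obs_time[i] <= cuts[j + 1])})
df = pd.DataFrame(rows)

# Poisson GLM with log(exposure) offset == piecewise-exponential survival model
X = pd.get_dummies(df["piece"], prefix="piece").astype(float)
X["x"] = df["x"]
fit = sm.GLM(df["event"], X, family=sm.families.Poisson(),
             offset=np.log(df["exposure"])).fit()
print(fit.params)   # piece_* = log baseline hazards, x = log hazard ratio (true value 0.7)
```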

  1. The Linearity of Optical Tomography: Sensor Model and Experimental Verification

    Siti Zarina MOHD. MUJI

    2011-09-01

Full Text Available The aim of this paper is to demonstrate the linearity of an optical sensor. Linearity of the sensor response is essential in optical tomography applications, since it affects the tomogram result. Two types of testing are used: testing with a voltage parameter and testing with a time-unit parameter. In the former, the voltage is measured when an obstacle is placed between transmitter and receiver; the obstacle diameters range from 0.5 to 3 mm. The latter uses a larger obstacle of 59.24 mm, and its purpose is to measure the time the ball spends cutting the sensing area of the circuit. Both results show a linear relation, which indicates that the optical sensors are suitable for process tomography applications.

  2. Analyses of genetic relationships between linear type traits, fat-to-protein ratio, milk production traits, and somatic cell count in first-parity Czech Holstein cows

    Zink, V; Zavadilová, L; Lassen, Jan

    2014-01-01

Genetic and phenotypic correlations between production traits, selected linear type traits, and somatic cell score were estimated. The results could be useful for breeding programs involving Czech Holstein dairy cows or other populations. A series of bivariate analyses was applied whereby (co... The number of animals for each linear type trait was 59 454, except for locomotion, for which 53 424 animals were recorded. The numbers of animals with records of milk production data were 43 992 for milk yield, fat percentage, protein percentage, and fat-to-protein percentage ratio and 43 978 for fat yield and protein yield. In total, 27 098 somatic cell score records were available. The strongest positive genetic correlation between production traits and linear type traits was estimated between udder width and fat yield (0.51 ± 0.04), while the strongest negative correlation estimated was between body...

  3. A dialogue game for analysing group model building: framing collaborative modelling and its facilitation

    Hoppenbrouwers, S.J.B.A.; Rouwette, E.A.J.A.

    2012-01-01

    This paper concerns a specific approach to analysing and structuring operational situations in collaborative modelling. Collaborative modelling is viewed here as 'the goal-driven creation and shaping of models that are based on the principles of rational description and reasoning'. Our long term

  4. Generating synthetic wave climates for coastal modelling: a linear mixed modelling approach

    Thomas, C.; Lark, R. M.

    2013-12-01

    Numerical coastline morphological evolution models require wave climate properties to drive morphological change through time. Wave climate properties (typically wave height, period and direction) may be temporally fixed, culled from real wave buoy data, or allowed to vary in some way defined by a Gaussian or other pdf. However, to examine sensitivity of coastline morphologies to wave climate change, it seems desirable to be able to modify wave climate time series from a current to some new state along a trajectory, but in a way consistent with, or initially conditioned by, the properties of existing data, or to generate fully synthetic data sets with realistic time series properties. For example, mean or significant wave height time series may have underlying periodicities, as revealed in numerous analyses of wave data. Our motivation is to develop a simple methodology to generate synthetic wave climate time series that can change in some stochastic way through time. We wish to use such time series in a coastline evolution model to test sensitivities of coastal landforms to changes in wave climate over decadal and centennial scales. We have worked initially on time series of significant wave height, based on data from a Waverider III buoy located off the coast of Yorkshire, England. The statistical framework for the simulation is the linear mixed model. The target variable, perhaps after transformation (Box-Cox), is modelled as a multivariate Gaussian, the mean modelled as a function of a fixed effect, and two random components, one of which is independently and identically distributed (iid) and the second of which is temporally correlated. The model was fitted to the data by likelihood methods. We considered the option of a periodic mean, the period either fixed (e.g. at 12 months) or estimated from the data. We considered two possible correlation structures for the second random effect. In one the correlation decays exponentially with time. In the second
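    The generative side of the model described (a periodic fixed-effect mean plus a temporally correlated random component with exponentially decaying correlation plus an iid residual, on a transformed scale) can be sketched as below; a log transform stands in for the Box-Cox step and all parameter values are placeholders rather than estimates from the buoy data.

```python
import numpy as np

rng = np.random.default_rng(2024)

# Monthly significant wave height simulated from a simple linear mixed-model structure:
#   g(Hs_t) = mean + periodic term + correlated random effect + iid noise
n_months = 240
t = np.arange(n_months)

mean, amp = 0.9, 0.35                      # fixed effects (log scale), 12-month periodicity
periodic = amp * np.cos(2 * np.pi * t / 12.0)

phi, sigma_c = 0.7, 0.15                   # AR(1) component: correlation decays exponentially
corr = np.zeros(n_months)
for k in range(1, n_months):
    corr[k] = phi * corr[k - 1] + rng.normal(0, sigma_c * np.sqrt(1 - phi**2))

iid = rng.normal(0, 0.10, n_months)        # independent residual component

hs = np.exp(mean + periodic + corr + iid)  # back-transform (log used instead of Box-Cox)
print(f"simulated Hs: median {np.median(hs):.2f} m, 95th percentile {np.quantile(hs, 0.95):.2f} m")
```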

  5. Structural identifiability analyses of candidate models for in vitro Pitavastatin hepatic uptake.

    Grandjean, Thomas R B; Chappell, Michael J; Yates, James W T; Evans, Neil D

    2014-05-01

    In this paper a review of the application of four different techniques (a version of the similarity transformation approach for autonomous uncontrolled systems, a non-differential input/output observable normal form approach, the characteristic set differential algebra and a recent algebraic input/output relationship approach) to determine the structural identifiability of certain in vitro nonlinear pharmacokinetic models is provided. The Organic Anion Transporting Polypeptide (OATP) substrate, Pitavastatin, is used as a probe on freshly isolated animal and human hepatocytes. Candidate pharmacokinetic non-linear compartmental models have been derived to characterise the uptake process of Pitavastatin. As a prerequisite to parameter estimation, structural identifiability analyses are performed to establish that all unknown parameters can be identified from the experimental observations available. Copyright © 2013. Published by Elsevier Ireland Ltd.

  6. Robust Comparison of the Linear Model Structures in Self-tuning Adaptive Control

    Zhou, Jianjun; Conrad, Finn

    1989-01-01

The Generalized Predictive Controller (GPC) is extended to the systems with a generalized linear model structure which contains a number of choices of linear model structures. The Recursive Prediction Error Method (RPEM) is used to estimate the unknown parameters of the linear model structures ... to constitute a GPC self-tuner. Different linear model structures commonly used are compared and evaluated by applying them to the extended GPC self-tuner as well as to the special cases of the GPC, the GMV and MV self-tuners. The simulation results show how the choice of model structure affects the input-output behaviour of self-tuning controllers.

  7. Excited scalar and pseudoscalar mesons in the extended linear sigma model

    Parganlija, Denis [Technische Universitaet Wien, Institut fuer Theoretische Physik, Vienna (Austria); Giacosa, Francesco [Jan Kochanowski University, Institute of Physics, Kielce (Poland); Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany)

    2017-07-15

    We present an in-depth study of masses and decays of excited scalar and pseudoscalar anti qq states in the Extended Linear Sigma Model (eLSM). The model also contains ground-state scalar, pseudoscalar, vector and axial-vector mesons. The main objective is to study the consequences of the hypothesis that the f{sub 0}(1790) resonance, observed a decade ago by the BES Collaboration and recently by LHCb, represents an excited scalar quarkonium. In addition we also analyse the possibility that the new a{sub 0}(1950) resonance, observed recently by BABAR, may also be an excited scalar state. Both hypotheses receive justification in our approach although there appears to be some tension between the simultaneous interpretation of f{sub 0}(1790)/a{sub 0}(1950) and pseudoscalar mesons η(1295), π(1300), η(1440) and K(1460) as excited anti qq states. (orig.)

  8. Efficient Estimation of Non-Linear Dynamic Panel Data Models with Application to Smooth Transition Models

    Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan

    This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte…

  9. Evaluation of the quality consistency of powdered poppy capsule extractive by an averagely linear-quantified fingerprint method in combination with antioxidant activities and two compounds analyses.

    Zhang, Yujing; Sun, Guoxiang; Hou, Zhifei; Yan, Bo; Zhang, Jing

    2017-12-01

    A novel averagely linear-quantified fingerprint method was proposed and successfully applied to monitor the quality consistency of alkaloids in powdered poppy capsule extractive. Averagely linear-quantified fingerprint method provided accurate qualitative and quantitative similarities for chromatographic fingerprints of Chinese herbal medicines. The stability and operability of the averagely linear-quantified fingerprint method were verified by the parameter r. The average linear qualitative similarity SL (improved based on conventional qualitative "Similarity") was used as a qualitative criterion in the averagely linear-quantified fingerprint method, and the average linear quantitative similarity PL was introduced as a quantitative one. PL was able to identify the difference in the content of all the chemical components. In addition, PL was found to be highly correlated to the contents of two alkaloid compounds (morphine and codeine). A simple flow injection analysis was developed for the determination of antioxidant capacity in Chinese Herbal Medicines, which was based on the scavenging of 2,2-diphenyl-1-picrylhydrazyl radical by antioxidants. The fingerprint-efficacy relationship linking chromatographic fingerprints and antioxidant activities was investigated utilizing orthogonal projection to latent structures method, which provided important pharmacodynamic information for Chinese herbal medicines quality control. In summary, quantitative fingerprinting based on averagely linear-quantified fingerprint method can be applied for monitoring the quality consistency of Chinese herbal medicines, and the constructed orthogonal projection to latent structures model is particularly suitable for investigating the fingerprint-efficacy relationship. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. A wild model of linear arithmetic and discretely ordered modules

    Glivický, Petr; Pudlák, Pavel

    2017-01-01

    Vol. 63, No. 6 (2017), pp. 501-508. ISSN 0942-5616. EU Projects: European Commission (XE) 339691 - FEALORA. Institutional support: RVO:67985840. Keywords: linear arithmetics. Subject RIV: BA - General Mathematics. OECD field: Pure mathematics. Impact factor: 0.250, year: 2016

  11. Evaluation of linear induction motor characteristics : the Yamamura model

    1975-04-30

    The Yamamura theory of the double-sided linear induction motor (LIM) excited by a constant current source is discussed in some detail. The report begins with a derivation of thrust and airgap power using the method of vector potentials and theorem of...

  12. Effect of correlation on covariate selection in linear and nonlinear mixed effect models.

    Bonate, Peter L

    2017-01-01

    The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without the covariate. The "best" covariate was identified using either the likelihood ratio test statistic or AIC. Weight and body surface area (calculated using the Gehan and George equation, 1970) were highly correlated (r = 0.98). Body surface area was often selected as a better covariate than weight, sometimes as high as 1 in 5 times, when weight was the covariate used in the data generating mechanism. In a second simulation, parent drug concentration and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals, the metabolite was chosen as a better predictor as often as 1 in 5 times, depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can be chosen as being a better predictor than another covariate in a linear or nonlinear population analysis by sheer correlation. These results explain why, for the same drug, different covariates may be identified in different analyses. Copyright © 2016 John Wiley & Sons, Ltd.
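
    A simplified, linear-only sketch of the kind of simulation described above, with made-up parameter values: the response is generated from "weight" alone, a surrogate covariate correlated at roughly r = 0.98 stands in for body surface area, and the "best" univariate model is picked by AIC over many replicates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_subjects, n_reps = 100, 500
wrong_pick = 0

for _ in range(n_reps):
    weight = rng.normal(75, 15, n_subjects)
    # correlated surrogate covariate (r ~ 0.98), standing in for body surface area
    bsa = 0.02 * weight + rng.normal(0, 0.06, n_subjects)
    # data-generating mechanism uses weight only
    y = 2.0 + 0.05 * weight + rng.normal(0, 1.0, n_subjects)

    fit_w = sm.OLS(y, sm.add_constant(weight)).fit()
    fit_b = sm.OLS(y, sm.add_constant(bsa)).fit()
    if fit_b.aic < fit_w.aic:          # the correlated covariate "wins"
        wrong_pick += 1

print(f"correlated covariate selected in {wrong_pick / n_reps:.1%} of replicates")
```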

  13. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.

  14. Model structure learning: A support vector machine approach for LPV linear-regression models

    Toth, R.; Laurain, V.; Zheng, W-X.; Poolla, K.

    2011-01-01

    Accurate parametric identification of Linear Parameter-Varying (LPV) systems requires an optimal prior selection of a set of functional dependencies for the parametrization of the model coefficients. Inaccurate selection leads to structural bias while over-parametrization results in a variance

  15. Admissible Estimators in the General Multivariate Linear Model with Respect to Inequality Restricted Parameter Set

    Shangli Zhang

    2009-01-01

    By using the methods of linear algebra and matrix inequality theory, we obtain the characterization of admissible estimators in the general multivariate linear model with respect to an inequality-restricted parameter set. In the classes of homogeneous and general linear estimators, the necessary and sufficient conditions that the estimators of the regression coefficient function are admissible are established.

  16. Preisach hysteresis model for non-linear 2D heat diffusion

    Jancskar, Ildiko; Ivanyi, Amalia

    2006-01-01

    This paper analyzes a non-linear heat diffusion process when the thermal diffusivity behaviour is a hysteretic function of the temperature. Modelling this temperature dependence, the discrete Preisach algorithm as a general hysteresis model has been integrated into a non-linear multigrid solver. The hysteretic diffusion shows a heating-cooling asymmetry in character. The presented type of hysteresis speeds up the thermal processes in the modelled systems in a very interesting non-linear way.

  17. Modeling of rail track substructure linear elastic coupling

    2015-09-30

    Most analyses of rail dynamics neglect contribution of the soil, or treat it in a very simple manner such as using spring elements. This can cause accuracy issues in examining dynamics for passenger comfort, derailment, substructure analysis, or othe...

  18. Modelling of the thermal parameters of high-power linear laser-diode arrays. Two-dimensional transient model

    Bezotosnyi, V V; Kumykov, Kh Kh

    1998-01-01

    A two-dimensional transient thermal model of an injection laser is developed. This model makes it possible to analyse the temperature profiles in pulsed and cw stripe lasers with an arbitrary width of the stripe contact, and also in linear laser-diode arrays. This can be done for any durations and repetition rates of the pump pulses. The model can also be applied to two-dimensional laser-diode arrays operating quasicontinuously. An analysis is reported of the influence of various structural parameters of a diode array on the thermal regime of a single laser. The temperature distributions along the cavity axis are investigated for different variants of mounting a crystal on a heat sink. It is found that the temperature drop along the cavity length in cw and quasi-cw laser diodes may exceed 20%. (lasers)

  19. On the interpretation of weight vectors of linear models in multivariate neuroimaging.

    Haufe, Stefan; Meinecke, Frank; Görgen, Kai; Dähne, Sven; Haynes, John-Dylan; Blankertz, Benjamin; Bießmann, Felix

    2014-02-15

    models. This procedure enables the neurophysiological interpretation of the parameters of linear backward models. We hope that this work raises awareness for an often encountered problem and provides a theoretical basis for conducting better interpretable multivariate neuroimaging analyses. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Non-linear nuclear engineering models as genetic programming application

    Domingos, Roberto P.; Schirru, Roberto; Martinez, Aquilino S.

    1997-01-01

    This work presents the Genetic Programming paradigm and a nuclear application. Genetic Programming, a field of Artificial Intelligence based on the concepts of Species Evolution and Natural Selection, can be understood as a self-programming process where the computer is the main agent responsible for the discovery of a program able to solve a given problem. In the present case, the problem was to find a mathematical expression in symbolic form able to express the existing relation between the equivalent ratio of a fuel cell, the enrichment of fuel elements and the multiplication factor. Such an expression would avoid repeated execution of reactor physics codes for core optimization. The results were compared with those obtained by different techniques such as Neural Networks and Linear Multiple Regression. Genetic Programming has been shown to present a performance as good as, and in some respects superior to, Neural Networks and Linear Multiple Regression. (author). 10 refs., 8 figs., 1 tab.

  1. AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S

    Klumpp, A. R.

    1994-01-01

    This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPAK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical and minimize the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used in representing motion between two coordinate frames). Single precision and double precision floating point arithmetic is available in addition to the standard double precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.

  2. Generation companies decision-making modeling by linear control theory

    Gutierrez-Alcaraz, G.; Sheble, Gerald B.

    2010-01-01

    This paper proposes four decision-making procedures to be employed by electric generating companies as part of their bidding strategies when competing in an oligopolistic market: naive, forward, adaptive, and moving average expectations. Decision-making is formulated in a dynamic framework by using linear control theory. The results reveal that interactions among all GENCOs affect market dynamics. Several numerical examples are reported, and conclusions are presented. (author)

  3. A Comparison of Alternative Estimators of Linearly Aggregated Macro Models

    Fikri Akdeniz

    2012-07-01

    This paper deals with the linear aggregation problem. For the true underlying micro relations, which explain the micro behavior of the individuals, no restrictive rank conditions are assumed. Thus the analysis is presented in a framework utilizing generalized inverses of singular matrices. We investigate several estimators for certain linear transformations of the systematic part of the corresponding macro relations. Homogeneity of micro parameters is discussed. Best linear unbiased estimation for micro parameters is described.

  4. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3% /1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3% /1 mm.
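
    A minimal sketch of the inverse-transform-sampling step described above, using a made-up one-dimensional energy distribution in place of a real phase space file: an empirical CDF is built on a grid and inverted by interpolation to draw new particle energies.

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in for energies read from a phase space file (MeV, illustrative only)
psf_energies = rng.gamma(shape=2.0, scale=1.0, size=200_000)

# build an empirical PDF/CDF on a fixed energy grid
counts, edges = np.histogram(psf_energies, bins=512, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
cdf = np.cumsum(counts) * np.diff(edges)
cdf /= cdf[-1]

def sample_energy(n):
    """Inverse transform sampling: map uniform deviates through the
    inverse of the empirical CDF by linear interpolation."""
    u = rng.random(n)
    return np.interp(u, cdf, centres)

print(sample_energy(10).round(3))
```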

  6. Modelling and analysing interoperability in service compositions using COSMO

    Quartel, Dick; van Sinderen, Marten J.

    2008-01-01

    A service composition process typically involves multiple service models. These models may represent the composite and composed services from distinct perspectives, e.g. to model the role of some system that is involved in a service, and at distinct abstraction levels, e.g. to model the goal,

  7. Mixed models, linear dependency, and identification in age-period-cohort models.

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.

    Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko

    2016-03-01

    In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Linear and Non-linear Multi-Input Multi-Output Model Predictive Control of Continuous Stirred Tank Reactor

    Muayad Al-Qaisy

    2015-02-01

    In this article, a multi-input multi-output (MIMO) linear model predictive controller (LMPC) based on a state space model and a nonlinear model predictive controller based on a neural network (NNMPC) are applied to a continuous stirred tank reactor (CSTR). The idea is to have a good control system that will be able to give optimal performance, reject high load disturbance, and track set point change. In order to study the performance of the two model predictive controllers, a MIMO Proportional-Integral-Derivative (PID) controller strategy is used as benchmark. The LMPC, NNMPC, and PID strategies are used for controlling the residual concentration (CA) and reactor temperature (T). NNMPC control shows a superior performance over the LMPC and PID controllers by presenting a smaller overshoot and shorter settling time.

  10. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
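
    A compact sketch of the two statistical comparisons discussed above, with simulated paired measurements standing in for NGS-derived values: Bland-Altman bias and 95% limits of agreement, and Deming regression (error-variance ratio assumed equal to 1) whose slope and intercept separate proportional from constant error.

```python
import numpy as np

rng = np.random.default_rng(7)

truth = rng.uniform(5, 100, 80)                                   # e.g. variant allele fractions (%)
method_a = truth + rng.normal(0, 2.0, truth.size)                 # reference assay
method_b = 1.5 + 1.05 * truth + rng.normal(0, 2.0, truth.size)    # constant + proportional error

# --- Bland-Altman: bias and 95% limits of agreement ---
diff = method_b - method_a
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

# --- Deming regression with error-variance ratio lambda = 1 ---
lam = 1.0
x, y = method_a, method_b
sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
sxy = np.cov(x, y, ddof=1)[0, 1]
slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
intercept = y.mean() - slope * x.mean()

print(f"Bland-Altman bias {bias:.2f}, limits of agreement {loa[0]:.2f} to {loa[1]:.2f}")
print(f"Deming slope {slope:.3f} (proportional error), intercept {intercept:.2f} (constant error)")
```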

  11. The oscillatory behavior of heated channels: an analysis of the density effect. Part I. The mechanism (non linear analysis). Part II. The oscillations thresholds (linearized analysis); Sur le comportement oscillatoire de canaux chauffes. - Etude theorique de l'effet de densite. 1ere partie: le mecanisme (analyse non lineaire), 2eme partie: seuils d'oscillation (analyse lineaire)

    Boure, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Grenoble, 38 (France)

    1967-07-01

    The problem of the oscillatory behavior of heated channels is presented in terms of delay-times and a density effect model is proposed to explain the behavior. The density effect is the consequence of the physical relationship between enthalpy and density of the fluid. In the first part non-linear equations are derived from the model in a dimensionless form. A description of the mechanism of oscillations is given, based on the analysis of the equations. An inventory of the governing parameters is established. At this point of the study, some facts in agreement with the experiments can be pointed out. In the second part the start of the oscillatory behavior of heated channels is studied in terms of the density effect. The threshold equations are derived, after linearization of the equations obtained in Part I. They can be solved rigorously by numerical methods to yield: 1) a relation between the describing parameters at the onset of oscillations, and 2) the frequency of the oscillations. By comparing the results predicted by the model to the experimental behavior of actual systems, the density effect is very often shown to be the actual cause of oscillatory behaviors. (author)

  12. Utility of low-order linear nuclear-power-plant models in plant diagnostics and control

    Tylee, J.L.

    1981-01-01

    A low-order, linear model of a pressurized water reactor (PWR) plant is described and evaluated. The model consists of 23 linear, first-order difference equations and simulates all subsystems of both the primary and secondary sides of the plant. Comparisons between the calculated model response and available test data show the model to be an adequate representation of the actual plant dynamics. Suggested uses of the model in an on-line digital plant diagnostics and control system are presented.

  13. LIMO EEG: a toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data.

    Pernet, Cyril R; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A

    2011-01-01

    Magnetic- and electric-evoked brain responses have traditionally been analyzed by comparing the peaks or mean amplitudes of signals from selected channels and averaged across trials. More recently, tools have been developed to investigate single trial response variability (e.g., EEGLAB) and to test differences between averaged evoked responses over the entire scalp and time dimensions (e.g., SPM, Fieldtrip). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data. In addition, LIMO EEG provides robust parametric tests, therefore providing a new and complementary tool in the analysis of neural evoked responses.

  14. Some computer simulations based on the linear relative risk model

    Gilbert, E.S.

    1991-10-01

    This report presents the results of computer simulations designed to evaluate and compare the performance of the likelihood ratio statistic and the score statistic for making inferences about the linear relative risk model. The work was motivated by data on workers exposed to low doses of radiation, and the report includes illustration of several procedures for obtaining confidence limits for the excess relative risk coefficient based on data from three studies of nuclear workers. The computer simulations indicate that with small sample sizes and highly skewed dose distributions, asymptotic approximations to the score statistic or to the likelihood ratio statistic may not be adequate. For testing the null hypothesis that the excess relative risk is equal to zero, the asymptotic approximation to the likelihood ratio statistic was adequate, but use of the asymptotic approximation to the score statistic rejected the null hypothesis too often. Frequently the likelihood was maximized at the lower constraint, and when this occurred, the asymptotic approximations for the likelihood ratio and score statistics did not perform well in obtaining upper confidence limits. The score statistic and likelihood ratio statistic were found to perform comparably in terms of power and width of the confidence limits. It is recommended that with modest sample sizes, confidence limits be obtained using computer simulations based on the score statistic. Although nuclear worker studies are emphasized in this report, its results are relevant for any study investigating linear dose-response functions with highly skewed exposure distributions. 22 refs., 14 tabs
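
    A toy illustration (not the report's own code) of likelihood-based confidence limits for the excess relative risk coefficient β in the linear relative risk model RR(d) = 1 + βd: Poisson case counts are simulated with a highly skewed dose distribution, the baseline rate is profiled out analytically, and a 95% likelihood-ratio interval is read off a grid.

```python
import numpy as np

rng = np.random.default_rng(3)

# simulated cohort: skewed dose distribution, Poisson case counts,
# expected cases = person-years * baseline rate * (1 + beta * dose)
n = 400
dose = rng.lognormal(mean=-2.0, sigma=1.0, size=n)     # highly skewed doses (illustrative)
pyr = rng.uniform(500, 5000, size=n)                    # person-years at risk
beta_true, lam0_true = 0.5, 2e-3
cases = rng.poisson(pyr * lam0_true * (1.0 + beta_true * dose))

def profile_loglik(beta):
    """Profile log-likelihood: the baseline rate is maximised analytically for each beta."""
    rr = 1.0 + beta * dose
    if np.any(rr <= 0):
        return -np.inf
    lam0 = cases.sum() / np.sum(pyr * rr)
    mu = pyr * lam0 * rr
    return np.sum(cases * np.log(mu) - mu)

grid = np.linspace(-0.9 / dose.max(), 3.0, 2000)        # keep RR positive at the low end
ll = np.array([profile_loglik(b) for b in grid])
mle = grid[ll.argmax()]
inside = 2.0 * (ll.max() - ll) <= 3.84                  # 95% likelihood-ratio interval
print(f"beta_hat = {mle:.3f}, 95% LRT CI = ({grid[inside].min():.3f}, {grid[inside].max():.3f})")
```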

  15. Partially linear varying coefficient models stratified by a functional covariate

    Maity, Arnab; Huang, Jianhua Z.

    2012-01-01

    We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric

  16. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
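
    A minimal sketch of the conversion-model idea with simulated totals: a linear regression on log-transformed ΣPCB relates a reduced congener set to the full 209-congener set, and the fitted model is then used to convert new reduced-set measurements.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)

# paired samples analysed by both methods (ng/g, illustrative values)
sum209 = rng.lognormal(mean=4.0, sigma=0.8, size=60)        # full 209-congener totals
sum119 = 0.93 * sum209 * np.exp(rng.normal(0, 0.05, 60))    # reduced set captures ~93%

# fit the conversion model on log-transformed totals
X = sm.add_constant(np.log(sum119))
fit = sm.OLS(np.log(sum209), X).fit()
print(fit.params)        # intercept and slope of the conversion model

# convert new reduced-set measurements to estimated full-set totals
new_sum119 = np.array([25.0, 140.0, 900.0])
pred_log = fit.predict(sm.add_constant(np.log(new_sum119)))
print(np.exp(pred_log).round(1))
```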

  17. A linear time layout algorithm for business process models

    Gschwind, T.; Pinggera, J.; Zugal, S.; Reijers, H.A.; Weber, B.

    2014-01-01

    The layout of a business process model influences how easily it can be understood. Existing layout features in process modeling tools often rely on graph representations, but do not take the specific properties of business process models into account. In this paper, we propose an algorithm that is

  18. Free-piston engine linear generator for hybrid vehicles modeling study

    Callahan, T. J.; Ingram, S. K.

    1995-05-01

    Development of a free piston engine linear generator was investigated for use as an auxiliary power unit for a hybrid electric vehicle. The main focus of the program was to develop an efficient linear generator concept to convert the piston motion directly into electrical power. Computer modeling techniques were used to evaluate five different designs for linear generators. These designs included permanent magnet generators, reluctance generators, linear DC generators, and two and three-coil induction generators. The efficiency of the linear generator was highly dependent on the design concept. The two-coil induction generator was determined to be the best design, with an efficiency of approximately 90 percent.

  19. Inconsistency of Bayesian Inference for Misspecified Linear Models, and a Proposal for Repairing It

    Grünwald, P.; van Ommen, T.

    2017-01-01

    We empirically show that Bayesian inference can be inconsistent under misspecification in simple linear regression problems, both in a model averaging/selection and in a Bayesian ridge regression setting. We use the standard linear model, which assumes homoskedasticity, whereas the data are

  20. Computational Tools for Probing Interactions in Multiple Linear Regression, Multilevel Modeling, and Latent Curve Analysis

    Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.

    2006-01-01

    Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…

  1. Genomic prediction based on data from three layer lines using non-linear regression models

    Huang, H.; Windig, J.J.; Vereijken, A.; Calus, M.P.L.

    2014-01-01

    Background - Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. Methods - In an attempt to alleviate

  2. Frequency Response of Synthetic Vocal Fold Models with Linear and Nonlinear Material Properties

    Shaw, Stephanie M.; Thomson, Scott L.; Dromey, Christopher; Smith, Simeon

    2012-01-01

    Purpose: The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency (F[subscript 0]) during anterior-posterior stretching. Method: Three materially linear and 3 materially nonlinear models were…

  3. Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it

    P.D. Grünwald (Peter); T. van Ommen (Thijs)

    2017-01-01

    We empirically show that Bayesian inference can be inconsistent under misspecification in simple linear regression problems, both in a model averaging/selection and in a Bayesian ridge regression setting. We use the standard linear model, which assumes homoskedasticity, whereas the data

  4. Non-linear characterisation of the physical model of an ancient masonry bridge

    Fragonara, L Zanotti; Ceravolo, R; Matta, E; Quattrone, A; De Stefano, A; Pecorelli, M

    2012-01-01

    This paper presents the non-linear investigations carried out on a scaled model of a two-span masonry arch bridge. The model has been built in order to study the effect of central pier settlement due to riverbank erosion. Progressive damage was induced in several steps by applying increasing settlements at the central pier. For each settlement step, harmonic shaker tests were conducted under different excitation levels, thus allowing for the non-linear identification of the progressively damaged system. The shaker tests were performed at resonance with the modal frequencies of the structure, which were determined from a previous linear identification. Estimated non-linearity parameters, which result from the systematic application of restoring-force-based identification algorithms, can corroborate models to be used in the reassessment of existing structures. The method used for non-linear identification allows monitoring the evolution of non-linear parameters or indicators which can be used in damage and safety assessment.

  5. Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming

    Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo

    2013-05-23

    This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule. Lower expected energy bills result given fuel cell outages, with potential savings exceeding 6 percent.

  6. Genetic parameters for direct and maternal calving ease in Walloon dairy cattle based on linear and threshold models.

    Vanderick, S; Troch, T; Gillon, A; Glorieux, G; Gengler, N

    2014-12-01

    Calving ease scores from Holstein dairy cattle in the Walloon Region of Belgium were analysed using univariate linear and threshold animal models. Variance components and derived genetic parameters were estimated from a data set including 33,155 calving records. Included in the models were season, herd and sex of calf × age of dam classes × group of calvings interaction as fixed effects, herd × year of calving, maternal permanent environment and animal direct and maternal additive genetic as random effects. Models were fitted with the genetic correlation between direct and maternal additive genetic effects either estimated or constrained to zero. Direct heritability for calving ease was approximately 8% with linear models and approximately 12% with threshold models. Maternal heritabilities were approximately 2 and 4%, respectively. Genetic correlation between direct and maternal additive effects was found to be not significantly different from zero. Models were compared in terms of goodness of fit and predictive ability. Criteria of comparison such as mean squared error, correlation between observed and predicted calving ease scores as well as between estimated breeding values were estimated from 85,118 calving records. The results provided few differences between linear and threshold models even though correlations between estimated breeding values from subsets of data for sires with progeny from linear model were 17 and 23% greater for direct and maternal genetic effects, respectively, than from threshold model. For the purpose of genetic evaluation for calving ease in Walloon Holstein dairy cattle, the linear animal model without covariance between direct and maternal additive effects was found to be the best choice. © 2014 Blackwell Verlag GmbH.

  7. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    Cheng, Guang; Zhou, Lan; Huang, Jianhua Z.

    2014-01-01

    We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based

  8. Modelling, singular perturbation and bifurcation analyses of bitrophic food chains.

    Kooi, B W; Poggiale, J C

    2018-04-20

    Two predator-prey model formulations are studied: the classical Rosenzweig-MacArthur (RM) model and the Mass Balance (MB) chemostat model. When the growth and loss rates of the predator are much smaller than those of the prey, these models are slow-fast systems, leading mathematically to a singular perturbation problem. In contrast to the RM model, the resources for the prey are modelled explicitly in the MB model, but this comes with additional parameters. These parameter values are chosen such that the two models become easy to compare. Both models exhibit a transcritical bifurcation, a threshold above which invasion of the predator into the prey-only system occurs, and a Hopf bifurcation, where the interior equilibrium becomes unstable, leading to a stable limit cycle. The fast-slow limit cycles are called relaxation oscillations, which for increasing differences in time scales lead to the well-known degenerate trajectories consisting of concatenations of slow and fast parts of the trajectory. In the fast-slow version of the RM model a canard explosion of the stable limit cycles occurs in the oscillatory region of the parameter space. To our knowledge this type of dynamics has not previously been observed for the RM model, nor even for more complex ecosystem models. When a bifurcation parameter crosses the Hopf bifurcation point, the amplitude of the emerging stable limit cycles increases. However, depending on the perturbation parameter, the shape of this limit cycle changes abruptly from one consisting of two concatenated slow and fast episodes with small amplitude, to a large-amplitude shape similar to the relaxation oscillation, the well-known degenerate phase trajectory consisting of four episodes (a concatenation of two slow and two fast). The canard explosion point is accurately predicted by using an extended asymptotic expansion technique in the perturbation and bifurcation parameters simultaneously where the small
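
    To make the slow-fast structure concrete, here is a sketch of the Rosenzweig-MacArthur model with hypothetical parameter values in which the predator's growth and loss terms are scaled by a small parameter ε; shrinking ε pushes the stable limit cycle towards a relaxation oscillation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rosenzweig-MacArthur model with a slow predator (time-scale separation eps)
r, K, a, h, e, m, eps = 1.0, 10.0, 1.0, 0.5, 0.6, 0.4, 0.05

def rm(t, z):
    x, y = z
    func = a * x / (1.0 + a * h * x)          # Holling type II functional response
    dx = r * x * (1.0 - x / K) - func * y     # fast prey dynamics
    dy = eps * (e * func - m) * y             # slow predator dynamics
    return [dx, dy]

sol = solve_ivp(rm, (0.0, 600.0), [2.0, 1.0], max_step=0.1)
x, y = sol.y
print(f"prey range     [{x.min():.2f}, {x.max():.2f}]")
print(f"predator range [{y.min():.2f}, {y.max():.2f}]")
```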

  9. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.
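
    A schematic comparison in the same spirit, though not the Bayesian implementations used in the paper: ridge regression stands in for a linear-on-markers model and RBF kernel ridge regression for an RKHS model, compared by cross-validated predictive correlation on simulated marker data containing a non-additive effect.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)

n_lines, n_markers = 300, 1000
X = rng.binomial(2, 0.3, size=(n_lines, n_markers)).astype(float)   # marker genotypes 0/1/2
beta = rng.normal(0, 0.05, n_markers)
# trait with additive effects plus a non-additive (epistatic) term and noise
y = X @ beta + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0, 1.0, n_lines)

linear = Ridge(alpha=10.0)
rkhs = KernelRidge(alpha=1.0, kernel="rbf", gamma=1.0 / n_markers)

for name, model in [("linear ridge", linear), ("RBF kernel ridge", rkhs)]:
    pred = cross_val_predict(model, X, y, cv=5)
    print(f"{name:17s} predictive correlation: {np.corrcoef(y, pred)[0, 1]:.3f}")
```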

  11. Analysing the Linux kernel feature model changes using FMDiff

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  12. Analysing Models as a Knowledge Technology in Transport Planning

    Gudmundsson, Henrik

    2011-01-01

    Models belong to a wider family of knowledge technologies applied in the transport area. Models sometimes share with other such technologies the fate of not being used as intended, or not at all. The result may be ill-conceived plans as well as wasted resources. Frequently, the blame … critical analytic literature on knowledge utilization and policy influence. A simple scheme based in this literature is drawn up to provide a framework for discussing the interface between urban transport planning and model use. A successful example of model use in Stockholm, Sweden is used as a heuristic device to illuminate how such an analytic scheme may allow patterns of insight about the use, influence and role of models in planning to emerge. The main contribution of the paper is to demonstrate that concepts and terminologies from knowledge use literature can provide interpretations of significance …

  13. Modeling exposure–lag–response associations with distributed lag non-linear models

    Gasparrini, Antonio

    2014-01-01

    In biomedical research, a health effect is frequently associated with protracted exposures of varying intensity sustained in the past. The main complexity of modeling and interpreting such phenomena lies in the additional temporal dimension needed to express the association, as the risk depends on both intensity and timing of past exposures. This type of dependency is defined here as exposure–lag–response association. In this contribution, I illustrate a general statistical framework for such associations, established through the extension of distributed lag non-linear models, originally developed in time series analysis. This modeling class is based on the definition of a cross-basis, obtained by the combination of two functions to flexibly model linear or nonlinear exposure-responses and the lag structure of the relationship, respectively. The methodology is illustrated with an example application to cohort data and validated through a simulation study. This modeling framework generalizes to various study designs and regression models, and can be applied to study the health effects of protracted exposures to environmental factors, drugs or carcinogenic agents, among others. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24027094
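
    A stripped-down sketch of the cross-basis construction described above, using simple polynomial bases (real DLNM applications typically use splines) and made-up data: each column of the cross-basis combines one exposure-basis function with one lag-basis function, summed over the lag window, and the resulting design matrix is fitted by ordinary least squares.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)

# daily exposure series and outcome (illustrative values only)
n, max_lag = 1000, 14
exposure = rng.gamma(2.0, 10.0, n)

def cross_basis(x, L, exp_deg=2, lag_deg=2):
    """DLNM-style cross-basis: a polynomial basis in exposure crossed with a
    polynomial basis in lag, summed over lags 0..L."""
    lags = np.arange(L + 1)
    exp_basis = lambda v: np.column_stack([v ** (d + 1) for d in range(exp_deg)])
    lag_basis = np.column_stack([lags ** d for d in range(lag_deg)])    # (L+1, lag_deg)
    cb = np.zeros((len(x), exp_deg * lag_deg))
    for t in range(L, len(x)):
        past = x[t - lags]                   # exposures at lags 0..L
        fb = exp_basis(past)                 # (L+1, exp_deg)
        cb[t] = (fb[:, :, None] * lag_basis[:, None, :]).sum(axis=0).ravel()
    return cb[L:]

cb = cross_basis(exposure, max_lag)
# simulate an outcome whose level depends on the exposure history, then fit by OLS
true_effect = 0.002 * np.array([np.sum(exposure[t - np.arange(max_lag + 1)] *
                                        np.exp(-np.arange(max_lag + 1) / 5.0))
                                for t in range(max_lag, n)])
y = 10.0 + true_effect + rng.normal(0, 0.5, n - max_lag)
fit = sm.OLS(y, sm.add_constant(cb)).fit()
print(fit.params.round(5))
```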

  14. Linear regression models for quantitative assessment of left ...

    Changes in left ventricular structures and function have been reported in cardiomyopathies. No prediction models have been established in this environment. This study established regression models for prediction of left ventricular structures in normal subjects. A sample of normal subjects was drawn from a large urban ...

  15. Non-linear modeling of active biohybrid materials

    Paetsch, C.; Dorfmann, A.

    2013-01-01

    … such as those of Manduca sexta. In this study, we propose a model to assist in the analysis of biohybrid constructs by generalizing a recently proposed constitutive law for Manduca muscle tissue. The continuum model accounts (i) for the stimulation of muscle

  16. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Oluwaseun Egbelowo

    2017-05-01

    We extend the nonstandard finite difference method of solution to the study of pharmacokinetic–pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusion) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response of these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
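
    A small sketch, with illustrative parameter values rather than the paper's models, of a non-standard finite difference scheme for a one-compartment constant-rate infusion model dC/dt = R/V − kC; the non-standard denominator function φ(h) = (1 − e^(−kh))/k makes the discrete scheme reproduce the exact solution of this linear ODE at any step size.

```python
import numpy as np

# one-compartment model with constant-rate IV infusion: dC/dt = R/V - k*C
k, R, V = 0.2, 50.0, 10.0      # elimination rate (1/h), infusion rate (mg/h), volume (L)
a = R / V

def nsfd(h, t_end=24.0, c0=0.0):
    """Non-standard finite difference scheme with denominator function
    phi(h) = (1 - exp(-k*h))/k, which reproduces the exact solution of
    this linear ODE for any step size h."""
    phi = (1.0 - np.exp(-k * h)) / k
    steps = int(round(t_end / h))
    c = np.empty(steps + 1)
    c[0] = c0
    for n in range(steps):
        c[n + 1] = c[n] + phi * (a - k * c[n])
    return c

exact = (a / k) * (1.0 - np.exp(-k * 24.0))      # analytic concentration at t = 24 h
for h in (0.1, 1.0, 6.0):
    print(f"h = {h:4.1f} h  ->  C(24) = {nsfd(h)[-1]:.6f}   (exact {exact:.6f})")
```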

  17. Nonlinearity measure and internal model control based linearization in anti-windup design

    Perev, Kamen [Systems and Control Department, Technical University of Sofia, 8 Cl. Ohridski Blvd., 1756 Sofia (Bulgaria)

    2013-12-18

    This paper considers the problem of internal model control based linearization in anti-windup design. The nonlinearity measure concept is used for quantifying the control system degree of nonlinearity. The linearizing effect of a modified internal model control structure is presented by comparing the nonlinearity measures of the open-loop and closed-loop systems. It is shown that the linearization properties are improved by increasing the control system local feedback gain. However, it is emphasized that at the same time the stability of the system deteriorates. The conflicting goals of stability and linearization are resolved by solving the design problem in different frequency ranges.

  18. Non-linear mixed-effects pharmacokinetic/pharmacodynamic modelling in NLME using differential equations

    Tornøe, Christoffer Wenzel; Agersø, Henrik; Madsen, Henrik

    2004-01-01

    The standard software for non-linear mixed-effect analysis of pharmacokinetic/pharmacodynamic (PK/PD) data is NONMEM, while the non-linear mixed-effects package NLME is an alternative as long as the models are fairly simple. We present the nlmeODE package, which combines the ordinary differential equation (ODE) solver package odesolve and the non-linear mixed-effects package NLME, thereby enabling the analysis of complicated systems of ODEs by non-linear mixed-effects modelling. The pharmacokinetics of the anti-asthmatic drug theophylline is used to illustrate the applicability of the nlmeODE package.

  19. GOTHIC MODEL OF BWR SECONDARY CONTAINMENT DRAWDOWN ANALYSES

    Hansen, P.N.

    2004-01-01

    This article introduces a GOTHIC version 7.1 model of the Secondary Containment Reactor Building post-LOCA drawdown analysis for a BWR. GOTHIC is an EPRI-sponsored thermal hydraulic code. This analysis is required by the Utility to demonstrate an ability to restore and maintain the Secondary Containment Reactor Building negative pressure condition. The technical and regulatory issues associated with this modeling are presented. The analysis includes the effect of wind, elevation and thermal impacts on pressure conditions. The model includes a multiple-volume representation which includes the spent fuel pool. In addition, heat sources and sinks are modeled as one-dimensional heat conductors. The leakage into the building is modeled to include both laminar and turbulent behavior, as established by actual plant test data. The GOTHIC code provides components to model heat exchangers used to provide fuel pool cooling as well as area cooling via air coolers. The results of the evaluation are used to demonstrate the time that the Reactor Building is at a pressure that exceeds external conditions. This time period is established with the GOTHIC model based on the worst-case pressure conditions on the building. For this time period the Utility must assume the primary containment leakage goes directly to the environment. Once the building pressure is restored below outside conditions, the release to the environment can be credited as a filtered release.

  20. A linear programming model to optimize diets in environmental policy scenarios.

    Moraes, L E; Wilen, J E; Robinson, P H; Fadel, J G

    2012-03-01

    The objective was to develop a linear programming model to formulate diets for dairy cattle when environmental policies are present and to examine effects of these policies on diet formulation and dairy cattle nitrogen and mineral excretions as well as methane emissions. The model was developed as a minimum cost diet model. Two types of environmental policies were examined: a tax and a constraint on methane emissions. A tax was incorporated to simulate a greenhouse gas emissions tax policy, and prices of carbon credits in the current carbon markets were attributed to the methane production variable. Three independent runs were made, using carbon dioxide equivalent prices of $5, $17, and $250/t. A constraint was incorporated into the model to simulate the second type of environmental policy, reducing methane emissions by predetermined amounts. The linear programming formulation of this second alternative enabled the calculation of marginal costs of reducing methane emissions. Methane emission and manure production by dairy cows were calculated according to published equations, and nitrogen and mineral excretions were calculated by mass conservation laws. Results were compared with respect to the values generated by a base least-cost model. Current prices of the carbon credit market did not appear onerous enough to have a substantive incentive effect in reducing methane emissions and altering diet costs of our hypothetical dairy herd. However, when emissions of methane were assumed to be reduced by 5, 10, and 13.5% from the base model, total diet costs increased by 5, 19.1, and 48.5%, respectively. Either these increased costs would be passed onto the consumer or dairy producers would go out of business. Nitrogen and potassium excretions were increased by 16.5 and 16.7% with a 13.5% reduction in methane emissions from the base model. Imposing methane restrictions would further increase the demand for grains and other human-edible crops, which is not a progressive
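
    A toy least-cost diet formulation in the spirit described above; all feed compositions, requirements and emission factors are made-up illustrative numbers, not values from the paper. Cost is minimised subject to a fixed dry-matter intake, energy and protein requirements, and a cap on enteric methane.

```python
import numpy as np
from scipy.optimize import linprog

# feeds: corn silage, alfalfa hay, corn grain, soybean meal (all values illustrative)
cost    = np.array([0.08, 0.18, 0.22, 0.45])   # $/kg dry matter
energy  = np.array([6.2, 5.4, 8.0, 8.5])       # MJ NEL / kg DM
protein = np.array([0.08, 0.18, 0.09, 0.49])   # kg CP / kg DM
methane = np.array([21.0, 24.0, 14.0, 13.0])   # g CH4 / kg DM (hypothetical factors)

dm_intake   = 22.0     # kg DM/day, fixed intake
energy_req  = 140.0    # MJ NEL/day
protein_req = 3.4      # kg CP/day
ch4_cap     = 400.0    # g CH4/day allowed under the policy scenario

# linprog minimises cost subject to A_ub @ x <= b_ub and A_eq @ x == b_eq
A_ub = np.vstack([-energy, -protein, methane])    # >= constraints flipped to <=
b_ub = np.array([-energy_req, -protein_req, ch4_cap])
A_eq = np.ones((1, 4))
b_eq = np.array([dm_intake])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")
print("feasible:", res.success)
print("kg DM of each feed:", res.x.round(2))
print("diet cost ($/day):", round(res.fun, 2))
```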

  1. Real-time prediction of extreme ambient carbon monoxide concentrations due to vehicular exhaust emissions using univariate linear stochastic models

    Sharma, P.; Khare, M.

    2000-01-01

    Historical time-series data of carbon monoxide (CO) concentration were analysed using the Box-Jenkins modelling approach. Univariate Linear Stochastic Models (ULSMs) were developed to examine the degree of prediction possible in situations where only a limited data set, restricted to the past record of the pollutant, is available. The developed models can be used to provide short-term, real-time forecasts of extreme CO concentrations for an Air Quality Control Region (AQCR) comprising a major traffic intersection in a Central Business District of Delhi City, India. (author)
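
    A minimal Box-Jenkins style sketch of this kind of short-term forecast, using statsmodels on a synthetic hourly CO series (the orders, data and parameters below are illustrative, not those of the Delhi study):

```python
# Fit an ARIMA model to a synthetic hourly CO series and forecast the next
# few hours. With real data, identification (ACF/PACF), estimation and
# diagnostic checking would guide the choice of (p, d, q) orders.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n = 500
hour_of_day = np.arange(n) % 24
daily = 2.0 + 1.5 * np.sin(2 * np.pi * hour_of_day / 24.0)   # diurnal cycle
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.7 * noise[t - 1] + rng.normal(scale=0.3)     # AR(1) noise
co = daily + noise                                            # synthetic CO (ppm)

fit = ARIMA(co, order=(1, 0, 1)).fit()       # orders chosen for illustration only
forecast = fit.get_forecast(steps=6)         # next six hours
print(forecast.predicted_mean)
print(forecast.conf_int())
```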

  2. Marginal Utility of Conditional Sensitivity Analyses for Dynamic Models

    Background/Question/Methods: Dynamic ecological processes may be influenced by many factors. Simulation models that mimic these processes often have complex implementations with many parameters. Sensitivity analyses are subsequently used to identify critical parameters whose uncertai...

  3. A shock absorber model for structure-borne noise analyses

    Benaziz, Marouane; Nacivet, Samuel; Thouverez, Fabrice

    2015-08-01

    Shock absorbers are often responsible for undesirable structure-borne noise in cars. The early numerical prediction of this noise in the automobile development process can save time and money and yet remains a challenge for industry. In this paper, a new approach to predicting shock absorber structure-borne noise is proposed; it consists in modelling the shock absorber and including the main nonlinear phenomena responsible for discontinuities in the response. The model set forth herein features: compressible fluid behaviour, nonlinear flow rate-pressure relations, valve mechanical equations and rubber mounts. The piston, base valve and complete shock absorber model are compared with experimental results. Sensitivity of the shock absorber response is evaluated and the most important parameters are classified. The response envelope is also computed. This shock absorber model is able to accurately reproduce local nonlinear phenomena and improves our state of knowledge on potential noise sources within the shock absorber.

  4. Plasma-safety assessment model and safety analyses of ITER

    Honda, T.; Okazaki, T.; Bartels, H.-H.; Uckan, N.A.; Sugihara, M.; Seki, Y.

    2001-01-01

    A plasma-safety assessment model has been provided on the basis of the plasma physics database of the International Thermonuclear Experimental Reactor (ITER) to analyze events involving plasma behavior. The model was implemented in a safety analysis code (SAFALY), which consists of a 0-D dynamic plasma model and a 1-D thermal behavior model of the in-vessel components. Unusual plasma events of ITER, e.g., overfueling, were calculated using the code, and plasma burning was found to be self-bounded by operation limits or passively shut down due to impurity ingress from overheated divertor targets. A sudden transition of the divertor plasma might lead to failure of the divertor target because of a sharp increase in the heat flux. However, the effects of this aggravating failure can be safely handled by the confinement boundaries. (author)

  5. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)
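
    The nuisance treatment of a theoretical uncertainty can be sketched as follows: the bias is scanned over its allowed range and the quoted p value is the supremum over that range. The numbers and the simple Gaussian setting below are illustrative, not taken from the paper.

```python
# Minimal sketch of the "nuisance" treatment of a theoretical uncertainty:
# a fixed but unknown bias delta in [-Delta, Delta] shifts the prediction,
# and the reported p-value is the supremum over the allowed bias range.
import numpy as np
from scipy.stats import norm

def p_value_given_bias(x_obs, mu_pred, sigma_exp, delta):
    """Two-sided Gaussian p-value if the theory bias were exactly delta."""
    z = (x_obs - (mu_pred + delta)) / sigma_exp
    return 2.0 * norm.sf(abs(z))

def nuisance_p_value(x_obs, mu_pred, sigma_exp, delta_max, n_scan=2001):
    deltas = np.linspace(-delta_max, delta_max, n_scan)
    return max(p_value_given_bias(x_obs, mu_pred, sigma_exp, d) for d in deltas)

x_obs, mu_pred = 1.00, 0.80        # measurement and central theory prediction
sigma_exp, delta_max = 0.05, 0.10  # experimental error, theory error range

print("p (no theory error):", p_value_given_bias(x_obs, mu_pred, sigma_exp, 0.0))
print("p (nuisance, sup over bias):", nuisance_p_value(x_obs, mu_pred, sigma_exp, delta_max))
# The "adaptive" p-value of the abstract goes further by letting the bias
# range itself depend on the significance level under consideration.
```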

  6. Partially linear varying coefficient models stratified by a functional covariate

    Maity, Arnab

    2012-10-01

    We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric component and a profiling estimator of the parametric component of the model and derive their asymptotic properties. Specifically, we show the consistency of the nonparametric functional estimates and derive the asymptotic expansion of the estimates of the parametric component. We illustrate the performance of our methodology using a simulation study and a real data application.

  7. Analysing earthquake slip models with the spatial prediction comparison test

    Zhang, L.; Mai, Paul Martin; Thingbaijam, Kiran Kumar; Razafindrakoto, H. N. T.; Genton, Marc G.

    2014-01-01

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
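
    The core idea, comparing two models against a reference through a loss field whose mean differential is tested with a correction for spatial correlation, can be sketched as follows; this is a heavily simplified illustration with synthetic fields, not the calibrated SPCT procedure of the paper.

```python
# Heavily simplified sketch of the idea behind a spatial prediction comparison:
# compare two slip models against a reference via a loss function and test the
# mean loss differential, inflating the variance crudely for spatial correlation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
ny, nx = 20, 40
reference = rng.gamma(2.0, 1.0, size=(ny, nx))       # "true" slip field
model_a = reference + rng.normal(0, 0.3, (ny, nx))    # closer model
model_b = reference + rng.normal(0, 0.8, (ny, nx))    # poorer model

def loss(model, ref):
    return (model - ref) ** 2                          # squared-error loss field

d = loss(model_a, reference) - loss(model_b, reference)   # loss differential
n = d.size

# crude effective sample size from the lag-1 correlation of the flattened field
r1 = np.corrcoef(d.ravel()[:-1], d.ravel()[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1) if abs(r1) < 1 else n

z = d.mean() / (d.std(ddof=1) / np.sqrt(n_eff))
p = 2 * norm.sf(abs(z))
print(f"mean loss differential = {d.mean():.3f}, z = {z:.2f}, p = {p:.3g}")
# A negative mean differential favours model_a over model_b relative to the reference.
```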

  8. Analysing earthquake slip models with the spatial prediction comparison test

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  9. Modeling results for a linear simulator of a divertor

    Hooper, E.B.; Brown, M.D.; Byers, J.A.; Casper, T.A.; Cohen, B.I.; Cohen, R.H.; Jackson, M.C.; Kaiser, T.B.; Molvik, A.W.; Nevins, W.M.; Nilson, D.G.; Pearlstein, L.D.; Rognlien, T.D.

    1993-01-01

    A divertor simulator, IDEAL, has been proposed by S. Cohen to study the difficult power-handling requirements of the tokamak program in general and the ITER program in particular. Projections of the power density in the ITER divertor reach ∼1 GW/m² along the magnetic fieldlines and >10 MW/m² on a surface inclined at a shallow angle to the fieldlines. These power densities are substantially greater than can be handled reliably on the surface, so new techniques are required to reduce the power density to a reasonable level. Although the divertor physics must be demonstrated in tokamaks, a linear device could contribute to the development because of its flexibility, the easy access to the plasma and to tested components, and long pulse operation (essentially cw). However, a decision to build a simulator requires not just the recognition of its programmatic value, but also confidence that it can meet the required parameters at an affordable cost. Accordingly, as reported here, it was decided to examine the physics of the proposed device, including kinetic effects resulting from the intense heating required to reach the plasma parameters, and to conduct an independent cost estimate. The detailed role of the simulator in a divertor program is not explored in this report.

  10. A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation

    Rajeswaran, Jeevanantham; Blackstone, Eugene H.

    2014-01-01

    In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A multiphase non-linear mixed effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and the risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation, using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time-varying coefficients. PMID:24919830

  11. Modeling winter precipitation over the Juneau Icefield, Alaska, using a linear model of orographic precipitation

    Roth, Aurora; Hock, Regine; Schuler, Thomas V.; Bieniek, Peter A.; Pelto, Mauri; Aschwanden, Andy

    2018-03-01

    Assessing and modeling precipitation in mountainous areas remains a major challenge in glacier mass balance modeling. Observations are typically scarce and reanalysis data and similar climate products are too coarse to accurately capture orographic effects. Here we use the linear theory of orographic precipitation model (LT model) to downscale winter precipitation from a regional climate model over the Juneau Icefield, one of the largest ice masses in North America (>4000 km²), for the period 1979-2013. The LT model is physically based yet computationally efficient, combining airflow dynamics and simple cloud microphysics. The resulting 1 km resolution precipitation fields show substantially reduced precipitation on the northeastern portion of the icefield compared to the southwestern side, a pattern that is not well captured in the coarse resolution (20 km) WRF data. Net snow accumulation derived from the LT model precipitation agrees well with point observations across the icefield. To investigate the robustness of the LT model results, we perform a series of sensitivity experiments varying hydrometeor fall speeds, the horizontal resolution of the underlying grid, and the source of the meteorological forcing data. The resulting normalized spatial precipitation pattern is similar for all sensitivity experiments, but local precipitation amounts vary strongly, with greatest sensitivity to variations in snow fall speed. Results indicate that the LT model has great potential to provide improved spatial patterns of winter precipitation for glacier mass balance modeling purposes in complex terrain, but ground observations are necessary to constrain model parameters to match total amounts.
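
    A heavily simplified, hydrostatic sketch of such an FFT-based linear precipitation filter is given below (in the spirit of Smith and Barstad's formulation); all parameter values and the idealised terrain are made up, and the WRF forcing, fall-speed tuning and background precipitation of the study are not represented.

```python
# Highly simplified sketch of an FFT-based linear orographic precipitation
# filter: terrain is Fourier transformed, multiplied by a transfer function
# combining uplift, airflow tilt and cloud/fallout time delays, and
# transformed back. Parameter values are illustrative only.
import numpy as np

def lt_precip(h, dx, u, v, nm=0.005, hw=2500.0, cw=0.004,
              tau_c=1000.0, tau_f=1000.0, p_background=1e-5):
    """Precipitation rate field [kg m^-2 s^-1] from terrain h [m], grid spacing dx [m]."""
    ny, nx = h.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k, l = np.meshgrid(kx, ky)
    sigma = u * k + v * l                                  # intrinsic frequency
    h_hat = np.fft.fft2(h)

    # hydrostatic vertical wavenumber (zeros of sigma handled below)
    with np.errstate(divide="ignore", invalid="ignore"):
        m = np.sign(sigma) * nm * np.sqrt(k**2 + l**2) / np.abs(sigma)
    m = np.nan_to_num(m)

    transfer = (cw * 1j * sigma /
                ((1 - 1j * m * hw) * (1 + 1j * sigma * tau_c) * (1 + 1j * sigma * tau_f)))
    p = np.real(np.fft.ifft2(transfer * h_hat)) + p_background
    return np.maximum(p, 0.0)

# Idealised Gaussian hill on a 1 km grid with westerly flow
n, dx = 128, 1000.0
x = (np.arange(n) - n / 2) * dx
xx, yy = np.meshgrid(x, x)
h = 1500.0 * np.exp(-(xx**2 + yy**2) / (2 * (10e3) ** 2))
p = lt_precip(h, dx, u=10.0, v=0.0)
print("max precip (mm/h):", 3600 * p.max())
```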

  12. Non-linear modeling of active biohybrid materials

    Paetsch, C.

    2013-11-01

    Recent advances in engineered muscle tissue attached to a synthetic substrate motivate the development of appropriate constitutive and numerical models. Applications of active materials can be expanded by using robust, non-mammalian muscle cells, such as those of Manduca sexta. In this study, we propose a model to assist in the analysis of biohybrid constructs by generalizing a recently proposed constitutive law for Manduca muscle tissue. The continuum model accounts (i) for the stimulation of muscle fibers by introducing multiple stress-free reference configurations for the active and passive states and (ii) for the hysteretic response by specifying a pseudo-elastic energy function. A simple example representing uniaxial loading-unloading is used to validate and verify the characteristics of the model. Then, based on experimental data of muscular thin films, a more complex case shows the qualitative potential of Manduca muscle tissue in active biohybrid constructs. © 2013 Elsevier Ltd. All rights reserved.

  13. Eddy current modeling in linear and nonlinear multifilamentary composite materials

    Menana, Hocine; Farhat, Mohamad; Hinaje, Melika; Berger, Kevin; Douine, Bruno; Lévêque, Jean

    2018-04-01

    In this work, a numerical model is developed for a rapid computation of eddy currents in composite materials, adaptable for both carbon fiber reinforced polymers (CFRPs) for NDT applications and multifilamentary high temperature superconductive (HTS) tapes for AC loss evaluation. The proposed model is based on an integro-differential formulation in terms of the electric vector potential in the frequency domain. The high anisotropy and the nonlinearity of the considered materials are easily handled in the frequency domain.

  14. Operator-based linearization for efficient modeling of geothermal processes

    Khait, M.; Voskov, D.V.

    2018-01-01

    Numerical simulation is one of the most important tools required for financial and operational management of geothermal reservoirs. The modern geothermal industry is challenged to run large ensembles of numerical models for uncertainty analysis, causing simulation performance to become a critical issue. Geothermal reservoir modeling requires the solution of governing equations describing the conservation of mass and energy. The robust, accurate and computationally efficient implementation of ...

  15. Evaluating significance in linear mixed-effects models in R.

    Luke, Steven G

    2017-08-01

    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
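
    For illustration, the following Python sketch (statsmodels rather than lme4; the Kenward-Roger and Satterthwaite approximations themselves are provided by R packages such as pbkrtest and lmerTest) contrasts the two quick methods discussed above, the Wald t-as-z p value and the likelihood ratio test, on simulated data.

```python
# Simulate a simple random-intercept design and compute two p-values for the
# fixed-effect slope: Wald "t-as-z" and a likelihood-ratio test.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import norm, chi2

rng = np.random.default_rng(42)
n_subj, n_trial = 20, 10
subj = np.repeat(np.arange(n_subj), n_trial)
x = rng.normal(size=n_subj * n_trial)
subj_intercept = rng.normal(scale=1.0, size=n_subj)[subj]
y = 0.3 * x + subj_intercept + rng.normal(scale=1.0, size=n_subj * n_trial)
df = pd.DataFrame({"y": y, "x": x, "subj": subj})

full = smf.mixedlm("y ~ x", df, groups=df["subj"]).fit(reml=False)
null = smf.mixedlm("y ~ 1", df, groups=df["subj"]).fit(reml=False)

# Wald "t-as-z" p-value for the slope
z = full.params["x"] / full.bse["x"]
p_wald = 2 * norm.sf(abs(z))

# Likelihood-ratio test (models fitted by ML, not REML, for the LRT)
lr = 2 * (full.llf - null.llf)
p_lrt = chi2.sf(lr, df=1)

print(f"Wald t-as-z p = {p_wald:.4f}, LRT p = {p_lrt:.4f}")
```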

  16. Linear and quadrature models for data from threshold measurements of the transient visual system

    Brinker, den A.C.

    1986-01-01

    In this paper two models are considered for the transient visual system at threshold. One is a linear model and the other a model containing a quadrature element. Both models are commonly used, based on evidence from different experimental sources. It is shown that both models act in a similar fashion
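
    A toy sketch of the distinction, using an arbitrary band-pass filter as a stand-in for the transient channel (none of the filters or stimuli below are those of the paper):

```python
# Contrast a purely linear detector with one containing a quadrature element:
# the linear detector reads the signed filter output, while the quadrature
# detector reads the envelope from the filter output and its Hilbert transform.
import numpy as np
from scipy.signal import butter, lfilter, hilbert

fs = 1000.0                                   # sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
stimulus = np.sin(2 * np.pi * 8 * t) * np.exp(-((t - 0.5) ** 2) / 0.02)

# a band-pass filter standing in for the channel's temporal filter
b, a = butter(2, [5, 20], btype="bandpass", fs=fs)
linear_response = lfilter(b, a, stimulus)

# quadrature model: envelope (energy) of the analytic signal
envelope = np.abs(hilbert(linear_response))

print("peak linear response:    ", np.max(np.abs(linear_response)).round(3))
print("peak quadrature envelope:", np.max(envelope).round(3))
```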

  17. A versatile curve-fit model for linear to deeply concave rank abundance curves

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    A new, flexible curve-fit model for linear to concave rank abundance curves was conceptualized and validated using observational data. The model links the geometric-series model and log-series model and can also fit deeply concave rank abundance curves. The model is based, in an unconventional way
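
    At the geometric-series end of the spectrum the fit reduces to a log-linear regression on rank, as sketched below with hypothetical abundances; the paper's full model, which interpolates towards log-series and deeply concave shapes, is not reproduced here.

```python
# Fit the geometric-series end of a rank abundance curve: under that model the
# log of relative abundance declines linearly with rank, so the decay constant
# k can be recovered from a log-linear fit.
import numpy as np

# hypothetical species abundances, sorted from most to least abundant
abundance = np.array([120, 61, 33, 15, 8, 4, 2, 1], dtype=float)
rank = np.arange(1, abundance.size + 1)
rel = abundance / abundance.sum()

# log(rel) ~ intercept + slope * (rank - 1); for a geometric series slope = log(1 - k)
slope, intercept = np.polyfit(rank - 1, np.log(rel), 1)
k = 1.0 - np.exp(slope)
print(f"estimated geometric-series parameter k = {k:.3f}")

fitted = np.exp(intercept) * (1.0 - k) ** (rank - 1)
print("fitted relative abundances:", np.round(fitted, 3))
```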

  18. Business models for telehealth in the US: analyses and insights

    Pereira F

    2017-02-01

    Francis Pereira, Data Sciences and Operations, Marshall School of Business, University of Southern California, Los Angeles, CA, USA. Abstract: A growing shortage of medical doctors and nurses globally, coupled with increasing life expectancy, is generating greater cost pressures on health care in the US and elsewhere. In this respect, telehealth can help alleviate these pressures, as well as extend medical services to underserved or unserved areas. However, its relatively slow adoption in the US, as well as in other markets, suggests the presence of barriers and challenges. The use of a business model framework helps identify the value proposition of telehealth as well as these challenges, which include identifying the right revenue model, the organizational structure, and, perhaps more importantly, the stakeholders in the telehealth ecosystem. Successful and cost-effective deployment of telehealth requires a redefinition of the ecosystem and a comprehensive review of all benefits and beneficiaries of such a system; hence a reassessment of all the stakeholders that could benefit from it, beyond the traditional patient–health provider–insurer model, and thus of “who should pay” for such a system, is needed, and the driving efforts of a “keystone” player in developing this initiative would help. Keywords: telehealth, business model framework, stakeholders, ecosystem, VISOR business model

  19. Genetic demixing and evolution in linear stepping stone models

    Korolev, K. S.; Avlund, Mikkel; Hallatschek, Oskar; Nelson, David R.

    2010-04-01

    Results for mutation, selection, genetic drift, and migration in a one-dimensional continuous population are reviewed and extended. The population is described by a continuous limit of the stepping stone model, which leads to the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation with additional terms describing mutations. Although the stepping stone model was first proposed for population genetics, it is closely related to “voter models” of interest in nonequilibrium statistical mechanics. The stepping stone model can also be regarded as an approximation to the dynamics of a thin layer of actively growing pioneers at the frontier of a colony of micro-organisms undergoing a range expansion on a Petri dish. The population tends to segregate into monoallelic domains. This segregation slows down genetic drift and selection because these two evolutionary forces can only act at the boundaries between the domains; the effects of mutation, however, are not significantly affected by the segregation. Although fixation in the neutral well-mixed (or “zero-dimensional”) model occurs exponentially in time, it occurs only algebraically fast in the one-dimensional model. An unusual sublinear increase is also found in the variance of the spatially averaged allele frequency with time. If selection is weak, selective sweeps occur exponentially fast in both well-mixed and one-dimensional populations, but the time constants are different. The relatively unexplored problem of evolutionary dynamics at the edge of an expanding circular colony is studied as well. Also reviewed is how the observed patterns of genetic diversity can be used for statistical inference, and the differences between the well-mixed and one-dimensional models are highlighted. Although the focus is on two alleles or variants, q-allele Potts-like models of gene segregation are considered as well. Most of the analytical results are checked with simulations and could be tested against recent spatial
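
    A minimal simulation sketch of the neutral one-dimensional dynamics (illustrative parameters, not the continuum model of the paper) shows the coarsening into monoallelic domains:

```python
# One-dimensional, two-allele stepping stone (voter-type) model with neutral
# dynamics: at each step a random deme copies the allele of a random neighbour.
# Monoallelic domains coarsen, illustrating spatial "genetic demixing".
import numpy as np

rng = np.random.default_rng(7)
L, steps = 200, 200_000
state = rng.integers(0, 2, size=L)             # alleles 0/1, well-mixed start

for _ in range(steps):
    i = rng.integers(L)
    j = (i + rng.choice([-1, 1])) % L          # periodic boundary
    state[i] = state[j]                        # resample from a neighbour (genetic drift)

# count boundaries between monoallelic domains
boundaries = np.count_nonzero(state != np.roll(state, 1))
print("allele-1 frequency:", state.mean())
print("number of domain boundaries:", boundaries)
```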

  20. Reduced-order modelling of parameter-dependent, linear and nonlinear dynamic partial differential equation models.

    Shah, A A; Xing, W W; Triantafyllidis, V

    2017-04-01

    In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach.
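
    The offline POD stage, building a reduced basis from snapshots via the SVD, can be sketched as follows; the toy parameter-dependent field stands in for a PDE solution, and the Bayesian regression and manifold-learning emulation steps of the paper are not included.

```python
# Collect snapshots of a parameter-dependent solution, extract an orthonormal
# POD basis by SVD, and project a new snapshot onto the leading modes.
import numpy as np

def solve(mu, x):
    """Toy parameter-dependent 'solution' used to generate snapshots."""
    return np.exp(-mu * x) * np.sin(np.pi * x)

x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 3.0, 25)
snapshots = np.column_stack([solve(mu, x) for mu in params])   # (n_x, n_snap)

# POD basis from the thin SVD of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999) + 1)    # modes capturing 99.99% of energy
basis = U[:, :r]

# project a solution at a new parameter value onto the reduced basis
u_new = solve(1.7, x)
u_rom = basis @ (basis.T @ u_new)
err = np.linalg.norm(u_new - u_rom) / np.linalg.norm(u_new)
print(f"retained modes: {r}, relative error: {err:.2e}")
```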